<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.reliawiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sharon+Honecker</id>
	<title>ReliaWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.reliawiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sharon+Honecker"/>
	<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php/Special:Contributions/Sharon_Honecker"/>
	<updated>2026-04-25T18:48:23Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.0</generator>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=One_Factor_Designs&amp;diff=65269</id>
		<title>One Factor Designs</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=One_Factor_Designs&amp;diff=65269"/>
		<updated>2017-08-29T00:07:39Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Box-Cox Method */ replaced recommended transformation picture with html table&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:Doebook|5}}&lt;br /&gt;
As explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], the analysis of observational studies involves the use of regression models. The analysis of experimental studies involves the use of analysis of variance (ANOVA) models. For a comparison of the two models see [[One_Factor_Designs#Fitting_ANOVA_Models|Fitting ANOVA Models]]. In single factor experiments, ANOVA models are used to compare the mean response values at different levels of the factor. Each level of the factor is investigated to see if the response is significantly different from the response at other levels of the factor.  The analysis of single factor experiments is often referred to as &#039;&#039;one-way ANOVA&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
To illustrate the use of ANOVA models in the analysis of experiments, consider a single factor experiment where the analyst wants to see if the surface finish of certain parts is affected by the speed of a lathe machine. Data is collected for three speeds (or three treatments). Each treatment is replicated four times. Therefore, this experiment design is balanced. Surface finish values recorded using randomization are shown in the following table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.1.png|center|410px|Surface finish values for three speeds of a lathe machine.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be stated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}={{\mu }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model assumes that the response at each factor level, &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, is the sum of the mean response at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, and a random error term, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. The subscript &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; denotes the factor level while the subscript &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; denotes the replicate. If there are &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of the factor and &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates at each level then &amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j=1,2,...,m\,\!&amp;lt;/math&amp;gt;. The random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are assumed to be normally and independently distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. Therefore, the response at each level can be thought of as a normally distributed population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and constant variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. The equation given above is referred to as the &#039;&#039;means model&#039;&#039;.&lt;br /&gt;
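The assumptions of the means model can be illustrated with a short simulation; the treatment means and error variance below are arbitrary illustration values, not estimates from the lathe data.

```python
import random

# Simulate the means model Y_ij = mu_i + eps_ij for 3 factor levels and
# 4 replicates, with errors drawn from N(0, sigma^2).
# The level means and sigma are made-up illustration values.
random.seed(0)

mu = {1: 8.5, 2: 14.0, 3: 20.0}   # hypothetical treatment means mu_i
sigma = 2.0                       # common error standard deviation

# y[(i, j)] is the response at level i, replicate j
y = {(i, j): mu[i] + random.gauss(0.0, sigma)
     for i in mu for j in range(1, 5)}

print(len(y))  # 12 observations in a balanced design
```

Each level's responses form a normal population centered at its own mean, with the same spread at every level, which is exactly the constant-variance assumption stated above.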
&lt;br /&gt;
The ANOVA model can also be written using &amp;lt;math&amp;gt;{{\mu }_{i}}=\mu +{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the effect due to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Such an ANOVA model is called the &#039;&#039;effects model&#039;&#039;. In the effects model, the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, represent the deviations from the overall mean, &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;. Therefore, the following constraint exists on the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;s: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Fitting ANOVA Models===&lt;br /&gt;
&lt;br /&gt;
To fit ANOVA models and carry out hypothesis testing in single factor experiments, it is convenient to express the ANOVA model in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt; (that was used for multiple linear regression models in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This can be done as shown next. Using the effects model, the ANOVA model for the single factor experiment in the first table can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment effect. There are three treatments in the first table (500, 600 and 700). Therefore, there are three treatment effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{3}}\,\!&amp;lt;/math&amp;gt;. The following constraint exists for these effects:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   } {{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the first treatment, the ANOVA model for the single factor experiment in the above table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{1j}}=\mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+0\cdot {{\tau }_{3}}+{{\epsilon }_{1j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;, the model for the first treatment is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}-0\cdot ({{\tau }_{1}}+{{\tau }_{2}})+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{or   }{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Models for the second and third treatments can be obtained in a similar way. The models for the three treatments are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{First Treatment}: &amp;amp; {{Y}_{1j}}=1\cdot \mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{Second Treatment}: &amp;amp; {{Y}_{2j}}=1\cdot \mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{2j}} \\ &lt;br /&gt;
\text{Third Treatment}: &amp;amp; {{Y}_{3j}}=1\cdot \mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{3j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The coefficients of the treatment effects &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; can be expressed using two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the indicator variables &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, the ANOVA model for the data in the first table now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{x}_{1}}\cdot {{\tau }_{1}}+{{x}_{2}}\cdot {{\tau }_{2}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation can be rewritten by including subscripts &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; (for the level of the factor) and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; (for the replicate number) as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{x}_{i1}}\cdot {{\tau }_{1}}+{{x}_{i2}}\cdot {{\tau }_{2}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation given above represents the &amp;quot;regression version&amp;quot; of the ANOVA model.&lt;br /&gt;
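As a sketch, the indicator-variable coding above can be written out programmatically; the speed-to-code mapping follows the treatment effects listed earlier (the speed values 500, 600 and 700 come from the lathe example).

```python
# Effect coding for the three lathe speeds, matching the indicator
# variables x1 and x2 defined above: tau1 -> (1, 0), tau2 -> (0, 1),
# tau3 -> (-1, -1).
EFFECT_CODES = {500: (1, 0), 600: (0, 1), 700: (-1, -1)}

def design_row(speed):
    """Row [1, x1, x2] of the X matrix for one observation."""
    x1, x2 = EFFECT_CODES[speed]
    return [1, x1, x2]

# One replicate at each speed:
X = [design_row(s) for s in (500, 600, 700)]
print(X)  # [[1, 1, 0], [1, 0, 1], [1, -1, -1]]
```

Stacking one such row per observation yields the full X matrix shown in the next section.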
&lt;br /&gt;
&lt;br /&gt;
===Treat Numerical Factors as Qualitative or Quantitative?===&lt;br /&gt;
&lt;br /&gt;
It can be seen from the equation given above that in an ANOVA model each factor is treated as a qualitative factor. In the present example the factor, lathe speed, is a quantitative factor with three levels. But the ANOVA model treats this factor as a qualitative factor with three levels. Therefore, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required to represent this factor. &lt;br /&gt;
&lt;br /&gt;
Note that in a regression model a variable can be treated as either a quantitative or a qualitative variable. The factor, lathe speed, would be used as a quantitative factor and represented with a single predictor variable in a regression model. For example, if a first order model were to be fitted to the data in the first table, then the regression model would take the form &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. If a second order regression model were to be fitted, the regression model would be &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}x_{i1}^{2}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. Notice that unlike these regression models, the regression version of the ANOVA model does not make any assumption about the nature of the relationship between the response and the factor being investigated.&lt;br /&gt;
&lt;br /&gt;
The choice of treating a particular factor as a quantitative or qualitative variable depends on the objective of the experimenter. In the case of the data of the first table, the objective of the experimenter is to compare the levels of the factor to see if a change in the levels leads to a significant change in the response. The objective is not to make predictions on the response for a given level of the factor. Therefore, the factor is treated as a qualitative factor in this case. If the objective of the experimenter were prediction or optimization, the nature of the relationship between the factor, lathe speed, and the response, surface finish, would be of interest, and the factor should be modeled as a quantitative factor to obtain accurate predictions.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;i&amp;gt;y&amp;lt;/i&amp;gt; = &amp;lt;i&amp;gt;X&amp;amp;beta;&amp;lt;/i&amp;gt; + &amp;lt;i&amp;gt;&amp;amp;epsilon;&amp;lt;/i&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be expanded for the three treatments and four replicates of the data in the first table as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{11}}= &amp;amp; 6=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{11}}\text{    Level 1, Replicate 1} \\ &lt;br /&gt;
{{Y}_{21}}= &amp;amp; 13=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{21}}\text{  Level 2, Replicate 1} \\ &lt;br /&gt;
{{Y}_{31}}= &amp;amp; 23=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{31}}\text{  Level 3, Replicate 1} \\ &lt;br /&gt;
{{Y}_{12}}= &amp;amp; 13=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{12}}\text{  Level 1, Replicate 2} \\ &lt;br /&gt;
{{Y}_{22}}= &amp;amp; 16=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{22}}\text{  Level 2, Replicate 2} \\ &lt;br /&gt;
{{Y}_{32}}= &amp;amp; 20=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{32}}\text{  Level 3, Replicate 2} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; ... \\ &lt;br /&gt;
{{Y}_{34}}= &amp;amp; 18=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{34}}\text{  Level 3, Replicate 4}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The corresponding matrix notation is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{11}}  \\&lt;br /&gt;
   {{Y}_{21}}  \\&lt;br /&gt;
   {{Y}_{31}}  \\&lt;br /&gt;
   {{Y}_{12}}  \\&lt;br /&gt;
   {{Y}_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{34}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y= &amp;amp; X\beta +\epsilon  \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   23  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   16  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The matrices &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; are used in the calculation of the sum of squares in the next section. The data in the first table can be entered into the DOE folio as shown in the figure below.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_1.png|center|671px|Single factor experiment design for the data in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Hypothesis Test in Single Factor Experiments===&lt;br /&gt;
&lt;br /&gt;
The hypothesis test in single factor experiments examines the ANOVA model to see if the response at any level of the investigated factor is significantly different from that at the other levels. If no level produces a significantly different response, then it can be concluded that the investigated factor does not affect the response. The test on the ANOVA model is carried out by checking to see if any of the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, are non-zero. The test is similar to the test of significance of regression mentioned in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] in the context of regression models. The hypotheses statements for this test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0 \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\tau }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test for &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is carried out using the following statistic:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{TR}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the ANOVA model and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. Note that in the case of ANOVA models we use the notation &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment mean square) for the model mean square and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment sum of squares) for the model sum of squares (instead of &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression mean square, and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression sum of squares, used in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This is done to indicate that the model under consideration is the ANOVA model and not the regression model. The calculations to obtain &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; are identical to the calculations to obtain &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt; explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The sum of squares to obtain the statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt; can be calculated as explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. Using the data in the first table, the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.1667 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.1667 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.1667  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 232.1667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the number of levels of the factor, &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; represents the replicates at each level, &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; represents the vector of the response values, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; represents the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; represents the matrix of ones. (For details on each of these terms, refer to [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]].)&lt;br /&gt;
Since two effect terms, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are used in the regression version of the ANOVA model, the number of degrees of freedom associated with the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, is two.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be obtained as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.9167 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.9167 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.9167  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 306.6667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;I\,\!&amp;lt;/math&amp;gt; is the identity matrix. Since there are 12 data points in all, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; is 11. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{T}})=11\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 306.6667-232.1667 \\ &lt;br /&gt;
= &amp;amp; 74.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 11-2 \\ &lt;br /&gt;
= &amp;amp; 9  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
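The sum-of-squares partition above can be checked numerically. The sketch below uses the treatment-mean form of the formulas, which is numerically equivalent to the projection-matrix expressions &amp;lt;math&amp;gt;{{y}^{\prime }}[H-(1/{{n}_{a}}m)J]y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{y}^{\prime }}[I-(1/{{n}_{a}}m)J]y\,\!&amp;lt;/math&amp;gt;, on a small hypothetical balanced dataset (3 levels, 2 replicates), not the surface-finish data.

```python
# Treatment, total and error sums of squares for a balanced one-way
# layout. The data below are made-up illustration values.
data = {1: [6.0, 13.0], 2: [13.0, 16.0], 3: [23.0, 20.0]}

all_y = [v for ys in data.values() for v in ys]
grand_mean = sum(all_y) / len(all_y)

# SS_T: deviations of every observation about the grand mean
ss_t = sum((v - grand_mean) ** 2 for v in all_y)

# SS_TR: replicate-weighted deviations of treatment means about the grand mean
ss_tr = sum(len(ys) * (sum(ys) / len(ys) - grand_mean) ** 2
            for ys in data.values())

# SS_E by subtraction, as in the text
ss_e = ss_t - ss_tr
print(round(ss_tr, 4), round(ss_t, 4), round(ss_e, 4))
```

The identity SS_T = SS_TR + SS_E holds by construction here, mirroring the subtraction used for the lathe data.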
&lt;br /&gt;
&lt;br /&gt;
The test statistic can now be calculated using the equation given in [[One_Factor_Designs#Hypothesis_Test_in_Single_Factor_Experiments|Hypothesis Test in Single Factor Experiments]] as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{f}_{0}}= &amp;amp; \frac{M{{S}_{TR}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{TR}}/dof(S{{S}_{TR}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{232.1667/2}{74.5/9} \\ &lt;br /&gt;
= &amp;amp; 14.0235  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
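The same ratio can be reproduced directly from the sums of squares and degrees of freedom computed above for the lathe data:

```python
# F statistic for the one-way ANOVA, using SS_TR = 232.1667 (2 dof) and
# SS_E = 74.5 (9 dof) from this section.
ss_tr, dof_tr = 232.1667, 2
ss_e, dof_e = 74.5, 9

ms_tr = ss_tr / dof_tr   # treatment mean square
ms_e = ss_e / dof_e      # error mean square
f0 = ms_tr / ms_e

print(round(f0, 4))  # 14.0235
```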
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value for the statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 9 degrees of freedom in the denominator is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{f}_{0}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.9983 \\ &lt;br /&gt;
= &amp;amp; 0.0017  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that change in the lathe speed has a significant effect on the surface finish. The Weibull++ DOE folio displays these results in the ANOVA table, as shown in the figure below. The values of S and R-sq are the standard error and the coefficient of determination for the model, respectively. These values are explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] and indicate how well the model fits the data. The values in the figure below indicate that the fit of the ANOVA model is fair.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_2.png|center|748px|ANOVA table for the data in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the &#039;&#039;i&#039;&#039;&amp;lt;sup&amp;gt;th&amp;lt;/sup&amp;gt; Treatment Mean===&lt;br /&gt;
&lt;br /&gt;
The response at each treatment of a single factor experiment can be assumed to be a normal population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt; provided that the error terms can be assumed to be normally distributed. A point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is the average response at each treatment, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;. Since this is a sample average, the associated variance is &amp;lt;math&amp;gt;{{\sigma }^{2}}/{{m}_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;{{m}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of replicates at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment. Therefore, the confidence interval on &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution. Recall from [[Statistical_Background_on_DOE| Statistical Background on DOE]] (inference on population mean when variance is unknown) that: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}/{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{M{{S}_{E}}/{{m}_{i}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
has a &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with degrees of freedom &amp;lt;math&amp;gt;=dof(S{{S}_{E}})\,\!&amp;lt;/math&amp;gt;. Therefore, a 100 (&amp;lt;math&amp;gt;1-\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment mean, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, for the first treatment of the lathe speed we have:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}= &amp;amp; {{{\bar{y}}}_{1\cdot }} \\ &lt;br /&gt;
= &amp;amp; \frac{6+13+7+8}{4} \\ &lt;br /&gt;
= &amp;amp; 8.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the DOE folio, this value is displayed as the Estimated Mean for the first level, as shown in the Data Summary table in the figure below. The value displayed as the standard deviation for this level is simply the sample standard deviation calculated using the observations corresponding to this level.  The 90% confidence interval for this treatment is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{M{{S}_{E}}}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833\sqrt{\frac{(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833(1.44) \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 2.64  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}\,\!&amp;lt;/math&amp;gt; are 5.9 and 11.1, respectively. &lt;br /&gt;
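The interval can be reproduced in a few lines; the critical value &amp;lt;math&amp;gt;{{t}_{0.05,9}}=1.833\,\!&amp;lt;/math&amp;gt; is taken from the worked example above.

```python
import math

# 90% confidence interval on the first treatment mean for the lathe data.
y1_bar = (6 + 13 + 7 + 8) / 4   # sample mean of the first treatment
ms_e = 74.5 / 9                 # error mean square
m = 4                           # replicates per treatment
t_crit = 1.833                  # t_{0.05, 9} from a t table

half_width = t_crit * math.sqrt(ms_e / m)
lower, upper = y1_bar - half_width, y1_bar + half_width
print(round(lower, 1), round(upper, 1))  # 5.9 11.1
```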
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_3.png|center|594px|Data Summary table for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the Difference in Two Treatment Means===&lt;br /&gt;
&lt;br /&gt;
The confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is used to compare two levels of the factor at a given significance. If the confidence interval does not include the value of zero, it is concluded that the two levels of the factor are significantly different. The point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt;. The variance for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})= &amp;amp; var({{{\bar{y}}}_{i\cdot }})+var({{{\bar{y}}}_{j\cdot }}) \\ &lt;br /&gt;
= &amp;amp; {{\sigma }^{2}}/{{m}_{i}}+{{\sigma }^{2}}/{{m}_{j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For balanced designs all &amp;lt;math&amp;gt;{{m}_{i}}=m\,\!&amp;lt;/math&amp;gt;. Therefore:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})=2{{\sigma }^{2}}/m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The standard deviation for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; can be obtained by taking the square root of &amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})\,\!&amp;lt;/math&amp;gt; and is referred to as the pooled standard error:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for the difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2{{{\hat{\sigma }}}^{2}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2M{{S}_{E}}/m}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then a 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, an estimate of the difference in the first and second treatment means of the lathe speed, &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}-{{{\hat{\mu }}}_{2}}= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }} \\ &lt;br /&gt;
= &amp;amp; 8.5-13.25 \\ &lt;br /&gt;
= &amp;amp; -4.75  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The pooled standard error for this difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2M{{S}_{E}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 2.0344  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To test &amp;lt;math&amp;gt;{{H}_{0}}:{{\mu }_{1}}-{{\mu }_{2}}=0\,\!&amp;lt;/math&amp;gt;, the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}-({{\mu }_{1}}-{{\mu }_{2}})}{\sqrt{2M{{S}_{E}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75-(0)}{\sqrt{\tfrac{2(74.5/9)}{4}}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75}{2.0344} \\ &lt;br /&gt;
= &amp;amp; -2.3348  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the DOE folio, the value of the statistic is displayed in the Mean Comparisons table under the column T Value as shown in the figure below. The 90% confidence interval on the difference &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833\sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833(2.0344) \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 3.729  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hence the 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-8.479\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-1.021\,\!&amp;lt;/math&amp;gt;. These values are displayed under the Low CI and High CI columns in the following figure. Since the confidence interval for this pair of means does not include zero, it can be concluded that these means are significantly different at 90% confidence. This conclusion can also be arrived at using the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value, noting that the hypothesis is two-sided. The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to the statistic &amp;lt;math&amp;gt;{{t}_{0}}=-2.3348\,\!&amp;lt;/math&amp;gt;, based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 9 degrees of freedom, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 2\times (1-P(T\le |{{t}_{0}}|)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-P(T\le 2.3348)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-0.9778) \\ &lt;br /&gt;
= &amp;amp; 0.0444  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value is less than 0.1, the means are significantly different at 90% confidence. Bounds on the difference between other treatment pairs can be obtained in a similar manner, and it is concluded that all treatments are significantly different.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_4.png|center|772px|Mean Comparisons table for the data in the first table.|link=]]&lt;br /&gt;
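The full comparison of the first two treatments (pooled standard error, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic and confidence limits) can be sketched the same way, again using only the quantities quoted in the text:

```python
import math

# Quantities from the lathe-speed example
y_bar_1, y_bar_2 = 8.5, 13.25
mse = 74.5 / 9       # MS_E
m = 4                # replicates per level
t_crit = 1.833       # t_{0.05, 9}

diff = y_bar_1 - y_bar_2                 # point estimate of mu_1 - mu_2
pooled_se = math.sqrt(2 * mse / m)       # pooled standard error
t0 = diff / pooled_se                    # t statistic for H0: mu_1 = mu_2
lower = diff - t_crit * pooled_se
upper = diff + t_crit * pooled_se
print(round(t0, 4), round(lower, 3), round(upper, 3))  # -2.3348 -8.479 -1.021
```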
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
&lt;br /&gt;
Plots of residuals, &amp;lt;math&amp;gt;{{e}_{ij}}\,\!&amp;lt;/math&amp;gt;, similar to the ones discussed in the previous chapters on regression, are used to ensure that the assumptions associated with the ANOVA model are not violated.  The ANOVA model assumes that the random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are normally and independently distributed with the same variance for each treatment. The normality assumption can be checked by obtaining a normal probability plot of the residuals. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equality of variance is checked by plotting residuals against the treatments and the treatment averages, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;  (also referred to as fitted values), and inspecting the spread in the residuals. If a pattern is seen in these plots, a suitable transformation should be applied to the response to restore variance equality. Box-Cox transformations are discussed in the next section. To check for independence of the random error terms, residuals are plotted against time or run order to ensure that no pattern exists in these plots. Residual plots for the given example are shown in the following two figures. The plots show that the assumptions associated with the ANOVA model are not violated.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_5.png|center|650px|Normal probability plot of residuals for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_6.png|center|650px|Plot of residuals against fitted values for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Box-Cox Method===&lt;br /&gt;
&lt;br /&gt;
Transformations on the response may be used when residual plots for an experiment show a pattern. This indicates that the equality of variance does not hold for the residuals of the given model. The Box-Cox method can be used to automatically identify a suitable power transformation for the data based on the following relationship:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}^{*}}={{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is determined using the given data such that &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is minimized. The values of &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; are not used as is because of issues related to calculation or comparison of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values for different values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. For example, for &amp;lt;math&amp;gt;\lambda =0\,\!&amp;lt;/math&amp;gt; all response values will become 1. Therefore, the following relationship is used to obtain &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y^{\lambda }=\left\{ &lt;br /&gt;
\begin{array}{cc}&lt;br /&gt;
\frac{y^{\lambda }-1}{\lambda \dot{y}^{\lambda -1}} &amp;amp; \lambda \neq 0 \\ &lt;br /&gt;
\dot{y}\ln y &amp;amp; \lambda =0&lt;br /&gt;
\end{array}&lt;br /&gt;
\right.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\dot{y}=\exp \left[ \frac{1}{n}\sum\limits_{i=1}^{n}\ln \left( y_{i}\right) \right]\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Once all &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; values are obtained for a value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, the corresponding &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for these values is obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;. The process is repeated for a number of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values to obtain a plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then the value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is selected as the required transformation for the given data. The DOE folio plots &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values because the range of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values is large; otherwise, all values could not be displayed on the same plot. The range of search for the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value in the software is from &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt;, because larger values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are usually not meaningful. The DOE folio also displays a recommended transformation based on the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value obtained, as shown in the following table.&lt;br /&gt;
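The scaled transformation can be sketched in Python. The helper below is a minimal illustration of the equation above (it is not ReliaSoft's implementation), with &amp;lt;math&amp;gt;\dot{y}\,\!&amp;lt;/math&amp;gt; computed as the geometric mean of the responses:

```python
import math

def box_cox(y, lam):
    """Scaled Box-Cox transform of the responses y for a given lambda."""
    # y_dot is the geometric mean of the responses
    y_dot = math.exp(sum(math.log(v) for v in y) / len(y))
    if lam == 0:
        return [y_dot * math.log(v) for v in y]
    return [(v ** lam - 1) / (lam * y_dot ** (lam - 1)) for v in y]

# For lambda = 1 the transform reduces to y - 1, leaving SS_E unchanged
print(box_cox([2.0, 4.0, 8.0], 1))  # [1.0, 3.0, 7.0]
```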
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;table border = &amp;quot;1&amp;quot; cellpadding = &amp;quot;5&amp;quot; cellspacing = &amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt; &lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;font size = &amp;quot;3&amp;quot;&amp;gt;Best Lambda&amp;lt;/font&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;font size = &amp;quot;3&amp;quot;&amp;gt;Recommended Transformation&amp;lt;/font&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;font size = &amp;quot;3&amp;quot;&amp;gt;Equation&amp;lt;/font&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;-2.5&amp;lt;\lambda \leq -1.5\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Power} \\ \lambda =-2\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\frac{1}{Y^{2}}\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;-1.5&amp;lt;\lambda \leq -0.75\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Reciprocal} \\ \lambda =-1\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\frac{1}{Y}\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;-0.75&amp;lt;\lambda \leq -0.25\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Reciprocal Square Root} \\ \lambda =-0.5\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\frac{1}{\sqrt{Y}}\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;-0.25&amp;lt;\lambda \leq 0.25\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Natural Log} \\ \lambda =0\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\ln Y\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;0.25&amp;lt;\lambda \leq 0.75\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Square Root} \\ \lambda =0.5\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\sqrt{Y}\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;0.75&amp;lt;\lambda \leq 1.5\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{None} \\ \lambda =1\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=Y\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;1.5&amp;lt;\lambda \leq 2.5\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Power} \\ \lambda =2\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=Y^{2}\,\!&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Confidence intervals on the selected &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values are also available. Let &amp;lt;math&amp;gt;S{{S}_{E}}(\lambda )\,\!&amp;lt;/math&amp;gt; be the value of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; corresponding to the selected value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then, to calculate the 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence intervals on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, we need to calculate &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}^{*}}=S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The required limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are the two values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the value &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; (on the plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;). If the limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; do not include the value of one, then the transformation is applicable for the given data.&lt;br /&gt;
Note that the power transformations are not defined for response values that are negative or zero. The DOE folio deals with negative and zero response values using the following equations, which add a suitable quantity to all of the response values if a zero or negative response value is encountered. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{rll}&lt;br /&gt;
y\left( i\right) = &amp;amp; y\left( i\right) +\left\vert y_{\min }\right\vert&lt;br /&gt;
\times 1.1 &amp;amp; \text{Negative Response} \\ &lt;br /&gt;
y\left( i\right) = &amp;amp; y\left( i\right) +1 &amp;amp; \text{Zero Response}&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here &amp;lt;math&amp;gt;{{y}_{\min }}\,\!&amp;lt;/math&amp;gt; represents the minimum response value and &amp;lt;math&amp;gt;\left| {{y}_{\min }} \right|\,\!&amp;lt;/math&amp;gt; represents the absolute value of the minimum response. &lt;br /&gt;
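A minimal sketch of this shifting rule in Python (the function name is illustrative, not part of the software):

```python
def shift_responses(y):
    """Shift responses so the power transformations are defined."""
    y_min = min(y)
    if y_min < 0:
        # Negative response present: add |y_min| * 1.1 to every value
        return [v + abs(y_min) * 1.1 for v in y]
    if y_min == 0:
        # Zero response present: add 1 to every value
        return [v + 1 for v in y]
    return list(y)  # all responses already positive
```

For example, `shift_responses([-2.0, 3.0])` yields values near 0.2 and 5.2, so every response becomes strictly positive.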
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
To illustrate the Box-Cox method, consider the experiment given in the first table. Transformed response values for various values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; can be calculated using the equation for &amp;lt;math&amp;gt;{Y}^{\lambda}\,\!&amp;lt;/math&amp;gt; given in [[One_Factor_Designs#Box-Cox_Method|Box-Cox Method]]. Knowing the hat matrix, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values corresponding to each of these &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values can easily be obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values calculated for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values between &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt; for the given data are shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \lambda  &amp;amp; {} &amp;amp; S{{S}_{E}} &amp;amp; \ln S{{S}_{E}}  \\&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   -5 &amp;amp; {} &amp;amp; 5947.8 &amp;amp; 8.6908  \\&lt;br /&gt;
   -4 &amp;amp; {} &amp;amp; 1946.4 &amp;amp; 7.5737  \\&lt;br /&gt;
   -3 &amp;amp; {} &amp;amp; 696.5 &amp;amp; 6.5461  \\&lt;br /&gt;
   -2 &amp;amp; {} &amp;amp; 282.2 &amp;amp; 5.6425  \\&lt;br /&gt;
   -1 &amp;amp; {} &amp;amp; 135.8 &amp;amp; 4.9114  \\&lt;br /&gt;
   0 &amp;amp; {} &amp;amp; 83.9 &amp;amp; 4.4299  \\&lt;br /&gt;
   1 &amp;amp; {} &amp;amp; 74.5 &amp;amp; 4.3108  \\&lt;br /&gt;
   2 &amp;amp; {} &amp;amp; 101.0 &amp;amp; 4.6154  \\&lt;br /&gt;
   3 &amp;amp; {} &amp;amp; 190.4 &amp;amp; 5.2491  \\&lt;br /&gt;
   4 &amp;amp; {} &amp;amp; 429.5 &amp;amp; 6.0627  \\&lt;br /&gt;
   5 &amp;amp; {} &amp;amp; 1057.6 &amp;amp; 6.9638  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A plot of &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for various &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values, as obtained from the DOE folio, is shown in the following figure. The value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; that gives the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is identified as 0.7841. The &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; value corresponding to this value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is 73.74. A 90% confidence interval on this &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value is calculated as follows. &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; can be obtained as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}^{*}}= &amp;amp; S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{t_{0.05,9}^{2}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{{{1.833}^{2}}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 101.27  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\ln S{{S}^{*}}=4.6178\,\!&amp;lt;/math&amp;gt;. The &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values corresponding to this value from the following figure are &amp;lt;math&amp;gt;-0.4689\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0054\,\!&amp;lt;/math&amp;gt;. Therefore, the 90% confidence limits on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-0.4689\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0054\,\!&amp;lt;/math&amp;gt;. Since the confidence limits include the value of 1, a transformation is not required for the data in the first table.&lt;br /&gt;
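The &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; threshold can be checked numerically with the values quoted above:

```python
import math

# Values from the example: SS_E at the best lambda, its dof, and t_{0.05, 9}
sse_at_lambda = 73.74
dof = 9
t_crit = 1.833

ss_star = sse_at_lambda * (1 + t_crit ** 2 / dof)
ln_ss_star = math.log(ss_star)
print(round(ss_star, 2), round(ln_ss_star, 4))  # 101.27 4.6178
```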
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:doe6_7.png|center|650px|Box-Cox power transformation plot for the data in the first table.|link=]]&amp;lt;/center&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=One_Factor_Designs&amp;diff=65268</id>
		<title>One Factor Designs</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=One_Factor_Designs&amp;diff=65268"/>
		<updated>2017-08-28T19:12:43Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Box-Cox Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:Doebook|5}}&lt;br /&gt;
As explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], the analysis of observational studies involves the use of regression models. The analysis of experimental studies involves the use of analysis of variance (ANOVA) models. For a comparison of the two models see [[One_Factor_Designs#Fitting_ANOVA_Models|Fitting ANOVA Models]]. In single factor experiments, ANOVA models are used to compare the mean response values at different levels of the factor. Each level of the factor is investigated to see if the response is significantly different from the response at other levels of the factor.  The analysis of single factor experiments is often referred to as &#039;&#039;one-way ANOVA&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
To illustrate the use of ANOVA models in the analysis of experiments, consider a single factor experiment where the analyst wants to see if the surface finish of certain parts is affected by the speed of a lathe machine. Data is collected for three speeds (or three treatments). Each treatment is replicated four times. Therefore, this experiment design is balanced. Surface finish values recorded using randomization are shown in the following table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.1.png|center|410px|Surface finish values for three speeds of a lathe machine.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be stated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}={{\mu }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model assumes that the response at each factor level, &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, is the sum of the mean response at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, and a random error term, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. The subscript &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; denotes the factor level while the subscript &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; denotes the replicate. If there are &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of the factor and &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates at each level then &amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j=1,2,...,m\,\!&amp;lt;/math&amp;gt;. The random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are assumed to be normally and independently distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. Therefore, the response at each level can be thought of as a normally distributed population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and constant variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. The equation given above is referred to as the &#039;&#039;means model&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The means model can also be written using &amp;lt;math&amp;gt;{{\mu }_{i}}=\mu +{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the effect due to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Such an ANOVA model is called the &#039;&#039;effects model&#039;&#039;.  In the effects model, the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, represent the deviations from the overall mean, &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;. Therefore, the following constraint exists on the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;s: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Fitting ANOVA Models===&lt;br /&gt;
&lt;br /&gt;
To fit ANOVA models and carry out hypothesis testing in single factor experiments, it is convenient to express the effects model in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt; (that was used for multiple linear regression models in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This can be done as shown next. Using the effects model, the ANOVA model for the single factor experiment in the first table can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment effect. There are three treatments in the first table (500, 600 and 700). Therefore, there are three treatment effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{3}}\,\!&amp;lt;/math&amp;gt;. The following constraint exists for these effects:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   } {{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the first treatment, the ANOVA model for the single factor experiment in the above table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{1j}}=\mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+0\cdot {{\tau }_{3}}+{{\epsilon }_{1j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;, the model for the first treatment is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}-0\cdot ({{\tau }_{1}}+{{\tau }_{2}})+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{or   }{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Models for the second and third treatments can be obtained in a similar way. The models for the three treatments are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{First Treatment}: &amp;amp; {{Y}_{1j}}=1\cdot \mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{Second Treatment}: &amp;amp; {{Y}_{2j}}=1\cdot \mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{2j}} \\ &lt;br /&gt;
\text{Third Treatment}: &amp;amp; {{Y}_{3j}}=1\cdot \mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{3j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The coefficients of the treatment effects &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; can be expressed using two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the indicator variables &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, the ANOVA model for the data in the first table now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{x}_{1}}\cdot {{\tau }_{1}}+{{x}_{2}}\cdot {{\tau }_{2}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation can be rewritten by including subscripts &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; (for the level of the factor) and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; (for the replicate number) as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{x}_{i1}}\cdot {{\tau }_{1}}+{{x}_{i2}}\cdot {{\tau }_{2}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation given above represents the &amp;quot;regression version&amp;quot; of the ANOVA model.&lt;br /&gt;
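The effect coding above can be collected into a small helper that builds the regression-version design matrix for the balanced lathe-speed experiment. This is an illustrative sketch, not software described in the text:

```python
# Effect coding for the three treatments: treatment -> (x1, x2).
# Treatment 3 is coded (-1, -1) so the treatment effects sum to zero.
codes = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}

def design_matrix(treatments):
    """Return the rows (1, x1, x2) of the X matrix in y = X*beta + epsilon."""
    return [(1,) + codes[t] for t in treatments]

# Four replicates at each of the three lathe speeds
X = design_matrix([1] * 4 + [2] * 4 + [3] * 4)
print(len(X), X[0], X[-1])  # 12 (1, 1, 0) (1, -1, -1)
```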
&lt;br /&gt;
&lt;br /&gt;
===Treat Numerical Factors as Qualitative or Quantitative?===&lt;br /&gt;
&lt;br /&gt;
It can be seen from the equation given above that in an ANOVA model each factor is treated as a qualitative factor. In the present example the factor, lathe speed, is a quantitative factor with three levels. But the ANOVA model treats this factor as a qualitative factor with three levels. Therefore, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required to represent this factor. &lt;br /&gt;
&lt;br /&gt;
Note that in a regression model a variable can be treated as either a quantitative or a qualitative variable. The factor, lathe speed, would be used as a quantitative factor and represented with a single predictor variable in a regression model. For example, if a first order model were to be fitted to the data in the first table, then the regression model would take the form &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. If a second order regression model were to be fitted, the regression model would be &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}x_{i1}^{2}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. Notice that unlike these regression models, the regression version of the ANOVA model does not make any assumption about the nature of the relationship between the response and the factor being investigated.&lt;br /&gt;
&lt;br /&gt;
The choice of treating a particular factor as a quantitative or qualitative variable depends on the objective of the experimenter. In the case of the data of the first table, the objective of the experimenter is to compare the levels of the factor to see if a change in the levels leads to a significant change in the response. The objective is not to make predictions on the response for a given level of the factor. Therefore, the factor is treated as a qualitative factor in this case. If the objective of the experimenter were prediction or optimization, the experimenter would focus on aspects such as the nature of the relationship between the factor, lathe speed, and the response, surface finish. In that case, the factor would be modeled as a quantitative factor so that accurate predictions could be made.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;i&amp;gt;Y&amp;lt;/i&amp;gt; = &amp;lt;i&amp;gt;X&amp;amp;beta;&amp;lt;/i&amp;gt; + &amp;lt;i&amp;gt;&amp;amp;epsilon;&amp;lt;/i&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be expanded for the three treatments and four replicates of the data in the first table as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{11}}= &amp;amp; 6=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{11}}\text{    Level 1, Replicate 1} \\ &lt;br /&gt;
{{Y}_{21}}= &amp;amp; 13=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{21}}\text{  Level 2, Replicate 1} \\ &lt;br /&gt;
{{Y}_{31}}= &amp;amp; 23=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{31}}\text{  Level 3, Replicate 1} \\ &lt;br /&gt;
{{Y}_{12}}= &amp;amp; 13=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{12}}\text{  Level 1, Replicate 2} \\ &lt;br /&gt;
{{Y}_{22}}= &amp;amp; 16=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{22}}\text{  Level 2, Replicate 2} \\ &lt;br /&gt;
{{Y}_{32}}= &amp;amp; 20=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{32}}\text{  Level 3, Replicate 2} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; ... \\ &lt;br /&gt;
{{Y}_{34}}= &amp;amp; 18=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{34}}\text{  Level 3, Replicate 4}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The corresponding matrix notation is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{11}}  \\&lt;br /&gt;
   {{Y}_{21}}  \\&lt;br /&gt;
   {{Y}_{31}}  \\&lt;br /&gt;
   {{Y}_{12}}  \\&lt;br /&gt;
   {{Y}_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{34}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y= &amp;amp; X\beta +\epsilon  \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   23  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   16  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The matrices &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; are used in the calculation of the sum of squares in the next section. The data in the first table can be entered into the DOE folio as shown in the figure below.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_1.png|center|671px|Single factor experiment design for the data in the first table.|link=]]&lt;br /&gt;
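The design matrix shown above can be constructed programmatically. A minimal Python sketch (an assumed helper, not from the text) that reproduces the rows of &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; in the same run order as the expanded equations:

```python
# Sketch: build the X matrix of y = X*beta + eps for n_a = 3 treatments and
# m = 4 replicates, rows ordered level-by-level within each replicate, as in
# the expanded equations above.  Each row is [1, x1, x2] with effect coding.
def design_matrix(n_a=3, m=4):
    rows = []
    for j in range(m):                 # replicate j+1
        for i in range(1, n_a + 1):    # level i
            if i < n_a:
                x = [0] * (n_a - 1)
                x[i - 1] = 1
            else:
                x = [-1] * (n_a - 1)   # last level: all indicators are -1
            rows.append([1] + x)       # leading 1 multiplies the grand mean mu
    return rows

X = design_matrix()
print(X[0], X[1], X[2])  # [1, 1, 0] [1, 0, 1] [1, -1, -1]
print(len(X))            # 12 rows: one per observation
```

The first three rows match the first replicate of each treatment in the matrix display above.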
&lt;br /&gt;
===Hypothesis Test in Single Factor Experiments===&lt;br /&gt;
&lt;br /&gt;
The hypothesis test in single factor experiments examines the ANOVA model to see if the response at any level of the investigated factor is significantly different from that at the other levels. If this is not the case and the response at all levels is not significantly different, then it can be concluded that the investigated factor does not affect the response. The test on the ANOVA model is carried out by checking to see if any of the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, are non-zero. The test is similar to the test of significance of regression mentioned in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] in the context of regression models. The hypotheses statements for this test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0 \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\tau }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test for &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is carried out using the following statistic:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{TR}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the ANOVA model and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. Note that in the case of ANOVA models we use the notation &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment mean square) for the model mean square and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment sum of squares) for the model sum of squares (instead of &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression mean square, and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression sum of squares, used in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This is done to indicate that the model under consideration is the ANOVA model and not the regression model. The calculations to obtain &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; are identical to the calculations to obtain &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt; explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The sum of squares to obtain the statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt; can be calculated as explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. Using the data in the first table, the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.1667 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.1667 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.1667  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 232.1667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the number of levels of the factor, &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; represents the replicates at each level, &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; represents the vector of the response values, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; represents the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; represents the matrix of ones. (For details on each of these terms, refer to [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]].)&lt;br /&gt;
Since two effect terms, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are used in the regression version of the ANOVA model, the number of degrees of freedom associated with the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, is two.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
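For a balanced one-way layout, the quadratic form &amp;lt;math&amp;gt;{{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot m})J]y\,\!&amp;lt;/math&amp;gt; reduces to the familiar between-treatment sum of squares &amp;lt;math&amp;gt;m\sum{{({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{\cdot \cdot }})}^{2}}\,\!&amp;lt;/math&amp;gt;. A Python sketch of that shortcut follows; the data values used are illustrative numbers chosen to be consistent with the sums of squares quoted in the text, since the full data table is not reproduced in this excerpt:

```python
# Sketch: for a balanced one-way layout the quadratic form
# y'[H - J/(n_a*m)]y equals m * sum_i (ybar_i - ybar)^2.
# The data below are illustrative values consistent with the quoted SS_TR;
# the original data table is not reproduced here.
groups = [[6, 13, 7, 8], [13, 16, 14, 10], [23, 20, 16, 18]]

m = len(groups[0])                          # replicates per level (balanced)
n = sum(len(g) for g in groups)             # total number of observations
grand = sum(sum(g) for g in groups) / n     # overall mean
ss_tr = m * sum((sum(g) / m - grand) ** 2 for g in groups)
print(round(ss_tr, 4))  # 232.1667
```

The result agrees with the &amp;lt;math&amp;gt;S{{S}_{TR}}=232.1667\,\!&amp;lt;/math&amp;gt; value obtained above from the hat-matrix expression.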
&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be obtained as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.9167 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.9167 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.9167  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 306.6667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;I\,\!&amp;lt;/math&amp;gt; is the identity matrix. Since there are 12 data points in all, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; is 11. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{T}})=11\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 306.6667-232.1667 \\ &lt;br /&gt;
= &amp;amp; 74.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 11-2 \\ &lt;br /&gt;
= &amp;amp; 9  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistic can now be calculated using the equation given in [[One_Factor_Designs#Hypothesis_Test_in_Single_Factor_Experiments|Hypothesis Test in Single Factor Experiments]] as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{f}_{0}}= &amp;amp; \frac{M{{S}_{TR}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{TR}}/dof(S{{S}_{TR}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{232.1667/2}{74.5/9} \\ &lt;br /&gt;
= &amp;amp; 14.0235  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value for the statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 9 degrees of freedom in the denominator is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{f}_{0}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.9983 \\ &lt;br /&gt;
= &amp;amp; 0.0017  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that change in the lathe speed has a significant effect on the surface finish. The Weibull++ DOE folio displays these results in the ANOVA table, as shown in the figure below. The values of S and R-sq are the standard error and the coefficient of determination for the model, respectively. These values are explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] and indicate how well the model fits the data. The values in the figure below indicate that the fit of the ANOVA model is fair.  &lt;br /&gt;
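The statistic and its &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value can be checked with a short Python sketch. It uses the fact that, for the special case of two numerator degrees of freedom, the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; survival function has the closed form &amp;lt;math&amp;gt;P(F&gt;f)={{(1+2f/{{d}_{2}})}^{-{{d}_{2}}/2}}\,\!&amp;lt;/math&amp;gt;, so no statistics library is needed (in general one would use a library routine):

```python
# Sketch: compute f0 and its p-value from the quoted sums of squares.
# For d1 = 2 numerator degrees of freedom the F survival function is
# P(F > f) = (1 + 2*f/d2)**(-d2/2); this shortcut holds only for d1 = 2.
ss_tr, ss_t = 232.1667, 306.6667          # values from the text
ss_e = ss_t - ss_tr                       # 74.5
dof_tr, dof_e = 2, 9
f0 = (ss_tr / dof_tr) / (ss_e / dof_e)
p_value = (1 + dof_tr * f0 / dof_e) ** (-dof_e / 2)
print(round(f0, 4), round(p_value, 4))  # 14.0235 0.0017
```

Both numbers match the ANOVA table values in the figure.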
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_2.png|center|748px|ANOVA table for the data in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the &#039;&#039;i&#039;&#039;&amp;lt;sup&amp;gt;th&amp;lt;/sup&amp;gt; Treatment Mean===&lt;br /&gt;
&lt;br /&gt;
The response at each treatment of a single factor experiment can be assumed to be a normal population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt; provided that the error terms can be assumed to be normally distributed. A point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is the average response at each treatment, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;. Since this is a sample average, the associated variance is &amp;lt;math&amp;gt;{{\sigma }^{2}}/{{m}_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;{{m}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of replicates at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment. Therefore, the confidence interval on &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution. Recall from [[Statistical_Background_on_DOE| Statistical Background on DOE]] (inference on population mean when variance is unknown) that: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}/{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{M{{S}_{E}}/{{m}_{i}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
has a &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with degrees of freedom &amp;lt;math&amp;gt;=dof(S{{S}_{E}})\,\!&amp;lt;/math&amp;gt;. Therefore, a 100 (&amp;lt;math&amp;gt;1-\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment mean, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, for the first treatment of the lathe speed we have:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}= &amp;amp; {{{\bar{y}}}_{1\cdot }} \\ &lt;br /&gt;
= &amp;amp; \frac{6+13+7+8}{4} \\ &lt;br /&gt;
= &amp;amp; 8.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the DOE folio, this value is displayed as the Estimated Mean for the first level, as shown in the Data Summary table in the figure below. The value displayed as the standard deviation for this level is simply the sample standard deviation calculated using the observations corresponding to this level.  The 90% confidence interval for this treatment is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{M{{S}_{E}}}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833\sqrt{\frac{(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833(1.44) \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 2.64  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}\,\!&amp;lt;/math&amp;gt; are 5.9 and 11.1, respectively. &lt;br /&gt;
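The interval can be reproduced with a few lines of Python; the critical value &amp;lt;math&amp;gt;{{t}_{0.05,9}}=1.833\,\!&amp;lt;/math&amp;gt; is taken from the text rather than computed:

```python
# Sketch: 90% confidence interval on the first treatment mean.
# t_{0.05,9} = 1.833 is the tabulated value quoted in the text.
ms_e = 74.5 / 9                     # error mean square
ybar1, m_i, t_crit = 8.5, 4, 1.833  # treatment mean, replicates, t critical
half_width = t_crit * (ms_e / m_i) ** 0.5
print(round(ybar1 - half_width, 1), round(ybar1 + half_width, 1))  # 5.9 11.1
```

These are the limits reported in the Data Summary table.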
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_3.png|center|594px|Data Summary table for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the Difference in Two Treatment Means===&lt;br /&gt;
&lt;br /&gt;
The confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is used to compare two levels of the factor at a given significance. If the confidence interval does not include the value of zero, it is concluded that the two levels of the factor are significantly different. The point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt;. The variance for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})= &amp;amp; var({{{\bar{y}}}_{i\cdot }})+var({{{\bar{y}}}_{j\cdot }}) \\ &lt;br /&gt;
= &amp;amp; {{\sigma }^{2}}/{{m}_{i}}+{{\sigma }^{2}}/{{m}_{j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For balanced designs all &amp;lt;math&amp;gt;{{m}_{i}}=m\,\!&amp;lt;/math&amp;gt;. Therefore:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})=2{{\sigma }^{2}}/m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The standard deviation for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; can be obtained by taking the square root of &amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})\,\!&amp;lt;/math&amp;gt; and is referred to as the pooled standard error:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for the difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2{{{\hat{\sigma }}}^{2}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2M{{S}_{E}}/m}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then a 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, an estimate of the difference in the first and second treatment means of the lathe speed, &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}-{{{\hat{\mu }}}_{2}}= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }} \\ &lt;br /&gt;
= &amp;amp; 8.5-13.25 \\ &lt;br /&gt;
= &amp;amp; -4.75  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The pooled standard error for this difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2M{{S}_{E}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 2.0344  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To test &amp;lt;math&amp;gt;{{H}_{0}}:{{\mu }_{1}}-{{\mu }_{2}}=0\,\!&amp;lt;/math&amp;gt;, the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}-({{\mu }_{1}}-{{\mu }_{2}})}{\sqrt{2M{{S}_{E}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75-(0)}{\sqrt{\tfrac{2(74.5/9)}{4}}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75}{2.0344} \\ &lt;br /&gt;
= &amp;amp; -2.3348  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the DOE folio, the value of the statistic is displayed in the Mean Comparisons table under the column T Value as shown in the figure below. The 90% confidence interval on the difference &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833\sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833(2.0344) \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 3.729  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hence the 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-8.479\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-1.021\,\!&amp;lt;/math&amp;gt;, respectively. These values are displayed under the Low CI and High CI columns in the following figure. Since the confidence interval for this pair of means does not include zero, it can be concluded that these means are significantly different at 90% confidence. This conclusion can also be arrived at using the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value, noting that the hypothesis is two-sided. The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to the statistic &amp;lt;math&amp;gt;{{t}_{0}}=-2.3348\,\!&amp;lt;/math&amp;gt;, based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 9 degrees of freedom, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 2\times (1-P(T\le |{{t}_{0}}|)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-P(T\le 2.3348)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-0.9778) \\ &lt;br /&gt;
= &amp;amp; 0.0444  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, the means are significantly different at 90% confidence. Bounds on the difference between other treatment pairs can be obtained in a similar manner and it is concluded that all treatments are significantly different.&lt;br /&gt;
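The pooled standard error, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic, and confidence limits for this comparison can be sketched in Python, again taking &amp;lt;math&amp;gt;{{t}_{0.05,9}}=1.833\,\!&amp;lt;/math&amp;gt; from the text:

```python
# Sketch: t statistic and 90% interval for mu_1 - mu_2 using the
# quantities quoted in the text (t_{0.05,9} = 1.833).
ms_e = 74.5 / 9
m, t_crit = 4, 1.833
diff = 8.5 - 13.25                          # ybar_1. - ybar_2. = -4.75
pooled_se = (2 * ms_e / m) ** 0.5           # pooled standard error, 2.0344
t0 = diff / pooled_se
lo, hi = diff - t_crit * pooled_se, diff + t_crit * pooled_se
print(round(t0, 4), round(lo, 3), round(hi, 3))  # -2.3348 -8.479 -1.021
```

The same calculation applied to the other treatment pairs reproduces the remaining rows of the Mean Comparisons table.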
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_4.png|center|772px|Mean Comparisons table for the data in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
&lt;br /&gt;
Plots of residuals, &amp;lt;math&amp;gt;{{e}_{ij}}\,\!&amp;lt;/math&amp;gt;, similar to the ones discussed in the previous chapters on regression, are used to ensure that the assumptions associated with the ANOVA model are not violated.  The ANOVA model assumes that the random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are normally and independently distributed with the same variance for each treatment. The normality assumption can be checked by obtaining a normal probability plot of the residuals. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equality of variance is checked by plotting residuals against the treatments and the treatment averages, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt; (also referred to as fitted values), and inspecting the spread in the residuals. If a pattern is seen in these plots, then this indicates the need to use a suitable transformation on the response that will ensure variance equality. Box-Cox transformations are discussed in the next section. To check for independence of the random error terms, residuals are plotted against time or run-order to ensure that a pattern does not exist in these plots. Residual plots for the given example are shown in the following two figures. The plots show that the assumptions associated with the ANOVA model are not violated.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_5.png|center|650px|Normal probability plot of residuals for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_6.png|center|650px|Plot of residuals against fitted values for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Box-Cox Method===&lt;br /&gt;
&lt;br /&gt;
Transformations on the response may be used when residual plots for an experiment show a pattern. This indicates that the equality of variance does not hold for the residuals of the given model. The Box-Cox method can be used to automatically identify a suitable power transformation for the data based on the following relationship:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}^{*}}={{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is determined using the given data such that &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is minimized. The values of &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; are not used as is because the &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values calculated for different values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; would not be directly comparable. For example, for &amp;lt;math&amp;gt;\lambda =0\,\!&amp;lt;/math&amp;gt; all response values would become 1. Therefore, the following scaled relationship is used to obtain &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y^{\lambda }=\left\{ &lt;br /&gt;
\begin{array}{cc}&lt;br /&gt;
\frac{y^{\lambda }-1}{\lambda \dot{y}^{\lambda -1}} &amp;amp; \lambda \neq 0 \\ &lt;br /&gt;
\dot{y}\ln y &amp;amp; \lambda =0&lt;br /&gt;
\end{array}&lt;br /&gt;
\right.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\dot{y}=\exp \left[ \frac{1}{n}\sum\limits_{i=1}^{n}\ln \left( y_{i}\right) \right]\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Once all &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; values are obtained for a value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, the corresponding &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for these values is obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;. The process is repeated for a number of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values to obtain a plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then the value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is selected as the required transformation for the given data. The DOE folio plots &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values because the range of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values is large; otherwise, all of the values could not be displayed on the same plot. The range of search for the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value in the software is from &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt;, because larger values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are usually not meaningful. The DOE folio also displays a recommended transformation based on the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value obtained, as per the second table.&lt;br /&gt;
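The scaled transform defined above can be sketched in Python; the geometric-mean factor &amp;lt;math&amp;gt;\dot{y}\,\!&amp;lt;/math&amp;gt; is what keeps &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; comparable across &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values (the response values below are illustrative):

```python
# Sketch of the scaled Box-Cox transform: for lambda != 0,
# (y^lambda - 1) / (lambda * y_dot^(lambda - 1)); for lambda = 0, y_dot*ln(y),
# where y_dot is the geometric mean of the responses.
import math

def boxcox(ys, lam):
    y_dot = math.exp(sum(math.log(y) for y in ys) / len(ys))  # geometric mean
    if lam == 0:
        return [y_dot * math.log(y) for y in ys]
    return [(y ** lam - 1) / (lam * y_dot ** (lam - 1)) for y in ys]

ys = [6.0, 13.0, 7.0, 8.0]  # illustrative positive response values
print([round(v, 3) for v in boxcox(ys, 1.0)])  # lambda=1: [5.0, 12.0, 6.0, 7.0]
```

In practice this transform is evaluated over a grid of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values, with &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; recomputed at each one to locate the minimum.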
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.2.png|center|433px|Recommended Box-Cox power transformations.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Confidence intervals on the selected &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values are also available. Let &amp;lt;math&amp;gt;S{{S}_{E}}(\lambda )\,\!&amp;lt;/math&amp;gt; be the value of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; corresponding to the selected value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then, to calculate the 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence intervals on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, we need to calculate &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}^{*}}=S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The required limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are the two values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the value &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; (on the plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;). If the limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; do not include the value of one, then the transformation is applicable for the given data.&lt;br /&gt;
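The cutoff value can be computed directly from the formula above. A minimal sketch (the function name is illustrative, and the tabulated t critical value is passed in rather than looked up); the numbers in the usage line are from the worked example later on this page:

```python
def ss_star(ss_e_min, dof, t_crit):
    """Cutoff SS* on the SS_E(lambda) curve; the two lambda values at which
    the curve crosses SS* are the confidence limits on lambda."""
    return ss_e_min * (1.0 + t_crit**2 / dof)

# Worked example: SS_E(lambda) = 73.74, dof = 9, t_{0.05,9} = 1.833 (90% interval)
cutoff = ss_star(73.74, 9, 1.833)   # about 101.27
```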
Note that the power transformations are not defined for response values that are negative or zero. The DOE folio deals with negative and zero response values using the following equations (that involve addition of a suitable quantity to all of the response values if a zero or negative response value is encountered). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{rll}&lt;br /&gt;
y\left( i\right) = &amp;amp; y\left( i\right) +\left\vert y_{\min }\right\vert&lt;br /&gt;
\times 1.1 &amp;amp; \text{Negative Response} \\ &lt;br /&gt;
y\left( i\right) = &amp;amp; y\left( i\right) +1 &amp;amp; \text{Zero Response}&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here &amp;lt;math&amp;gt;{{y}_{\min }}\,\!&amp;lt;/math&amp;gt; represents the minimum response value and &amp;lt;math&amp;gt;\left| {{y}_{\min }} \right|\,\!&amp;lt;/math&amp;gt; represents the absolute value of the minimum response. &lt;br /&gt;
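A minimal sketch of this adjustment, assuming NumPy (the function name is illustrative):

```python
import numpy as np

def shift_responses(y):
    """Shift responses so the power transformations are defined:
    add 1.1*|y_min| if any response is negative, add 1 if any is zero.
    (A negative minimum shifts every response strictly above zero,
    so that case also covers any zeros present.)"""
    y = np.asarray(y, dtype=float)
    if (y < 0).any():
        return y + 1.1 * np.abs(y.min())
    if (y == 0).any():
        return y + 1.0
    return y
```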
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
To illustrate the Box-Cox method, consider the experiment given in the first table. Transformed response values for various values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; can be calculated using the equation for &amp;lt;math&amp;gt;{Y}^{\lambda}\,\!&amp;lt;/math&amp;gt; given in [[One_Factor_Designs#Box-Cox_Method|Box-Cox Method]]. Knowing the hat matrix, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values corresponding to each of these &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values can easily be obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values calculated for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values between &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt; for the given data are shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \lambda  &amp;amp; {} &amp;amp; S{{S}_{E}} &amp;amp; \ln S{{S}_{E}}  \\&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   -5 &amp;amp; {} &amp;amp; 5947.8 &amp;amp; 8.6908  \\&lt;br /&gt;
   -4 &amp;amp; {} &amp;amp; 1946.4 &amp;amp; 7.5737  \\&lt;br /&gt;
   -3 &amp;amp; {} &amp;amp; 696.5 &amp;amp; 6.5461  \\&lt;br /&gt;
   -2 &amp;amp; {} &amp;amp; 282.2 &amp;amp; 5.6425  \\&lt;br /&gt;
   -1 &amp;amp; {} &amp;amp; 135.8 &amp;amp; 4.9114  \\&lt;br /&gt;
   0 &amp;amp; {} &amp;amp; 83.9 &amp;amp; 4.4299  \\&lt;br /&gt;
   1 &amp;amp; {} &amp;amp; 74.5 &amp;amp; 4.3108  \\&lt;br /&gt;
   2 &amp;amp; {} &amp;amp; 101.0 &amp;amp; 4.6154  \\&lt;br /&gt;
   3 &amp;amp; {} &amp;amp; 190.4 &amp;amp; 5.2491  \\&lt;br /&gt;
   4 &amp;amp; {} &amp;amp; 429.5 &amp;amp; 6.0627  \\&lt;br /&gt;
   5 &amp;amp; {} &amp;amp; 1057.6 &amp;amp; 6.9638  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A plot of &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for various &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values, as obtained from the DOE folio, is shown in the following figure. The value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; that gives the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is identified as 0.7841. The &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; value corresponding to this value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is 73.74. A 90% confidence interval on this &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value is calculated as follows. &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; can be obtained as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}^{*}}= &amp;amp; S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{t_{0.05,9}^{2}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{{{1.833}^{2}}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 101.27  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\ln S{{S}^{*}}=4.6178\,\!&amp;lt;/math&amp;gt;. The &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values corresponding to this value from the following figure are &amp;lt;math&amp;gt;-0.4689\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0054\,\!&amp;lt;/math&amp;gt;. Therefore, the 90% confidence limits on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-0.4689\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0054\,\!&amp;lt;/math&amp;gt;. Since the confidence limits include the value of 1, this indicates that a transformation is not required for the data in the first table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:doe6_7.png|center|650px|Box-Cox power transformation plot for the data in the first table.|link=]]&amp;lt;/center&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=One_Factor_Designs&amp;diff=65267</id>
		<title>One Factor Designs</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=One_Factor_Designs&amp;diff=65267"/>
		<updated>2017-08-28T19:12:23Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Box-Cox Method */ fixed alignment in equation for zero and negative responses&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:Doebook|5}}&lt;br /&gt;
As explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], the analysis of observational studies involves the use of regression models. The analysis of experimental studies involves the use of analysis of variance (ANOVA) models. For a comparison of the two models see [[One_Factor_Designs#Fitting_ANOVA_Models|Fitting ANOVA Models]]. In single factor experiments, ANOVA models are used to compare the mean response values at different levels of the factor. Each level of the factor is investigated to see if the response is significantly different from the response at other levels of the factor.  The analysis of single factor experiments is often referred to as &#039;&#039;one-way ANOVA&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
To illustrate the use of ANOVA models in the analysis of experiments, consider a single factor experiment where the analyst wants to see if the surface finish of certain parts is affected by the speed of a lathe machine. Data is collected for three speeds (or three treatments). Each treatment is replicated four times. Therefore, this experiment design is balanced. Surface finish values recorded using randomization are shown in the following table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.1.png|center|410px|Surface finish values for three speeds of a lathe machine.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be stated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}={{\mu }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model assumes that the response at each factor level, &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, is the sum of the mean response at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, and a random error term, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. The subscript &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; denotes the factor level while the subscript &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; denotes the replicate. If there are &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of the factor and &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates at each level then &amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j=1,2,...,m\,\!&amp;lt;/math&amp;gt;. The random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are assumed to be normally and independently distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. Therefore, the response at each level can be thought of as a normally distributed population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and constant variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. The equation given above is referred to as the &#039;&#039;means model&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The means model can also be written using &amp;lt;math&amp;gt;{{\mu }_{i}}=\mu +{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the effect due to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Such an ANOVA model is called the &#039;&#039;effects model&#039;&#039;. In the effects model, the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, represent the deviations from the overall mean, &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;. Therefore, the following constraint exists on the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;s: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Fitting ANOVA Models===&lt;br /&gt;
&lt;br /&gt;
To fit ANOVA models and carry out hypothesis testing in single factor experiments, it is convenient to express the effects model in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt; (that was used for multiple linear regression models in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This can be done as shown next. Using the effects model, the ANOVA model for the single factor experiment in the first table can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment effect. There are three treatments in the first table (500, 600 and 700). Therefore, there are three treatment effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{3}}\,\!&amp;lt;/math&amp;gt;. The following constraint exists for these effects:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   } {{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the first treatment, the ANOVA model for the single factor experiment in the above table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{1j}}=\mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+0\cdot {{\tau }_{3}}+{{\epsilon }_{1j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;, the model for the first treatment is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}-0\cdot ({{\tau }_{1}}+{{\tau }_{2}})+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{or   }{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Models for the second and third treatments can be obtained in a similar way. The models for the three treatments are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{First Treatment}: &amp;amp; {{Y}_{1j}}=1\cdot \mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{Second Treatment}: &amp;amp; {{Y}_{2j}}=1\cdot \mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{2j}} \\ &lt;br /&gt;
\text{Third Treatment}: &amp;amp; {{Y}_{3j}}=1\cdot \mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{3j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The coefficients of the treatment effects &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; can be expressed using two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the indicator variables &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, the ANOVA model for the data in the first table now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{x}_{1}}\cdot {{\tau }_{1}}+{{x}_{2}}\cdot {{\tau }_{2}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation can be rewritten by including subscripts &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; (for the level of the factor) and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; (for the replicate number) as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{x}_{i1}}\cdot {{\tau }_{1}}+{{x}_{i2}}\cdot {{\tau }_{2}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation given above represents the &amp;quot;regression version&amp;quot; of the ANOVA model.&lt;br /&gt;
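The indicator-variable coding above can be generated programmatically. A sketch assuming NumPy; the function name and the replicate-major row ordering (level 1 replicate 1, level 2 replicate 1, and so on, matching the expanded listing later on this page) are choices made here:

```python
import numpy as np

def effect_coded_X(n_a, m):
    """Effect-coded design matrix for n_a treatments with m replicates each:
    a column of ones for mu plus n_a - 1 indicator columns, with the last
    treatment coded -1 in every indicator column."""
    rows = []
    for j in range(m):                    # replicate
        for i in range(n_a):              # treatment level
            x = np.zeros(n_a - 1)
            if i < n_a - 1:
                x[i] = 1.0
            else:
                x[:] = -1.0               # last level: all indicators -1
            rows.append(np.concatenate(([1.0], x)))
    return np.array(rows)
```

For the lathe example (3 treatments, 4 replicates) this produces the 12 x 3 matrix used in the next sections.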
&lt;br /&gt;
&lt;br /&gt;
===Treat Numerical Factors as Qualitative or Quantitative?===&lt;br /&gt;
&lt;br /&gt;
It can be seen from the equation given above that in an ANOVA model each factor is treated as a qualitative factor. In the present example the factor, lathe speed, is a quantitative factor with three levels. But the ANOVA model treats this factor as a qualitative factor with three levels. Therefore, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required to represent this factor. &lt;br /&gt;
&lt;br /&gt;
Note that in a regression model a variable can be treated as either a quantitative or a qualitative variable. The factor, lathe speed, would be used as a quantitative factor and represented with a single predictor variable in a regression model. For example, if a first order model were to be fitted to the data in the first table, then the regression model would take the form &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. If a second order regression model were to be fitted, the regression model would be &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}x_{i1}^{2}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. Notice that unlike these regression models, the regression version of the ANOVA model does not make any assumption about the nature of the relationship between the response and the factor being investigated.&lt;br /&gt;
&lt;br /&gt;
The choice of treating a particular factor as a quantitative or qualitative variable depends on the objective of the experimenter. In the case of the data of the first table, the objective of the experimenter is to compare the levels of the factor to see if a change in the levels leads to a significant change in the response. The objective is not to make predictions on the response for a given level of the factor. Therefore, the factor is treated as a qualitative factor in this case. If the objective of the experimenter were prediction or optimization, the experimenter would focus on aspects such as the nature of the relationship between the factor, lathe speed, and the response, surface finish; in that case the factor would be modeled as a quantitative factor to make accurate predictions.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;i&amp;gt;Y&amp;lt;/i&amp;gt; = &amp;lt;i&amp;gt;X&amp;amp;beta;&amp;lt;/i&amp;gt; + &amp;lt;i&amp;gt;&amp;amp;epsilon;&amp;lt;/i&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be expanded for the three treatments and four replicates of the data in the first table as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{11}}= &amp;amp; 6=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{11}}\text{    Level 1, Replicate 1} \\ &lt;br /&gt;
{{Y}_{21}}= &amp;amp; 13=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{21}}\text{  Level 2, Replicate 1} \\ &lt;br /&gt;
{{Y}_{31}}= &amp;amp; 23=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{31}}\text{  Level 3, Replicate 1} \\ &lt;br /&gt;
{{Y}_{12}}= &amp;amp; 13=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{12}}\text{  Level 1, Replicate 2} \\ &lt;br /&gt;
{{Y}_{22}}= &amp;amp; 16=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{22}}\text{  Level 2, Replicate 2} \\ &lt;br /&gt;
{{Y}_{32}}= &amp;amp; 20=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{32}}\text{  Level 3, Replicate 2} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; ... \\ &lt;br /&gt;
{{Y}_{34}}= &amp;amp; 18=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{34}}\text{  Level 3, Replicate 4}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The corresponding matrix notation is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{11}}  \\&lt;br /&gt;
   {{Y}_{21}}  \\&lt;br /&gt;
   {{Y}_{31}}  \\&lt;br /&gt;
   {{Y}_{12}}  \\&lt;br /&gt;
   {{Y}_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{34}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y= &amp;amp; X\beta +\epsilon  \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   23  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   16  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The matrices &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; are used in the calculation of the sum of squares in the next section. The data in the first table can be entered into the DOE folio as shown in the figure below.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_1.png|center|671px|Single factor experiment design for the data in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Hypothesis Test in Single Factor Experiments===&lt;br /&gt;
&lt;br /&gt;
The hypothesis test in single factor experiments examines the ANOVA model to see if the response at any level of the investigated factor is significantly different from that at the other levels. If this is not the case and the response at all levels is not significantly different, then it can be concluded that the investigated factor does not affect the response. The test on the ANOVA model is carried out by checking to see if any of the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, are non-zero. The test is similar to the test of significance of regression mentioned in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] in the context of regression models. The hypotheses statements for this test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0 \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\tau }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test for &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is carried out using the following statistic:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{TR}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the ANOVA model and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. Note that in the case of ANOVA models we use the notation &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment mean square) for the model mean square and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment sum of squares) for the model sum of squares (instead of &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression mean square, and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression sum of squares, used in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This is done to indicate that the model under consideration is the ANOVA model and not the regression model. The calculations to obtain &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; are identical to the calculations to obtain &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt; explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The sum of squares to obtain the statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt; can be calculated as explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. Using the data in the first table, the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.1667 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.1667 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.1667  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 232.1667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the number of levels of the factor, &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; represents the number of replicates at each level, &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; represents the vector of the response values, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; represents the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; represents the matrix of ones. (For details on each of these terms, refer to [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]].)&lt;br /&gt;
Since two effect terms, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are used in the regression version of the ANOVA model, the degrees of freedom associated with the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, is two.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be obtained as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.9167 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.9167 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.9167  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 306.6667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;I\,\!&amp;lt;/math&amp;gt; is the identity matrix. Since there are 12 data points in all, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; is 11. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{T}})=11\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 306.6667-232.1667 \\ &lt;br /&gt;
= &amp;amp; 74.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 11-2 \\ &lt;br /&gt;
= &amp;amp; 9  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
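The three sums of squares above can be sketched generically, assuming NumPy (the function name is illustrative; note that 1/(n_a &#183; m) is simply 1/n for a balanced design with n observations):

```python
import numpy as np

def anova_sums_of_squares(y, X):
    """SS_TR, SS_T and SS_E via the hat matrix H and the matrix of ones J,
    following the regression version of the ANOVA model."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    H = X @ np.linalg.inv(X.T @ X) @ X.T    # hat matrix
    Jn = np.ones((n, n)) / n                # (1/n) J
    ss_tr = float(y @ (H - Jn) @ y)         # treatment sum of squares
    ss_t = float(y @ (np.eye(n) - Jn) @ y)  # total sum of squares
    return ss_tr, ss_t, ss_t - ss_tr        # SS_E = SS_T - SS_TR
```

The corresponding degrees of freedom follow the same subtraction: dof(SS_E) = dof(SS_T) - dof(SS_TR).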
&lt;br /&gt;
&lt;br /&gt;
The test statistic can now be calculated using the equation given in [[One_Factor_Designs#Hypothesis_Test_in_Single_Factor_Experiments|Hypothesis Test in Single Factor Experiments]] as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{f}_{0}}= &amp;amp; \frac{M{{S}_{TR}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{TR}}/dof(S{{S}_{TR}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{232.1667/2}{74.5/9} \\ &lt;br /&gt;
= &amp;amp; 14.0235  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value for the statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 9 degrees of freedom in the denominator is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{f}_{0}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.9983 \\ &lt;br /&gt;
= &amp;amp; 0.0017  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
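As a quick check on the arithmetic above, the test statistic and its &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value can be reproduced in a few lines of Python. This sketch uses only the sums of squares derived above; the closed-form survival function applies because the numerator degrees of freedom equal 2.

```python
# Reproduce the F test for the lathe-speed example.
ss_tr, dof_tr = 232.1667, 2    # treatment sum of squares and its dof
ss_e, dof_e = 74.5, 9          # error sum of squares and its dof

f0 = (ss_tr / dof_tr) / (ss_e / dof_e)   # f0 = MS_TR / MS_E

# For a numerator of 2 degrees of freedom, the F survival function
# has the closed form P(F > x) = (1 + 2x/d2)^(-d2/2).
p_value = (1 + 2 * f0 / dof_e) ** (-dof_e / 2)

print(round(f0, 4), round(p_value, 4))  # 14.0235 0.0017
```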
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that a change in the lathe speed has a significant effect on the surface finish. The Weibull++ DOE folio displays these results in the ANOVA table, as shown in the figure below. The values of S and R-sq are the standard error and the coefficient of determination for the model, respectively. These values are explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] and indicate how well the model fits the data. The values in the figure below indicate that the fit of the ANOVA model is fair.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_2.png|center|748px|ANOVA table for the data in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the &#039;&#039;i&#039;&#039;&amp;lt;sup&amp;gt;th&amp;lt;/sup&amp;gt; Treatment Mean===&lt;br /&gt;
&lt;br /&gt;
The response at each treatment of a single factor experiment can be assumed to be a normal population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt; provided that the error terms can be assumed to be normally distributed. A point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is the average response at each treatment, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;. Since this is a sample average, the associated variance is &amp;lt;math&amp;gt;{{\sigma }^{2}}/{{m}_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;{{m}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of replicates at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment. Therefore, the confidence interval on &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution. Recall from [[Statistical_Background_on_DOE| Statistical Background on DOE]] (inference on population mean when variance is unknown) that: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}/{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{M{{S}_{E}}/{{m}_{i}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
has a &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with degrees of freedom &amp;lt;math&amp;gt;=dof(S{{S}_{E}})\,\!&amp;lt;/math&amp;gt;. Therefore, a 100 (&amp;lt;math&amp;gt;1-\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment mean, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, for the first treatment of the lathe speed we have:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}= &amp;amp; {{{\bar{y}}}_{1\cdot }} \\ &lt;br /&gt;
= &amp;amp; \frac{6+13+7+8}{4} \\ &lt;br /&gt;
= &amp;amp; 8.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the DOE folio, this value is displayed as the Estimated Mean for the first level, as shown in the Data Summary table in the figure below. The value displayed as the standard deviation for this level is simply the sample standard deviation calculated using the observations corresponding to this level.  The 90% confidence interval for this treatment is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{M{{S}_{E}}}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833\sqrt{\frac{(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833(1.44) \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 2.64  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}\,\!&amp;lt;/math&amp;gt; are 5.9 and 11.1, respectively. &lt;br /&gt;
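The confidence interval computed above can be sketched in Python using the observations for the first treatment given in the example and the critical value &amp;lt;math&amp;gt;{{t}_{0.05,9}}=1.833\,\!&amp;lt;/math&amp;gt; quoted in the text:

```python
import math

y1 = [6, 13, 7, 8]            # observations at the first treatment
m = len(y1)
y1_bar = sum(y1) / m          # estimated treatment mean = 8.5

ms_e = 74.5 / 9               # MS_E from the ANOVA calculations
t_crit = 1.833                # t(0.05, 9) from the text

half_width = t_crit * math.sqrt(ms_e / m)
lo, hi = y1_bar - half_width, y1_bar + half_width
print(round(lo, 1), round(hi, 1))  # 5.9 11.1
```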
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_3.png|center|594px|Data Summary table for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the Difference in Two Treatment Means===&lt;br /&gt;
&lt;br /&gt;
The confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is used to compare two levels of the factor at a given significance. If the confidence interval does not include the value of zero, it is concluded that the two levels of the factor are significantly different. The point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt;. The variance for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})= &amp;amp; var({{{\bar{y}}}_{i\cdot }})+var({{{\bar{y}}}_{j\cdot }}) \\ &lt;br /&gt;
= &amp;amp; {{\sigma }^{2}}/{{m}_{i}}+{{\sigma }^{2}}/{{m}_{j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For balanced designs all &amp;lt;math&amp;gt;{{m}_{i}}=m\,\!&amp;lt;/math&amp;gt;. Therefore:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})=2{{\sigma }^{2}}/m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The standard deviation for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; can be obtained by taking the square root of &amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})\,\!&amp;lt;/math&amp;gt; and is referred to as the pooled standard error:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for the difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2{{{\hat{\sigma }}}^{2}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2M{{S}_{E}}/m}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then a 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, an estimate of the difference in the first and second treatment means of the lathe speed, &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}-{{{\hat{\mu }}}_{2}}= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }} \\ &lt;br /&gt;
= &amp;amp; 8.5-13.25 \\ &lt;br /&gt;
= &amp;amp; -4.75  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The pooled standard error for this difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2M{{S}_{E}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 2.0344  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To test &amp;lt;math&amp;gt;{{H}_{0}}:{{\mu }_{1}}-{{\mu }_{2}}=0\,\!&amp;lt;/math&amp;gt;, the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}-({{\mu }_{1}}-{{\mu }_{2}})}{\sqrt{2M{{S}_{E}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75-(0)}{\sqrt{\tfrac{2(74.5/9)}{4}}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75}{2.0344} \\ &lt;br /&gt;
= &amp;amp; -2.3348  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the DOE folio, the value of the statistic is displayed in the Mean Comparisons table under the column T Value as shown in the figure below. The 90% confidence interval on the difference &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833\sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833(2.0344) \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 3.729  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hence the 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-8.479\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-1.021\,\!&amp;lt;/math&amp;gt;, respectively. These values are displayed under the Low CI and High CI columns in the following figure. Since the confidence interval for this pair of means does not include zero, it can be concluded that these means are significantly different at 90% confidence. This conclusion can also be arrived at using the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value, noting that the hypothesis is two-sided. The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to the statistic &amp;lt;math&amp;gt;{{t}_{0}}=-2.3348\,\!&amp;lt;/math&amp;gt;, based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 9 degrees of freedom is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 2\times (1-P(T\le |{{t}_{0}}|)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-P(T\le 2.3348)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-0.9778) \\ &lt;br /&gt;
= &amp;amp; 0.0444  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, the means are significantly different at 90% confidence. Bounds on the difference between other treatment pairs can be obtained in a similar manner and it is concluded that all treatments are significantly different.&lt;br /&gt;
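The comparison of the first two treatments can be sketched in Python from the quantities derived above (the treatment means, &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt;, and the critical value &amp;lt;math&amp;gt;{{t}_{0.05,9}}=1.833\,\!&amp;lt;/math&amp;gt; quoted in the text):

```python
import math

y1_bar, y2_bar = 8.5, 13.25   # treatment means from the example
m = 4                         # replicates per level (balanced design)
ms_e = 74.5 / 9               # MS_E
t_crit = 1.833                # t(0.05, 9)

diff = y1_bar - y2_bar                 # point estimate of mu1 - mu2
se = math.sqrt(2 * ms_e / m)           # pooled standard error
t0 = diff / se                         # test statistic
lo, hi = diff - t_crit * se, diff + t_crit * se
print(round(t0, 4), round(lo, 3), round(hi, 3))  # -2.3348 -8.479 -1.021
```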
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_4.png|center|772px|Mean Comparisons table for the data in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
&lt;br /&gt;
Plots of residuals, &amp;lt;math&amp;gt;{{e}_{ij}}\,\!&amp;lt;/math&amp;gt;, similar to the ones discussed in the previous chapters on regression, are used to ensure that the assumptions associated with the ANOVA model are not violated.  The ANOVA model assumes that the random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are normally and independently distributed with the same variance for each treatment. The normality assumption can be checked by obtaining a normal probability plot of the residuals. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equality of variance is checked by plotting residuals against the treatments and the treatment averages, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;  (also referred to as fitted values), and inspecting the spread in the residuals. If a pattern is seen in these plots, then this indicates the need to use a suitable transformation on the response that will ensure variance equality. Box-Cox transformations are discussed in the next section. To check for independence of the random error terms, residuals are plotted against time or run-order to ensure that a pattern does not exist in these plots. Residual plots for the given example are shown in the following two figures. The plots show that the assumptions associated with the ANOVA model are not violated.   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_5.png|center|650px|Normal probability plot of residuals for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_6.png|center|650px|Plot of residuals against fitted values for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Box-Cox Method===&lt;br /&gt;
&lt;br /&gt;
Transformations on the response may be used when residual plots for an experiment show a pattern. This indicates that the equality of variance does not hold for the residuals of the given model. The Box-Cox method can be used to automatically identify a suitable power transformation for the data based on the relationship:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}^{*}}={{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is determined using the given data such that &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is minimized. The values of &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; are not used as is because of issues related to calculation or comparison of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values for different values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. For example, for &amp;lt;math&amp;gt;\lambda =0\,\!&amp;lt;/math&amp;gt; all response values will become 1. Therefore, the following relationship is used to obtain &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y^{\lambda }=\left\{ &lt;br /&gt;
\begin{array}{cc}&lt;br /&gt;
\frac{y^{\lambda }-1}{\lambda \dot{y}^{\lambda -1}} &amp;amp; \lambda \neq 0 \\ &lt;br /&gt;
\dot{y}\ln y &amp;amp; \lambda =0&lt;br /&gt;
\end{array}&lt;br /&gt;
\right.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\dot{y}=\exp \left[ \frac{1}{n}\sum\limits_{i=1}^{n}\ln \left( y_{i}\right) \right]\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Once all &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; values are obtained for a value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, the corresponding &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for these values is obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;. The process is repeated for a number of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values to obtain a plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then the value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is selected as the required transformation for the given data. The DOE folio plots &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values because the range of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values is large and, otherwise, not all of the values can be displayed on the same plot. The range of search for the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value in the software is from &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt;, because larger values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are usually not meaningful. The DOE folio also displays a recommended transformation based on the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value obtained as per the second table.&lt;br /&gt;
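The scaled power transform defined above can be sketched as a short Python function. The geometric mean &amp;lt;math&amp;gt;\dot{y}\,\!&amp;lt;/math&amp;gt; is what makes &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values comparable across different values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;; the illustrative data used here are the first-treatment observations from the example.

```python
import math

def boxcox_transform(y, lam):
    """Scaled Box-Cox transform: (y^lam - 1)/(lam * y_dot^(lam-1)),
    or y_dot * ln(y) when lam == 0, with y_dot the geometric mean."""
    y_dot = math.exp(sum(math.log(v) for v in y) / len(y))  # geometric mean
    if lam == 0:
        return [y_dot * math.log(v) for v in y]
    return [(v ** lam - 1) / (lam * y_dot ** (lam - 1)) for v in y]

# At lambda = 1 the transform reduces to y - 1, a simple shift, which is
# why SS_E at lambda = 1 equals the untransformed SS_E (74.5 in the table).
print(boxcox_transform([6, 13, 7, 8], 1))  # [5.0, 12.0, 6.0, 7.0]
```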
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.2.png|center|433px|Recommended Box-Cox power transformations.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Confidence intervals on the selected &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values are also available. Let &amp;lt;math&amp;gt;S{{S}_{E}}(\lambda )\,\!&amp;lt;/math&amp;gt; be the value of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; corresponding to the selected value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then, to calculate the 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence intervals on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, we need to calculate &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}^{*}}=S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The required limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are the two values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the value &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; (on the plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;). If the limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; do not include the value of one, then the transformation is applicable for the given data.&lt;br /&gt;
Note that the power transformations are not defined for response values that are negative or zero. The DOE folio deals with negative and zero response values using the following equations (that involve addition of a suitable quantity to all of the response values if a zero or negative response value is encountered). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{rll}&lt;br /&gt;
y\left( i\right) = &amp;amp; y\left( i\right) +\left\vert y_{\min }\right\vert&lt;br /&gt;
\times 1.1 &amp;amp; \text{Negative Response} \\ &lt;br /&gt;
y\left( i\right) = &amp;amp; y\left( i\right) +1 &amp;amp; \text{Zero Response}&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here &amp;lt;math&amp;gt;{{y}_{\min }}\,\!&amp;lt;/math&amp;gt; represents the minimum response value and &amp;lt;math&amp;gt;\left| {{y}_{\min }} \right|\,\!&amp;lt;/math&amp;gt; represents the absolute value of the minimum response. &lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
To illustrate the Box-Cox method, consider the experiment given in the first table. Transformed response values for various values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; can be calculated using the equation for &amp;lt;math&amp;gt;{Y}^{\lambda}\,\!&amp;lt;/math&amp;gt; given in [[One_Factor_Designs#Box-Cox_Method|Box-Cox Method]]. Knowing the hat matrix, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values corresponding to each of these &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values can easily be obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values calculated for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values between &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt; for the given data are shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \lambda  &amp;amp; {} &amp;amp; S{{S}_{E}} &amp;amp; \ln S{{S}_{E}}  \\&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   -5 &amp;amp; {} &amp;amp; 5947.8 &amp;amp; 8.6908  \\&lt;br /&gt;
   -4 &amp;amp; {} &amp;amp; 1946.4 &amp;amp; 7.5737  \\&lt;br /&gt;
   -3 &amp;amp; {} &amp;amp; 696.5 &amp;amp; 6.5461  \\&lt;br /&gt;
   -2 &amp;amp; {} &amp;amp; 282.2 &amp;amp; 5.6425  \\&lt;br /&gt;
   -1 &amp;amp; {} &amp;amp; 135.8 &amp;amp; 4.9114  \\&lt;br /&gt;
   0 &amp;amp; {} &amp;amp; 83.9 &amp;amp; 4.4299  \\&lt;br /&gt;
   1 &amp;amp; {} &amp;amp; 74.5 &amp;amp; 4.3108  \\&lt;br /&gt;
   2 &amp;amp; {} &amp;amp; 101.0 &amp;amp; 4.6154  \\&lt;br /&gt;
   3 &amp;amp; {} &amp;amp; 190.4 &amp;amp; 5.2491  \\&lt;br /&gt;
   4 &amp;amp; {} &amp;amp; 429.5 &amp;amp; 6.0627  \\&lt;br /&gt;
   5 &amp;amp; {} &amp;amp; 1057.6 &amp;amp; 6.9638  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A plot of &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for various &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values, as obtained from the DOE folio, is shown in the following figure. The value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; that gives the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is identified as 0.7841. The &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; value corresponding to this value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is 73.74. A 90% confidence interval on this &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value is calculated as follows. &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; can be obtained as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}^{*}}= &amp;amp; S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{t_{0.05,9}^{2}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{{{1.833}^{2}}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 101.27  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\ln S{{S}^{*}}=4.6178\,\!&amp;lt;/math&amp;gt;. The &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values corresponding to this value, read from the following figure, are &amp;lt;math&amp;gt;-0.4689\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0054\,\!&amp;lt;/math&amp;gt;. These are the 90% confidence limits on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Since the confidence limits include the value of 1, this indicates that a transformation is not required for the data in the first table.&lt;br /&gt;
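The &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; calculation above can be verified with a few lines of Python, using the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; of 73.74 and the critical value &amp;lt;math&amp;gt;{{t}_{0.05,9}}=1.833\,\!&amp;lt;/math&amp;gt; quoted in the text:

```python
import math

ss_lambda = 73.74   # minimum SS_E, attained at lambda = 0.7841
dof_e = 9           # dof(SS_E)
t_crit = 1.833      # t(0.05, 9)

# SS* = SS_E(lambda) * (1 + t^2 / dof(SS_E))
ss_star = ss_lambda * (1 + t_crit ** 2 / dof_e)
print(round(ss_star, 2), round(math.log(ss_star), 4))  # 101.27 4.6178
```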
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:doe6_7.png|center|650px|Box-Cox power transformation plot for the data in the first table.|link=]]&amp;lt;/center&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=One_Factor_Designs&amp;diff=65266</id>
		<title>One Factor Designs</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=One_Factor_Designs&amp;diff=65266"/>
		<updated>2017-08-28T19:08:56Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Box-Cox Method */  fixed equation for geometric mean&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:Doebook|5}}&lt;br /&gt;
As explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], the analysis of observational studies involves the use of regression models. The analysis of experimental studies involves the use of analysis of variance (ANOVA) models. For a comparison of the two models see [[One_Factor_Designs#Fitting_ANOVA_Models|Fitting ANOVA Models]]. In single factor experiments, ANOVA models are used to compare the mean response values at different levels of the factor. Each level of the factor is investigated to see if the response is significantly different from the response at other levels of the factor.  The analysis of single factor experiments is often referred to as &#039;&#039;one-way ANOVA&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
To illustrate the use of ANOVA models in the analysis of experiments, consider a single factor experiment where the analyst wants to see if the surface finish of certain parts is affected by the speed of a lathe machine. Data is collected for three speeds (or three treatments). Each treatment is replicated four times. Therefore, this experiment design is balanced. Surface finish values recorded using randomization are shown in the following table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.1.png|center|410px|Surface finish values for three speeds of a lathe machine.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be stated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}={{\mu }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model assumes that the response at each factor level, &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, is the sum of the mean response at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, and a random error term, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. The subscript &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; denotes the factor level while the subscript &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; denotes the replicate. If there are &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of the factor and &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates at each level then &amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j=1,2,...,m\,\!&amp;lt;/math&amp;gt;. The random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are assumed to be normally and independently distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. Therefore, the response at each level can be thought of as a normally distributed population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and constant variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. The equation given above is referred to as the &#039;&#039;means model&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The means model can also be written using &amp;lt;math&amp;gt;{{\mu }_{i}}=\mu +{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the effect due to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Such an ANOVA model is called the &#039;&#039;effects model&#039;&#039;.  In the effects model, the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, represent the deviations from the overall mean, &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;. Therefore, the following constraint exists on the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;s: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Fitting ANOVA Models===&lt;br /&gt;
&lt;br /&gt;
To fit ANOVA models and carry out hypothesis testing in single factor experiments, it is convenient to express the effects model in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt; (that was used for multiple linear regression models in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This can be done as shown next. Using the effects model, the ANOVA model for the single factor experiment in the first table can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment effect. There are three treatments in the first table (500, 600 and 700). Therefore, there are three treatment effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{3}}\,\!&amp;lt;/math&amp;gt;. The following constraint exists for these effects:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   } {{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the first treatment, the ANOVA model for the single factor experiment in the above table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{1j}}=\mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+0\cdot {{\tau }_{3}}+{{\epsilon }_{1j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;, the model for the first treatment is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}-0\cdot ({{\tau }_{1}}+{{\tau }_{2}})+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{or   }{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Models for the second and third treatments can be obtained in a similar way. The models for the three treatments are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{First Treatment}: &amp;amp; {{Y}_{1j}}=1\cdot \mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{Second Treatment}: &amp;amp; {{Y}_{2j}}=1\cdot \mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{2j}} \\ &lt;br /&gt;
\text{Third Treatment}: &amp;amp; {{Y}_{3j}}=1\cdot \mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{3j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The coefficients of the treatment effects &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; can be expressed using two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the indicator variables &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, the ANOVA model for the data in the first table now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{x}_{1}}\cdot {{\tau }_{1}}+{{x}_{2}}\cdot {{\tau }_{2}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation can be rewritten by including subscripts &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; (for the level of the factor) and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; (for the replicate number) as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{x}_{i1}}\cdot {{\tau }_{1}}+{{x}_{i2}}\cdot {{\tau }_{2}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation given above represents the &amp;quot;regression version&amp;quot; of the ANOVA model.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Treat Numerical Factors as Qualitative or Quantitative?===&lt;br /&gt;
&lt;br /&gt;
It can be seen from the equation given above that in an ANOVA model each factor is treated as a qualitative factor. In the present example the factor, lathe speed, is a quantitative factor with three levels. But the ANOVA model treats this factor as a qualitative factor with three levels. Therefore, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required to represent this factor. &lt;br /&gt;
&lt;br /&gt;
Note that in a regression model a variable can either be treated as a quantitative or a qualitative variable. The factor, lathe speed, would be used as a quantitative factor and represented with a single predictor variable in a regression model. For example, if a first order model were to be fitted to the data in the first table, then the regression model would take the form &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. If a second order regression model were to be fitted, the regression model would be &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}x_{i1}^{2}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. Notice that unlike these regression models, the regression version of the ANOVA model does not make any assumption about the nature of relationship between the response and the factor being investigated.&lt;br /&gt;
&lt;br /&gt;
The choice of treating a particular factor as a quantitative or qualitative variable depends on the objective of the experimenter. In the case of the data of the first table, the objective of the experimenter is to compare the levels of the factor to see if a change in the levels leads to a significant change in the response.  The objective is not to make predictions on the response for a given level of the factor. Therefore, the factor is treated as a qualitative factor in this case. If the objective of the experimenter were prediction or optimization, the experimenter would focus on the nature of the relationship between the factor, lathe speed, and the response, surface finish; in that case the factor would be modeled as a quantitative factor so that accurate predictions could be made.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;i&amp;gt;Y&amp;lt;/i&amp;gt; = &amp;lt;i&amp;gt;X&amp;amp;beta;&amp;lt;/i&amp;gt; + &amp;lt;i&amp;gt;&amp;amp;epsilon;&amp;lt;/i&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be expanded for the three treatments and four replicates of the data in the first table as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{11}}= &amp;amp; 6=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{11}}\text{    Level 1, Replicate 1} \\ &lt;br /&gt;
{{Y}_{21}}= &amp;amp; 13=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{21}}\text{  Level 2, Replicate 1} \\ &lt;br /&gt;
{{Y}_{31}}= &amp;amp; 23=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{31}}\text{  Level 3, Replicate 1} \\ &lt;br /&gt;
{{Y}_{12}}= &amp;amp; 13=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{12}}\text{  Level 1, Replicate 2} \\ &lt;br /&gt;
{{Y}_{22}}= &amp;amp; 16=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{22}}\text{  Level 2, Replicate 2} \\ &lt;br /&gt;
{{Y}_{32}}= &amp;amp; 20=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{32}}\text{  Level 3, Replicate 2} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; ... \\ &lt;br /&gt;
{{Y}_{34}}= &amp;amp; 18=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{34}}\text{  Level 3, Replicate 4}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The corresponding matrix notation is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{11}}  \\&lt;br /&gt;
   {{Y}_{21}}  \\&lt;br /&gt;
   {{Y}_{31}}  \\&lt;br /&gt;
   {{Y}_{12}}  \\&lt;br /&gt;
   {{Y}_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{34}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y= &amp;amp; X\beta +\epsilon  \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   23  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   16  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The matrices &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; are used in the calculation of the sum of squares in the next section. The data in the first table can be entered into the DOE folio as shown in the figure below.  &lt;br /&gt;
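The structure of the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix shown above can also be generated programmatically. The following sketch (using NumPy, which is an implementation choice and not part of this chapter) builds the effect-coded design matrix for a balanced single factor design, using the same run order as the vector &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; above (levels 1 through 3 within each replicate):&lt;br /&gt;

```python
import numpy as np

def effect_coded_X(n_levels, n_reps):
    """Effect-coded design matrix for a balanced one-factor design.

    Rows follow the order used above: level 1..n_levels within each
    replicate. The last level is coded -1 in every indicator column,
    which enforces the constraint that the treatment effects sum to zero.
    """
    rows = []
    for _ in range(n_reps):                 # replicate j
        for i in range(n_levels):           # level i
            x = [1.0]                       # intercept column (for mu)
            for k in range(n_levels - 1):   # indicator columns tau_1..tau_{a-1}
                if i == n_levels - 1:
                    x.append(-1.0)          # last level: -1 in every column
                else:
                    x.append(1.0 if i == k else 0.0)
            rows.append(x)
    return np.array(rows)

X = effect_coded_X(n_levels=3, n_reps=4)
print(X[:3])   # first replicate: rows for levels 1, 2, 3
```

The first three rows match the first three rows of the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix above, and each indicator column sums to zero across a replicate.&lt;br /&gt;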
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_1.png|center|671px|Single factor experiment design for the data in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Hypothesis Test in Single Factor Experiments===&lt;br /&gt;
&lt;br /&gt;
The hypothesis test in single factor experiments examines the ANOVA model to see if the response at any level of the investigated factor is significantly different from that at the other levels. If this is not the case and the response at all levels is not significantly different, then it can be concluded that the investigated factor does not affect the response. The test on the ANOVA model is carried out by checking to see if any of the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, are non-zero. The test is similar to the test of significance of regression mentioned in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] in the context of regression models. The hypotheses statements for this test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0 \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\tau }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test for &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is carried out using the following statistic:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{TR}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the ANOVA model and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. Note that in the case of ANOVA models we use the notation &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment mean square) for the model mean square and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment sum of squares) for the model sum of squares (instead of &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression mean square, and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression sum of squares, used in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This is done to indicate that the model under consideration is the ANOVA model and not the regression model. The calculations to obtain &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; are identical to the calculations to obtain &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt; explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The sum of squares to obtain the statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt; can be calculated as explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. Using the data in the first table, the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.1667 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.1667 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.1667  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 232.1667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the number of levels of the factor, &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; represents the replicates at each level, &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; represents the vector of the response values, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; represents the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; represents the matrix of ones. (For details on each of these terms, refer to [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]].)&lt;br /&gt;
Since two effect terms, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are used in the regression version of the ANOVA model, the number of degrees of freedom associated with the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, is two.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
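The sum of squares calculations above can be sketched numerically. Note that the first table is not reproduced in this excerpt: only some response values appear in the equations above, and the remaining entries in the vector below are filled in so that the quoted treatment means (8.5, 13.25) and sums of squares (232.1667, 306.6667, 74.5) are reproduced, so treat the full vector as an assumption. NumPy is an implementation choice, not part of the original text.&lt;br /&gt;

```python
import numpy as np

# Response values in the run order used above (levels 1, 2, 3 within each
# replicate). Entries not visible in this excerpt are assumed, chosen to
# reproduce the quoted statistics (SS_TR = 232.1667, SS_T = 306.6667).
y = np.array([6, 13, 23, 13, 16, 20, 7, 14, 16, 8, 10, 18], dtype=float)

# Effect-coded design matrix: intercept, tau_1, tau_2 (last level coded -1, -1)
X = np.tile([[1, 1, 0], [1, 0, 1], [1, -1, -1]], (4, 1)).astype(float)

n = len(y)
H = X @ np.linalg.inv(X.T @ X) @ X.T     # hat matrix
J = np.ones((n, n))                      # matrix of ones

SS_TR = y @ (H - J / n) @ y              # treatment (model) sum of squares
SS_T = y @ (np.eye(n) - J / n) @ y       # total sum of squares
SS_E = SS_T - SS_TR                      # error sum of squares

print(round(SS_TR, 4), round(SS_T, 4), round(SS_E, 4))
```

The hat matrix projection reproduces the treatment means as fitted values, so &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; agrees with the value 232.1667 obtained above.&lt;br /&gt;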
&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be obtained as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.9167 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.9167 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.9167  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 306.6667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;I\,\!&amp;lt;/math&amp;gt; is the identity matrix. Since there are 12 data points in all, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; is 11. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{T}})=11\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 306.6667-232.1667 \\ &lt;br /&gt;
= &amp;amp; 74.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 11-2 \\ &lt;br /&gt;
= &amp;amp; 9  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistic can now be calculated using the equation given in [[One_Factor_Designs#Hypothesis_Test_in_Single_Factor_Experiments|Hypothesis Test in Single Factor Experiments]] as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{f}_{0}}= &amp;amp; \frac{M{{S}_{TR}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{TR}}/dof(S{{S}_{TR}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{232.1667/2}{74.5/9} \\ &lt;br /&gt;
= &amp;amp; 14.0235  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value for the statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 9 degrees of freedom in the denominator is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{f}_{0}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.9983 \\ &lt;br /&gt;
= &amp;amp; 0.0017  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
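The statistic and its &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value can be reproduced with a few lines of code; the following is a minimal sketch using the quoted sums of squares and SciPy (an implementation choice, not part of the original text):&lt;br /&gt;

```python
from scipy.stats import f

# Sums of squares and degrees of freedom from the calculations above
SS_TR, dof_TR = 232.1667, 2      # treatment sum of squares and its dof
SS_E, dof_E = 74.5, 9            # error sum of squares and its dof

MS_TR = SS_TR / dof_TR           # treatment mean square
MS_E = SS_E / dof_E              # error mean square
f0 = MS_TR / MS_E                # test statistic

p_value = f.sf(f0, dof_TR, dof_E)   # 1 - P(F <= f0)
print(round(f0, 4), round(p_value, 4))
```

Since the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value is below the 0.1 significance level, the same conclusion is reached as in the text.&lt;br /&gt;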
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that change in the lathe speed has a significant effect on the surface finish. The Weibull++ DOE folio displays these results in the ANOVA table, as shown in the figure below. The values of S and R-sq are the standard error and the coefficient of determination for the model, respectively. These values are explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] and indicate how well the model fits the data. The values in the figure below indicate that the fit of the ANOVA model is fair.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_2.png|center|748px|ANOVA table for the data in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the &#039;&#039;i&#039;&#039;&amp;lt;sup&amp;gt;th&amp;lt;/sup&amp;gt; Treatment Mean===&lt;br /&gt;
&lt;br /&gt;
The response at each treatment of a single factor experiment can be assumed to be a normal population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt; provided that the error terms can be assumed to be normally distributed. A point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is the average response at each treatment, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;. Since this is a sample average, the associated variance is &amp;lt;math&amp;gt;{{\sigma }^{2}}/{{m}_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;{{m}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of replicates at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment. Therefore, the confidence interval on &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution. Recall from [[Statistical_Background_on_DOE| Statistical Background on DOE]] (inference on population mean when variance is unknown) that: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}/{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{M{{S}_{E}}/{{m}_{i}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
has a &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with degrees of freedom &amp;lt;math&amp;gt;=dof(S{{S}_{E}})\,\!&amp;lt;/math&amp;gt;. Therefore, a 100 (&amp;lt;math&amp;gt;1-\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment mean, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, for the first treatment of the lathe speed we have:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}= &amp;amp; {{{\bar{y}}}_{1\cdot }} \\ &lt;br /&gt;
= &amp;amp; \frac{6+13+7+8}{4} \\ &lt;br /&gt;
= &amp;amp; 8.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the DOE folio, this value is displayed as the Estimated Mean for the first level, as shown in the Data Summary table in the figure below. The value displayed as the standard deviation for this level is simply the sample standard deviation calculated using the observations corresponding to this level.  The 90% confidence interval for this treatment is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{M{{S}_{E}}}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833\sqrt{\frac{(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833(1.44) \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 2.64  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}\,\!&amp;lt;/math&amp;gt; are 5.9 and 11.1, respectively. &lt;br /&gt;
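The confidence interval on the first treatment mean can be sketched in code, using the observations quoted above for the first level and the error mean square computed earlier (SciPy is an implementation choice, not part of the original text):&lt;br /&gt;

```python
from scipy.stats import t

y1 = [6, 13, 7, 8]               # observations at the first level (500)
m = len(y1)
ybar = sum(y1) / m               # estimated treatment mean = 8.5

MS_E, dof_E = 74.5 / 9, 9        # error mean square and its dof
# 90% two-sided interval: t_{0.05,9} * sqrt(MS_E / m)
half_width = t.ppf(0.95, dof_E) * (MS_E / m) ** 0.5

low, high = ybar - half_width, ybar + half_width
print(round(low, 1), round(high, 1))
```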
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_3.png|center|594px|Data Summary table for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the Difference in Two Treatment Means===&lt;br /&gt;
&lt;br /&gt;
The confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is used to compare two levels of the factor at a given significance. If the confidence interval does not include the value of zero, it is concluded that the two levels of the factor are significantly different. The point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt;. The variance for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})= &amp;amp; var({{{\bar{y}}}_{i\cdot }})+var({{{\bar{y}}}_{j\cdot }}) \\ &lt;br /&gt;
= &amp;amp; {{\sigma }^{2}}/{{m}_{i}}+{{\sigma }^{2}}/{{m}_{j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For balanced designs all &amp;lt;math&amp;gt;{{m}_{i}}=m\,\!&amp;lt;/math&amp;gt;. Therefore:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})=2{{\sigma }^{2}}/m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The standard deviation for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; can be obtained by taking the square root of &amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})\,\!&amp;lt;/math&amp;gt; and is referred to as the pooled standard error:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for the difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2{{{\hat{\sigma }}}^{2}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2M{{S}_{E}}/m}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then a 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, an estimate of the difference in the first and second treatment means of the lathe speed, &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}-{{{\hat{\mu }}}_{2}}= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }} \\ &lt;br /&gt;
= &amp;amp; 8.5-13.25 \\ &lt;br /&gt;
= &amp;amp; -4.75  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The pooled standard error for this difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2M{{S}_{E}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 2.0344  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To test &amp;lt;math&amp;gt;{{H}_{0}}:{{\mu }_{1}}-{{\mu }_{2}}=0\,\!&amp;lt;/math&amp;gt;, the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}-({{\mu }_{1}}-{{\mu }_{2}})}{\sqrt{2M{{S}_{E}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75-(0)}{\sqrt{\tfrac{2(74.5/9)}{4}}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75}{2.0344} \\ &lt;br /&gt;
= &amp;amp; -2.3348  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the DOE folio, the value of the statistic is displayed in the Mean Comparisons table under the column T Value as shown in the figure below. The 90% confidence interval on the difference &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833\sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833(2.0344) \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 3.729  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hence the 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-8.479\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-1.021\,\!&amp;lt;/math&amp;gt;, respectively. These values are displayed under the Low CI and High CI columns in the following figure. Since the confidence interval for this pair of means does not include zero, it can be concluded that these means are significantly different at 90% confidence. This conclusion can also be reached using the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value, noting that the hypothesis is two-sided. The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to the statistic &amp;lt;math&amp;gt;{{t}_{0}}=-2.3348\,\!&amp;lt;/math&amp;gt;, based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 9 degrees of freedom, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 2\times (1-P(T\le |{{t}_{0}}|)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-P(T\le 2.3348)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-0.9778) \\ &lt;br /&gt;
= &amp;amp; 0.0444  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, the means are significantly different at 90% confidence. Bounds on the difference between other treatment pairs can be obtained in a similar manner and it is concluded that all treatments are significantly different.&lt;br /&gt;
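The comparison of the first two treatment means can be sketched in code, using the treatment means and error mean square computed above (SciPy is an implementation choice, not part of the original text):&lt;br /&gt;

```python
from scipy.stats import t

MS_E, dof_E = 74.5 / 9, 9               # error mean square and its dof
m = 4                                   # replicates per level (balanced design)

diff = 8.5 - 13.25                      # ybar_1. - ybar_2. = -4.75
se = (2 * MS_E / m) ** 0.5              # pooled standard error
t0 = diff / se                          # t statistic

p_value = 2 * t.sf(abs(t0), dof_E)      # two-sided p value
half_width = t.ppf(0.95, dof_E) * se    # 90% two-sided interval half-width
low, high = diff - half_width, diff + half_width
print(round(t0, 4), round(p_value, 4), round(low, 3), round(high, 3))
```

Repeating this for the other treatment pairs reproduces the remaining rows of the Mean Comparisons table.&lt;br /&gt;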
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_4.png|center|772px|Mean Comparisons table for the data in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
&lt;br /&gt;
Plots of residuals, &amp;lt;math&amp;gt;{{e}_{ij}}\,\!&amp;lt;/math&amp;gt;, similar to the ones discussed in the previous chapters on regression, are used to ensure that the assumptions associated with the ANOVA model are not violated.  The ANOVA model assumes that the random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are normally and independently distributed with the same variance for each treatment. The normality assumption can be checked by obtaining a normal probability plot of the residuals. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equality of variance is checked by plotting residuals against the treatments and the treatment averages, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;  (also referred to as fitted values), and inspecting the spread in the residuals. If a pattern is seen in these plots, it indicates the need for a suitable transformation on the response that will ensure variance equality. Box-Cox transformations are discussed in the next section. To check for independence of the random error terms, residuals are plotted against time or run order to ensure that no pattern exists in these plots. Residual plots for the given example are shown in the following two figures. The plots show that the assumptions associated with the ANOVA model are not violated.   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_5.png|center|650px|Normal probability plot of residuals for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_6.png|center|650px|Plot of residuals against fitted values for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Box-Cox Method===&lt;br /&gt;
&lt;br /&gt;
Transformations on the response may be used when residual plots for an experiment show a pattern. This indicates that the equality of variance does not hold for the residuals of the given model. The Box-Cox method can be used to automatically identify a suitable power transformation for the data based on the relationship:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}^{*}}={{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is determined from the given data such that &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is minimized. The values of &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; are not used directly because &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values cannot be meaningfully calculated or compared across different values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. For example, for &amp;lt;math&amp;gt;\lambda =0\,\!&amp;lt;/math&amp;gt; all response values would become 1. Therefore, the following relationship is used to obtain &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y^{\lambda }=\left\{ &lt;br /&gt;
\begin{array}{cc}&lt;br /&gt;
\frac{y^{\lambda }-1}{\lambda \dot{y}^{\lambda -1}} &amp;amp; \lambda \neq 0 \\ &lt;br /&gt;
\dot{y}\ln y &amp;amp; \lambda =0&lt;br /&gt;
\end{array}&lt;br /&gt;
\right.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\dot{y}=\exp \left[ \frac{1}{n}\sum\limits_{i=1}^{n}\ln \left( y_{i}\right) \right]\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Once all &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; values are obtained for a value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, the corresponding &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for these values is obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;. The process is repeated for a number of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values to obtain a plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. The value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is then selected as the required transformation for the given data. The DOE folio plots &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values because the range of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values is so large that they cannot otherwise all be displayed on the same plot. The range of search for the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value in the software is from &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt;, because larger values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are usually not meaningful. The DOE folio also displays a recommended transformation based on the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value obtained, as per the second table.&lt;br /&gt;
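The search described above can be sketched in a few lines of Python. This is an illustrative sketch with hypothetical response data and a simple cell-means hat matrix, not ReliaSoft's implementation:

```python
import numpy as np

def boxcox_scaled(y, lam):
    """Scaled power transform; the scaling keeps SS_E comparable across lambda."""
    gdot = np.exp(np.mean(np.log(y)))          # geometric mean, y-dot
    if lam == 0:
        return gdot * np.log(y)
    return (y**lam - 1.0) / (lam * gdot**(lam - 1.0))

def sse(y, H, lam):
    """Error sum of squares y'[I - H]y for the transformed response."""
    z = boxcox_scaled(y, lam)
    return z @ (np.eye(len(y)) - H) @ z

# Hypothetical balanced layout: 3 levels observed in run order, 4 replicates.
y = np.array([6., 13., 23., 13., 16., 20., 7., 15., 18., 12., 14., 18.])
levels = np.tile([0, 1, 2], 4)
X = np.eye(3)[levels]                          # cell-means design matrix
H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat matrix

lams = np.linspace(-5, 5, 201)
sse_curve = np.array([sse(y, H, l) for l in lams])
best_lam = lams[int(np.argmin(sse_curve))]     # lambda with minimum SS_E
```

At lambda = 1 the scaled transform reduces to a shift of the raw response, so the scan reproduces the untransformed error sum of squares there.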
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.2.png|center|433px|Recommended Box-Cox power transformations.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Confidence intervals on the selected &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values are also available. Let &amp;lt;math&amp;gt;S{{S}_{E}}(\lambda )\,\!&amp;lt;/math&amp;gt; be the value of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; corresponding to the selected value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then, to calculate the 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence intervals on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, we need to calculate &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}^{*}}=S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The required limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are the two values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the value &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; (on the plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;). If the limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; do not include the value of one, then the transformation is applicable for the given data.&lt;br /&gt;
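Reading the limits off the scan can be done numerically; a minimal sketch, assuming a hypothetical SS_E curve and using scipy's t quantile:

```python
import numpy as np
from scipy.stats import t

def lambda_limits(lams, sse_curve, dof, alpha=0.10):
    """Confidence limits on lambda: the lambdas where SS_E crosses SS*."""
    sse_curve = np.asarray(sse_curve)
    t_crit = t.ppf(1 - alpha / 2, dof)
    ss_star = sse_curve.min() * (1 + t_crit**2 / dof)   # SS* threshold
    inside = np.where(sse_curve <= ss_star)[0]          # lambdas below the SS* line
    return lams[inside[0]], lams[inside[-1]]

# Hypothetical SS_E curve with its minimum near lambda = 0.8
lams = np.linspace(-5, 5, 201)
sse_curve = 70 + 30 * (lams - 0.8)**2
lo, hi = lambda_limits(lams, sse_curve, dof=9)
```

If the interval (lo, hi) contains 1, no transformation is needed.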
Note that the power transformations are not defined for response values that are negative or zero. The DOE folio deals with negative and zero response values using the following equations, which add a suitable quantity to all of the response values if a zero or negative response value is encountered. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 y(i)&amp;amp; =  y(i)+\left| {{y}_{\min }} \right|\times 1.1 &amp;amp; \text{Negative Response} \\ &lt;br /&gt;
 y(i)&amp;amp; =  y(i)+1                                      &amp;amp; \text{Zero Response}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here &amp;lt;math&amp;gt;{{y}_{\min }}\,\!&amp;lt;/math&amp;gt; represents the minimum response value and &amp;lt;math&amp;gt;\left| {{y}_{\min }} \right|\,\!&amp;lt;/math&amp;gt; represents the absolute value of the minimum response. &lt;br /&gt;
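A direct transcription of these two adjustment rules (a hypothetical helper, not ReliaSoft's code):

```python
import numpy as np

def shift_responses(y):
    """Shift the responses so a power transform is defined for all of them."""
    y = np.asarray(y, dtype=float)
    y_min = y.min()
    if y_min < 0:
        return y + abs(y_min) * 1.1    # negative response rule
    if y_min == 0:
        return y + 1.0                 # zero response rule
    return y                           # all positive: leave unchanged
```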
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
To illustrate the Box-Cox method, consider the experiment given in the first table. Transformed response values for various values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; can be calculated using the equation for &amp;lt;math&amp;gt;{Y}^{\lambda}\,\!&amp;lt;/math&amp;gt; given in [[One_Factor_Designs#Box-Cox_Method|Box-Cox Method]]. Knowing the hat matrix, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values corresponding to each of these &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values can easily be obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values calculated for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values between &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt; for the given data are shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \lambda  &amp;amp; {} &amp;amp; S{{S}_{E}} &amp;amp; \ln S{{S}_{E}}  \\&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   -5 &amp;amp; {} &amp;amp; 5947.8 &amp;amp; 8.6908  \\&lt;br /&gt;
   -4 &amp;amp; {} &amp;amp; 1946.4 &amp;amp; 7.5737  \\&lt;br /&gt;
   -3 &amp;amp; {} &amp;amp; 696.5 &amp;amp; 6.5461  \\&lt;br /&gt;
   -2 &amp;amp; {} &amp;amp; 282.2 &amp;amp; 5.6425  \\&lt;br /&gt;
   -1 &amp;amp; {} &amp;amp; 135.8 &amp;amp; 4.9114  \\&lt;br /&gt;
   0 &amp;amp; {} &amp;amp; 83.9 &amp;amp; 4.4299  \\&lt;br /&gt;
   1 &amp;amp; {} &amp;amp; 74.5 &amp;amp; 4.3108  \\&lt;br /&gt;
   2 &amp;amp; {} &amp;amp; 101.0 &amp;amp; 4.6154  \\&lt;br /&gt;
   3 &amp;amp; {} &amp;amp; 190.4 &amp;amp; 5.2491  \\&lt;br /&gt;
   4 &amp;amp; {} &amp;amp; 429.5 &amp;amp; 6.0627  \\&lt;br /&gt;
   5 &amp;amp; {} &amp;amp; 1057.6 &amp;amp; 6.9638  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A plot of &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for various &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values, as obtained from the DOE folio, is shown in the following figure. The value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; that gives the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is identified as 0.7841. The &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; value corresponding to this value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is 73.74. A 90% confidence interval on this &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value is calculated as follows. &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; can be obtained as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}^{*}}= &amp;amp; S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{t_{0.05,9}^{2}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{{{1.833}^{2}}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 101.27  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\ln S{{S}^{*}}=4.6178\,\!&amp;lt;/math&amp;gt;. The &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values corresponding to this value, read from the following figure, are &amp;lt;math&amp;gt;-0.4689\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0054\,\!&amp;lt;/math&amp;gt;. These are the 90% confidence limits on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Since these limits include the value of 1, a transformation is not required for the data in the first table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:doe6_7.png|center|650px|Box-Cox power transformation plot for the data in the first table.|link=]]&amp;lt;/center&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=One_Factor_Designs&amp;diff=65265</id>
		<title>One Factor Designs</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=One_Factor_Designs&amp;diff=65265"/>
		<updated>2017-08-28T19:03:53Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Box-Cox Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:Doebook|5}}&lt;br /&gt;
As explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], the analysis of observational studies involves the use of regression models. The analysis of experimental studies involves the use of analysis of variance (ANOVA) models. For a comparison of the two models see [[One_Factor_Designs#Fitting_ANOVA_Models|Fitting ANOVA Models]]. In single factor experiments, ANOVA models are used to compare the mean response values at different levels of the factor. Each level of the factor is investigated to see if the response is significantly different from the response at other levels of the factor.  The analysis of single factor experiments is often referred to as &#039;&#039;one-way ANOVA&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
To illustrate the use of ANOVA models in the analysis of experiments, consider a single factor experiment where the analyst wants to see if the surface finish of certain parts is affected by the speed of a lathe machine. Data is collected for three speeds (or three treatments). Each treatment is replicated four times. Therefore, this experiment design is balanced. Surface finish values recorded using randomization are shown in the following table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.1.png|center|410px|Surface finish values for three speeds of a lathe machine.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be stated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}={{\mu }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model assumes that the response at each factor level, &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, is the sum of the mean response at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, and a random error term, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. The subscript &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; denotes the factor level while the subscript &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; denotes the replicate. If there are &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of the factor and &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates at each level then &amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j=1,2,...,m\,\!&amp;lt;/math&amp;gt;. The random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are assumed to be normally and independently distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. Therefore, the response at each level can be thought of as a normally distributed population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and constant variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. The equation given above is referred to as the &#039;&#039;means model&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The means model can also be written using &amp;lt;math&amp;gt;{{\mu }_{i}}=\mu +{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the effect due to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Such an ANOVA model is called the &#039;&#039;effects model&#039;&#039;.  In the effects model, the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, represent the deviations from the overall mean, &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;. Therefore, the following constraint exists on the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;s: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Fitting ANOVA Models===&lt;br /&gt;
&lt;br /&gt;
To fit ANOVA models and carry out hypothesis testing in single factor experiments, it is convenient to express the effects model in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt; (the form used for multiple linear regression models in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This can be done as shown next. Using the effects model, the ANOVA model for the single factor experiment in the first table can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment effect. There are three treatments in the first table (500, 600 and 700). Therefore, there are three treatment effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{3}}\,\!&amp;lt;/math&amp;gt;. The following constraint exists for these effects:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   } {{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the first treatment, the ANOVA model for the single factor experiment in the above table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{1j}}=\mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+0\cdot {{\tau }_{3}}+{{\epsilon }_{1j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;, the model for the first treatment is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}-0\cdot ({{\tau }_{1}}+{{\tau }_{2}})+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{or   }{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Models for the second and third treatments can be obtained in a similar way. The models for the three treatments are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{First Treatment}: &amp;amp; {{Y}_{1j}}=1\cdot \mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{Second Treatment}: &amp;amp; {{Y}_{2j}}=1\cdot \mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{2j}} \\ &lt;br /&gt;
\text{Third Treatment}: &amp;amp; {{Y}_{3j}}=1\cdot \mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{3j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The coefficients of the treatment effects &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; can be expressed using two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the indicator variables &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, the ANOVA model for the data in the first table now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{x}_{1}}\cdot {{\tau }_{1}}+{{x}_{2}}\cdot {{\tau }_{2}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation can be rewritten by including subscripts &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; (for the level of the factor) and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; (for the replicate number) as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{x}_{i1}}\cdot {{\tau }_{1}}+{{x}_{i2}}\cdot {{\tau }_{2}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation given above represents the &amp;quot;regression version&amp;quot; of the ANOVA model.&lt;br /&gt;
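The indicator-variable coding above can be sketched as follows (hypothetical helper names; effects coding as given in the table):

```python
import numpy as np

# Effects (sum-to-zero) coding for the three treatments:
#   level 1 -> (x1, x2) = ( 1,  0)
#   level 2 -> (x1, x2) = ( 0,  1)
#   level 3 -> (x1, x2) = (-1, -1)
CODES = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}

def design_row(level):
    """Row of X for Y = mu + x1*tau1 + x2*tau2 + eps."""
    x1, x2 = CODES[level]
    return [1, x1, x2]              # leading 1 multiplies mu

# Runs in the order level 1, 2, 3 repeated (two replicate blocks shown)
X = np.array([design_row(l) for l in [1, 2, 3, 1, 2, 3]])
```

Note that over a full replicate block the indicator columns sum to zero, which is the matrix counterpart of the constraint on the treatment effects.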
&lt;br /&gt;
&lt;br /&gt;
===Treat Numerical Factors as Qualitative or Quantitative?===&lt;br /&gt;
&lt;br /&gt;
It can be seen from the equation given above that in an ANOVA model each factor is treated as a qualitative factor. In the present example the factor, lathe speed, is a quantitative factor with three levels. But the ANOVA model treats this factor as a qualitative factor with three levels. Therefore, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required to represent this factor. &lt;br /&gt;
&lt;br /&gt;
Note that in a regression model a variable can be treated as either a quantitative or a qualitative variable. The factor, lathe speed, would be used as a quantitative factor and represented with a single predictor variable in a regression model. For example, if a first order model were to be fitted to the data in the first table, then the regression model would take the form &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. If a second order regression model were to be fitted, the regression model would be &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}x_{i1}^{2}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. Notice that unlike these regression models, the regression version of the ANOVA model does not make any assumption about the nature of the relationship between the response and the factor being investigated.&lt;br /&gt;
&lt;br /&gt;
The choice of treating a particular factor as a quantitative or qualitative variable depends on the objective of the experimenter. In the case of the data of the first table, the objective of the experimenter is to compare the levels of the factor to see if a change in the levels leads to a significant change in the response.  The objective is not to make predictions on the response for a given level of the factor. Therefore, the factor is treated as a qualitative factor in this case. If the objective of the experimenter were prediction or optimization, the experimenter would instead focus on the nature of the relationship between the factor (lathe speed) and the response (surface finish); in that case, the factor would be modeled as a quantitative factor to make accurate predictions.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;i&amp;gt;Y&amp;lt;/i&amp;gt; = &amp;lt;i&amp;gt;X&amp;amp;beta;&amp;lt;/i&amp;gt; + &amp;lt;i&amp;gt;&amp;amp;epsilon;&amp;lt;/i&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be expanded for the three treatments and four replicates of the data in the first table as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{11}}= &amp;amp; 6=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{11}}\text{    Level 1, Replicate 1} \\ &lt;br /&gt;
{{Y}_{21}}= &amp;amp; 13=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{21}}\text{  Level 2, Replicate 1} \\ &lt;br /&gt;
{{Y}_{31}}= &amp;amp; 23=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{31}}\text{  Level 3, Replicate 1} \\ &lt;br /&gt;
{{Y}_{12}}= &amp;amp; 13=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{12}}\text{  Level 1, Replicate 2} \\ &lt;br /&gt;
{{Y}_{22}}= &amp;amp; 16=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{22}}\text{  Level 2, Replicate 2} \\ &lt;br /&gt;
{{Y}_{32}}= &amp;amp; 20=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{32}}\text{  Level 3, Replicate 2} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; ... \\ &lt;br /&gt;
{{Y}_{34}}= &amp;amp; 18=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{34}}\text{  Level 3, Replicate 4}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The corresponding matrix notation is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{11}}  \\&lt;br /&gt;
   {{Y}_{21}}  \\&lt;br /&gt;
   {{Y}_{31}}  \\&lt;br /&gt;
   {{Y}_{12}}  \\&lt;br /&gt;
   {{Y}_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{34}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y= &amp;amp; X\beta +\epsilon  \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   23  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   16  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The matrices &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; are used in the calculation of the sum of squares in the next section. The data in the first table can be entered into the DOE folio as shown in the figure below.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_1.png|center|671px|Single factor experiment design for the data in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Hypothesis Test in Single Factor Experiments===&lt;br /&gt;
&lt;br /&gt;
The hypothesis test in single factor experiments examines the ANOVA model to see if the response at any level of the investigated factor is significantly different from that at the other levels. If the response is not significantly different at any level, then it can be concluded that the investigated factor does not affect the response. The test on the ANOVA model is carried out by checking to see if any of the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, are non-zero. The test is similar to the test of significance of regression mentioned in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] in the context of regression models. The hypotheses statements for this test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0 \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\tau }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test for &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is carried out using the following statistic:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{TR}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the ANOVA model and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. Note that in the case of ANOVA models we use the notation &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment mean square) for the model mean square and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment sum of squares) for the model sum of squares (instead of &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression mean square, and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression sum of squares, used in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This is done to indicate that the model under consideration is the ANOVA model and not the regression model. The calculations to obtain &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; are identical to the calculations to obtain &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt; explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. &lt;br /&gt;
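As a minimal sketch of this computation (hypothetical data, not the surface finish values of the first table):

```python
import numpy as np
from scipy.stats import f

# Hypothetical balanced data: rows = treatments (na = 3), cols = replicates (m = 4).
data = np.array([[ 6., 13.,  7., 12.],
                 [13., 16., 15., 14.],
                 [23., 20., 18., 18.]])
na, m = data.shape
grand_mean = data.mean()

ss_tr = m * ((data.mean(axis=1) - grand_mean)**2).sum()        # treatment sum of squares
ss_e = ((data - data.mean(axis=1, keepdims=True))**2).sum()    # error sum of squares
ms_tr = ss_tr / (na - 1)                                       # treatment mean square
ms_e = ss_e / (na * (m - 1))                                   # error mean square

F0 = ms_tr / ms_e
p_value = f.sf(F0, na - 1, na * (m - 1))   # upper-tail area of the F distribution
```

A large F0 (small p value) leads to rejection of the null hypothesis that all treatment effects are zero.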
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The sum of squares to obtain the statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt; can be calculated as explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. Using the data in the first table, the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.1667 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.1667 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.1667  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 232.1667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the number of levels of the factor, &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; represents the replicates at each level, &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; represents the vector of the response values, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; represents the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; represents the matrix of ones. (For details on each of these terms, refer to [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]].)&lt;br /&gt;
Since two effect terms, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are used in the regression version of the ANOVA model, the number of degrees of freedom associated with the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, is two.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be obtained as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.9167 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.9167 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.9167  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 306.6667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;I\,\!&amp;lt;/math&amp;gt; is the identity matrix. Since there are 12 data points in all, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; is 11. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{T}})=11\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 306.6667-232.1667 \\ &lt;br /&gt;
= &amp;amp; 74.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 11-2 \\ &lt;br /&gt;
= &amp;amp; 9  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistic can now be calculated using the equation given in [[One_Factor_Designs#Hypothesis_Test_in_Single_Factor_Experiments|Hypothesis Test in Single Factor Experiments]] as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{f}_{0}}= &amp;amp; \frac{M{{S}_{TR}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{TR}}/dof(S{{S}_{TR}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{232.1667/2}{74.5/9} \\ &lt;br /&gt;
= &amp;amp; 14.0235  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value for the statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 9 degrees of freedom in the denominator is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{f}_{0}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.9983 \\ &lt;br /&gt;
= &amp;amp; 0.0017  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that a change in the lathe speed has a significant effect on the surface finish. The Weibull++ DOE folio displays these results in the ANOVA table, as shown in the figure below. The values of S and R-sq are the standard error and the coefficient of determination for the model, respectively. These values are explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] and indicate how well the model fits the data. The values in the figure below indicate that the fit of the ANOVA model is fair.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_2.png|center|748px|ANOVA table for the data in the first table.|link=]]&lt;br /&gt;
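The hat-matrix calculations above can be sketched numerically. This is a minimal Python/NumPy illustration, not part of the original article; the response vector below is illustrative stand-in data (the article's actual surface finish values appear in the first table), so the printed numbers will differ from the text.

```python
# Sketch of the hat-matrix ANOVA calculations described above.
# The response vector is illustrative, NOT the article's actual data.
import numpy as np
from scipy import stats

na, m = 3, 4                            # factor levels and replicates (balanced)
y = np.array([6.0, 13.0, 7.0, 8.0,      # treatment 1 (illustrative)
              14.0, 12.0, 15.0, 12.0,   # treatment 2 (illustrative)
              19.0, 17.0, 21.0, 20.0])  # treatment 3 (illustrative)

# Design matrix with one indicator column per treatment
X = np.kron(np.eye(na), np.ones((m, 1)))
H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
n = na * m
J = np.ones((n, n))                     # matrix of ones

ss_tr = y @ (H - J / n) @ y             # treatment sum of squares
ss_t  = y @ (np.eye(n) - J / n) @ y     # total sum of squares
ss_e  = ss_t - ss_tr                    # error sum of squares

dof_tr, dof_e = na - 1, n - na
f0 = (ss_tr / dof_tr) / (ss_e / dof_e)
p_value = 1 - stats.f.cdf(f0, dof_tr, dof_e)
print(f0, p_value)
```

The same statistic can be cross-checked against `scipy.stats.f_oneway`, which performs the one-way ANOVA directly from the per-treatment samples.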
&lt;br /&gt;
===Confidence Interval on the &#039;&#039;i&#039;&#039;&amp;lt;sup&amp;gt;th&amp;lt;/sup&amp;gt; Treatment Mean===&lt;br /&gt;
&lt;br /&gt;
The response at each treatment of a single factor experiment can be assumed to be a normal population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt; provided that the error terms can be assumed to be normally distributed. A point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is the average response at each treatment, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;. Since this is a sample average, the associated variance is &amp;lt;math&amp;gt;{{\sigma }^{2}}/{{m}_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;{{m}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of replicates at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment. Therefore, the confidence interval on &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution. Recall from [[Statistical_Background_on_DOE| Statistical Background on DOE]] (inference on population mean when variance is unknown) that: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}/{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{M{{S}_{E}}/{{m}_{i}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
has a &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with degrees of freedom &amp;lt;math&amp;gt;=dof(S{{S}_{E}})\,\!&amp;lt;/math&amp;gt;. Therefore, a 100 (&amp;lt;math&amp;gt;1-\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment mean, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, for the first treatment of the lathe speed we have:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}= &amp;amp; {{{\bar{y}}}_{1\cdot }} \\ &lt;br /&gt;
= &amp;amp; \frac{6+13+7+8}{4} \\ &lt;br /&gt;
= &amp;amp; 8.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the DOE folio, this value is displayed as the Estimated Mean for the first level, as shown in the Data Summary table in the figure below. The value displayed as the standard deviation for this level is simply the sample standard deviation calculated using the observations corresponding to this level.  The 90% confidence interval for this treatment is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{M{{S}_{E}}}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833\sqrt{\frac{(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833(1.44) \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 2.64  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}\,\!&amp;lt;/math&amp;gt; are 5.9 and 11.1. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_3.png|center|594px|Data Summary table for the single factor experiment in the first table.|link=]]&lt;br /&gt;
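The interval just computed can be verified numerically using the quantities derived in the text (this Python snippet is a check added for illustration, not part of the original article):

```python
# Numerical check of the 90% confidence interval on the first
# treatment mean, using the values derived in the text.
from scipy import stats

y1_bar = 8.5            # average response at the first treatment
ss_e, dof_e = 74.5, 9   # error sum of squares and its degrees of freedom
m = 4                   # replicates per treatment
alpha = 0.10

mse = ss_e / dof_e
half_width = stats.t.ppf(1 - alpha / 2, dof_e) * (mse / m) ** 0.5
print(y1_bar - half_width, y1_bar + half_width)  # roughly 5.9 and 11.1
```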
&lt;br /&gt;
===Confidence Interval on the Difference in Two Treatment Means===&lt;br /&gt;
&lt;br /&gt;
The confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is used to compare two levels of the factor at a given significance. If the confidence interval does not include the value of zero, it is concluded that the two levels of the factor are significantly different. The point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt;. The variance for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})= &amp;amp; var({{{\bar{y}}}_{i\cdot }})+var({{{\bar{y}}}_{j\cdot }}) \\ &lt;br /&gt;
= &amp;amp; {{\sigma }^{2}}/{{m}_{i}}+{{\sigma }^{2}}/{{m}_{j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For balanced designs all &amp;lt;math&amp;gt;{{m}_{i}}=m\,\!&amp;lt;/math&amp;gt;. Therefore:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})=2{{\sigma }^{2}}/m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The standard deviation for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; can be obtained by taking the square root of &amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})\,\!&amp;lt;/math&amp;gt; and is referred to as the pooled standard error:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for the difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2{{{\hat{\sigma }}}^{2}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2M{{S}_{E}}/m}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then a 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, an estimate of the difference in the first and second treatment means of the lathe speed, &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}-{{{\hat{\mu }}}_{2}}= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }} \\ &lt;br /&gt;
= &amp;amp; 8.5-13.25 \\ &lt;br /&gt;
= &amp;amp; -4.75  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The pooled standard error for this difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2M{{S}_{E}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 2.0344  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To test &amp;lt;math&amp;gt;{{H}_{0}}:{{\mu }_{1}}-{{\mu }_{2}}=0\,\!&amp;lt;/math&amp;gt;, the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}-({{\mu }_{1}}-{{\mu }_{2}})}{\sqrt{2M{{S}_{E}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75-(0)}{\sqrt{\tfrac{2(74.5/9)}{4}}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75}{2.0344} \\ &lt;br /&gt;
= &amp;amp; -2.3348  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the DOE folio, the value of the statistic is displayed in the Mean Comparisons table under the column T Value as shown in the figure below. The 90% confidence interval on the difference &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833\sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833(2.0344) \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 3.729  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hence the 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-8.479\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-1.021\,\!&amp;lt;/math&amp;gt;, respectively. These values are displayed under the Low CI and High CI columns in the following figure. Since the confidence interval for this pair of means does not include zero, it can be concluded that these means are significantly different at 90% confidence. This conclusion can also be arrived at using the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value, noting that the hypothesis is two-sided. The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to the statistic &amp;lt;math&amp;gt;{{t}_{0}}=-2.3348\,\!&amp;lt;/math&amp;gt;, based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 9 degrees of freedom, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 2\times (1-P(T\le |{{t}_{0}}|)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-P(T\le 2.3348)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-0.9778) \\ &lt;br /&gt;
= &amp;amp; 0.0444  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, the means are significantly different at 90% confidence. Bounds on the difference between other treatment pairs can be obtained in a similar manner and it is concluded that all treatments are significantly different.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_4.png|center|772px|Mean Comparisons table for the data in the first table.|link=]]&lt;br /&gt;
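The two-treatment comparison worked out above (pooled standard error, t statistic, confidence limits and p value) can be reproduced numerically; this Python check is added for illustration and uses only the quantities stated in the text:

```python
# Sketch of the comparison of the first two treatment means:
# pooled standard error, t statistic, confidence limits and p value.
from scipy import stats

diff = 8.5 - 13.25            # estimated difference in treatment means
ss_e, dof_e, m = 74.5, 9, 4   # error SS, its dof, replicates per treatment
alpha = 0.10

mse = ss_e / dof_e
pooled_se = (2 * mse / m) ** 0.5                       # about 2.0344
t0 = diff / pooled_se                                  # about -2.3348
half_width = stats.t.ppf(1 - alpha / 2, dof_e) * pooled_se
p_value = 2 * (1 - stats.t.cdf(abs(t0), dof_e))
print(diff - half_width, diff + half_width, p_value)
```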
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
&lt;br /&gt;
Plots of residuals, &amp;lt;math&amp;gt;{{e}_{ij}}\,\!&amp;lt;/math&amp;gt;, similar to the ones discussed in the previous chapters on regression, are used to ensure that the assumptions associated with the ANOVA model are not violated.  The ANOVA model assumes that the random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are normally and independently distributed with the same variance for each treatment. The normality assumption can be checked by obtaining a normal probability plot of the residuals. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equality of variance is checked by plotting residuals against the treatments and the treatment averages, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;  (also referred to as fitted values), and inspecting the spread in the residuals. If a pattern is seen in these plots, then this indicates the need to use a suitable transformation on the response that will ensure variance equality. Box-Cox transformations are discussed in the next section. To check for independence of the random error terms, residuals are plotted against time or run-order to ensure that a pattern does not exist in these plots. Residual plots for the given example are shown in the following two figures. The plots show that the assumptions associated with the ANOVA model are not violated.   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_5.png|center|650px|Normal probability plot of residuals for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_6.png|center|650px|Plot of residuals against fitted values for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Box-Cox Method===&lt;br /&gt;
&lt;br /&gt;
Transformations on the response may be used when residual plots for an experiment show a pattern. This indicates that the equality of variance does not hold for the residuals of the given model. The Box-Cox method can be used to automatically identify a suitable power transformation for the data based on the relationship:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}^{*}}={{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is determined using the given data such that &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is minimized. The values of &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; are not used as is because of issues related to calculation or comparison of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values for different values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. For example, for &amp;lt;math&amp;gt;\lambda =0\,\!&amp;lt;/math&amp;gt; all response values will become 1. Therefore, the following relationship is used to obtain &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y^{\lambda }=\left\{ &lt;br /&gt;
\begin{array}{cc}&lt;br /&gt;
\frac{y^{\lambda }-1}{\lambda \dot{y}^{\lambda -1}} &amp;amp; \lambda \neq 0 \\ &lt;br /&gt;
\dot{y}\ln y &amp;amp; \lambda =0&lt;br /&gt;
\end{array}&lt;br /&gt;
\right.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\dot{y}={{\ln }^{-1}}\left[ (1/n)\underset{i=1}{\overset{n}{\mathop \sum }}\,\ln {{y}_{i}} \right]\,\!&amp;lt;/math&amp;gt; is the geometric mean of the response values.&lt;br /&gt;
Once all &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; values are obtained for a value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, the corresponding &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for these values is obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;. The process is repeated for a number of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values to obtain a plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then the value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is selected as the required transformation for the given data. The DOE folio plots &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values because the range of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values is large; otherwise, all of the values could not be displayed on the same plot. The range of search for the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value in the software is from &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt;, because larger values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are usually not meaningful. The DOE folio also displays a recommended transformation based on the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value obtained as per the second table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.2.png|center|433px|Recommended Box-Cox power transformations.|link=]]&lt;br /&gt;
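The scan over &amp;lambda; described above can be sketched in a few lines of Python. This is an illustrative implementation added for clarity, not the DOE folio's algorithm; the data vector is a stand-in, not the article's actual surface finish table.

```python
# Illustrative Box-Cox scan: transform the responses for a grid of
# lambda values and pick the lambda that minimizes SS_E.
import numpy as np

def boxcox_transform(y, lam):
    """Scaled power transform so SS_E values are comparable across lambda."""
    y_dot = np.exp(np.mean(np.log(y)))        # geometric mean of y
    if np.isclose(lam, 0.0):
        return y_dot * np.log(y)              # limiting case, lambda = 0
    return (y ** lam - 1.0) / (lam * y_dot ** (lam - 1.0))

def ss_e(y, X):
    """Error sum of squares y'[I - H]y for design matrix X."""
    H = X @ np.linalg.solve(X.T @ X, X.T)
    return y @ (np.eye(len(y)) - H) @ y

na, m = 3, 4
X = np.kron(np.eye(na), np.ones((m, 1)))      # means-model design matrix
y = np.array([6.0, 13.0, 7.0, 8.0, 14.0, 12.0,
              15.0, 12.0, 19.0, 17.0, 21.0, 20.0])  # illustrative data

lambdas = np.linspace(-5.0, 5.0, 401)
ss = np.array([ss_e(boxcox_transform(y, lam), X) for lam in lambdas])
best = lambdas[np.argmin(ss)]                 # lambda minimizing SS_E
print(best)
```

Note that at &amp;lambda; = 1 the transform reduces to a simple shift of the data (y &amp;minus; 1), which is why confidence limits containing 1 indicate that no transformation is needed.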
&lt;br /&gt;
&lt;br /&gt;
Confidence intervals on the selected &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values are also available. Let &amp;lt;math&amp;gt;S{{S}_{E}}(\lambda )\,\!&amp;lt;/math&amp;gt; be the value of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; corresponding to the selected value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then, to calculate the 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence intervals on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, we need to calculate &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}^{*}}=S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The required limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are the two values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the value &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; (on the plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;). If the limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; do not include the value of one, then the transformation is applicable for the given data.&lt;br /&gt;
Note that the power transformations are not defined for response values that are negative or zero. The DOE folio deals with negative and zero response values using the following equations, which add a suitable quantity to all of the response values when a zero or negative response value is encountered. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 y(i)&amp;amp; =  y(i)+\left| {{y}_{\min }} \right|\times 1.1 &amp;amp; \text{Negative Response} \\ &lt;br /&gt;
 y(i)&amp;amp; =  y(i)+1                                      &amp;amp; \text{Zero Response}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here &amp;lt;math&amp;gt;{{y}_{\min }}\,\!&amp;lt;/math&amp;gt; represents the minimum response value and &amp;lt;math&amp;gt;\left| {{y}_{\min }} \right|\,\!&amp;lt;/math&amp;gt; represents the absolute value of the minimum response. &lt;br /&gt;
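The shift described by the two equations above can be sketched as follows (an illustrative helper added here, not the DOE folio's code; the function name is hypothetical):

```python
# Sketch of the shift applied before a power transform when the
# response contains zero or negative values, per the equations above.
import numpy as np

def shift_for_boxcox(y):
    """Shift responses so all values are strictly positive."""
    y = np.asarray(y, dtype=float)
    y_min = y.min()
    if y_min == 0.0:
        return y + 1.0                  # zero response present
    if np.signbit(y_min):
        return y + abs(y_min) * 1.1     # negative response present
    return y                            # already strictly positive
```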
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
To illustrate the Box-Cox method, consider the experiment given in the first table. Transformed response values for various values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; can be calculated using the equation for &amp;lt;math&amp;gt;{Y}^{\lambda}\,\!&amp;lt;/math&amp;gt; given in [[One_Factor_Designs#Box-Cox_Method|Box-Cox Method]]. Knowing the hat matrix, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values corresponding to each of these &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values can easily be obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values calculated for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values between &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt; for the given data are shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \lambda  &amp;amp; {} &amp;amp; S{{S}_{E}} &amp;amp; \ln S{{S}_{E}}  \\&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   -5 &amp;amp; {} &amp;amp; 5947.8 &amp;amp; 8.6908  \\&lt;br /&gt;
   -4 &amp;amp; {} &amp;amp; 1946.4 &amp;amp; 7.5737  \\&lt;br /&gt;
   -3 &amp;amp; {} &amp;amp; 696.5 &amp;amp; 6.5461  \\&lt;br /&gt;
   -2 &amp;amp; {} &amp;amp; 282.2 &amp;amp; 5.6425  \\&lt;br /&gt;
   -1 &amp;amp; {} &amp;amp; 135.8 &amp;amp; 4.9114  \\&lt;br /&gt;
   0 &amp;amp; {} &amp;amp; 83.9 &amp;amp; 4.4299  \\&lt;br /&gt;
   1 &amp;amp; {} &amp;amp; 74.5 &amp;amp; 4.3108  \\&lt;br /&gt;
   2 &amp;amp; {} &amp;amp; 101.0 &amp;amp; 4.6154  \\&lt;br /&gt;
   3 &amp;amp; {} &amp;amp; 190.4 &amp;amp; 5.2491  \\&lt;br /&gt;
   4 &amp;amp; {} &amp;amp; 429.5 &amp;amp; 6.0627  \\&lt;br /&gt;
   5 &amp;amp; {} &amp;amp; 1057.6 &amp;amp; 6.9638  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A plot of &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for various &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values, as obtained from the DOE folio, is shown in the following figure. The value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; that gives the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is identified as 0.7841. The &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; value corresponding to this value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is 73.74. A 90% confidence interval on this &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value is calculated as follows. &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; can be obtained as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}^{*}}= &amp;amp; S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{t_{0.05,9}^{2}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{{{1.833}^{2}}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 101.27  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\ln S{{S}^{*}}=4.6178\,\!&amp;lt;/math&amp;gt;. The &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values corresponding to this value in the following figure are &amp;lt;math&amp;gt;-0.4689\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0054\,\!&amp;lt;/math&amp;gt;; these are the 90% confidence limits on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Since the confidence limits include the value of 1, this indicates that a transformation is not required for the data in the first table.&lt;br /&gt;
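The cutoff &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; used for this interval can be checked numerically (a verification snippet added for illustration, using the SS_E value quoted in the text):

```python
# Check of the SS* cutoff for the 90% confidence interval on lambda,
# using SS_E at the best lambda as quoted in the text.
from scipy import stats

ss_e_at_best, dof_e = 73.74, 9   # SS_E at lambda = 0.7841, per the text
alpha = 0.10

t_crit = stats.t.ppf(1 - alpha / 2, dof_e)
ss_star = ss_e_at_best * (1 + t_crit ** 2 / dof_e)
print(ss_star)  # about 101.27
```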
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:doe6_7.png|center|650px|Box-Cox power transformation plot for the data in the first table.|link=]]&amp;lt;/center&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=One_Factor_Designs&amp;diff=65264</id>
		<title>One Factor Designs</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=One_Factor_Designs&amp;diff=65264"/>
		<updated>2017-08-28T19:03:27Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Box-Cox Method */  Fixed formatting on Y^lambda equation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:Doebook|5}}&lt;br /&gt;
As explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], the analysis of observational studies involves the use of regression models. The analysis of experimental studies involves the use of analysis of variance (ANOVA) models. For a comparison of the two models see [[One_Factor_Designs#Fitting_ANOVA_Models|Fitting ANOVA Models]]. In single factor experiments, ANOVA models are used to compare the mean response values at different levels of the factor. Each level of the factor is investigated to see if the response is significantly different from the response at other levels of the factor.  The analysis of single factor experiments is often referred to as &#039;&#039;one-way ANOVA&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
To illustrate the use of ANOVA models in the analysis of experiments, consider a single factor experiment where the analyst wants to see if the surface finish of certain parts is affected by the speed of a lathe machine. Data is collected for three speeds (or three treatments). Each treatment is replicated four times. Therefore, this experiment design is balanced. Surface finish values recorded using randomization are shown in the following table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.1.png|center|410px|Surface finish values for three speeds of a lathe machine.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be stated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}={{\mu }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model assumes that the response at each factor level, &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, is the sum of the mean response at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, and a random error term, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. The subscript &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; denotes the factor level while the subscript &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; denotes the replicate. If there are &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of the factor and &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates at each level then &amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j=1,2,...,m\,\!&amp;lt;/math&amp;gt;. The random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are assumed to be normally and independently distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. Therefore, the response at each level can be thought of as a normally distributed population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and constant variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. The equation given above is referred to as the &#039;&#039;means model&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The means model can also be written using &amp;lt;math&amp;gt;{{\mu }_{i}}=\mu +{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the effect due to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Such an ANOVA model is called the &#039;&#039;effects model&#039;&#039;. In the effects model, the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, represent the deviations from the overall mean, &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;. Therefore, the following constraint exists on the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;s: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Fitting ANOVA Models===&lt;br /&gt;
&lt;br /&gt;
To fit ANOVA models and carry out hypothesis testing in single factor experiments, it is convenient to express the effects model in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt; (the form that was used for multiple linear regression models in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This can be done as shown next. Using the effects model, the ANOVA model for the single factor experiment in the first table can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment effect. There are three treatments in the first table (500, 600 and 700). Therefore, there are three treatment effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{3}}\,\!&amp;lt;/math&amp;gt;. The following constraint exists for these effects:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   } {{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the first treatment, the ANOVA model for the single factor experiment in the above table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{1j}}=\mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+0\cdot {{\tau }_{3}}+{{\epsilon }_{1j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;, the model for the first treatment is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}-0\cdot ({{\tau }_{1}}+{{\tau }_{2}})+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{or   }{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Models for the second and third treatments can be obtained in a similar way. The models for the three treatments are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{First Treatment}: &amp;amp; {{Y}_{1j}}=1\cdot \mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{Second Treatment}: &amp;amp; {{Y}_{2j}}=1\cdot \mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{2j}} \\ &lt;br /&gt;
\text{Third Treatment}: &amp;amp; {{Y}_{3j}}=1\cdot \mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{3j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The coefficients of the treatment effects &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; can be expressed using two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the indicator variables &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, the ANOVA model for the data in the first table now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{x}_{1}}\cdot {{\tau }_{1}}+{{x}_{2}}\cdot {{\tau }_{2}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation can be rewritten by including subscripts &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; (for the level of the factor) and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; (for the replicate number) as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{x}_{i1}}\cdot {{\tau }_{1}}+{{x}_{i2}}\cdot {{\tau }_{2}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation given above represents the &amp;quot;regression version&amp;quot; of the ANOVA model.&lt;br /&gt;
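The effect coding used in the regression version of the ANOVA model can be sketched in a few lines of Python. This is illustrative helper code only (the function and dictionary names are hypothetical, not part of any DOE software); the level values 500, 600 and 700 are the lathe speeds, and the example values chosen for mu and the taus are arbitrary.

```python
# Effect ("sum-to-zero") coding for a three-level factor, as used in the
# regression version of the ANOVA model: Y = mu + x1*tau1 + x2*tau2 + eps.
effect_code = {
    500: (1, 0),    # treatment effect tau_1
    600: (0, 1),    # treatment effect tau_2
    700: (-1, -1),  # treatment effect tau_3 = -(tau_1 + tau_2)
}

def mean_response(mu, tau1, tau2, level):
    """Mean response at a factor level under the effects model."""
    x1, x2 = effect_code[level]
    return mu + x1 * tau1 + x2 * tau2

# With any tau1 and tau2, the implied third effect is -(tau1 + tau2),
# so the constraint tau_1 + tau_2 + tau_3 = 0 holds automatically.
mu, tau1, tau2 = 15.0, -2.0, 0.5
tau3 = -(tau1 + tau2)
assert abs((mean_response(mu, tau1, tau2, 700) - mu) - tau3) < 1e-12
```

Because the third level is coded as (-1, -1), only two indicator variables are needed to represent the three treatment effects.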
&lt;br /&gt;
&lt;br /&gt;
===Treat Numerical Factors as Qualitative or Quantitative?===&lt;br /&gt;
&lt;br /&gt;
It can be seen from the equation given above that in an ANOVA model each factor is treated as a qualitative factor. In the present example the factor, lathe speed, is a quantitative factor with three levels. But the ANOVA model treats this factor as a qualitative factor with three levels. Therefore, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required to represent this factor. &lt;br /&gt;
&lt;br /&gt;
Note that in a regression model a variable can be treated either as a quantitative or a qualitative variable. The factor, lathe speed, would be used as a quantitative factor and represented with a single predictor variable in a regression model. For example, if a first order model were to be fitted to the data in the first table, then the regression model would take the form &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. If a second order regression model were to be fitted, the regression model would be &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}x_{i1}^{2}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. Notice that unlike these regression models, the regression version of the ANOVA model does not make any assumption about the nature of the relationship between the response and the factor being investigated.&lt;br /&gt;
&lt;br /&gt;
The choice of treating a particular factor as a quantitative or qualitative variable depends on the objective of the experimenter. In the case of the data of the first table, the objective of the experimenter is to compare the levels of the factor to see if a change in the levels leads to a significant change in the response. The objective is not to make predictions on the response for a given level of the factor. Therefore, the factor is treated as a qualitative factor in this case. If the objective of the experimenter were prediction or optimization, the nature of the relationship between the factor (lathe speed) and the response (surface finish) would be important, and the factor would be modeled as a quantitative factor to make accurate predictions.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;i&amp;gt;Y&amp;lt;/i&amp;gt; = &amp;lt;i&amp;gt;X&amp;amp;beta;&amp;lt;/i&amp;gt; + &amp;lt;i&amp;gt;&amp;amp;epsilon;&amp;lt;/i&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be expanded for the three treatments and four replicates of the data in the first table as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{11}}= &amp;amp; 6=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{11}}\text{    Level 1, Replicate 1} \\ &lt;br /&gt;
{{Y}_{21}}= &amp;amp; 13=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{21}}\text{  Level 2, Replicate 1} \\ &lt;br /&gt;
{{Y}_{31}}= &amp;amp; 23=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{31}}\text{  Level 3, Replicate 1} \\ &lt;br /&gt;
{{Y}_{12}}= &amp;amp; 13=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{12}}\text{  Level 1, Replicate 2} \\ &lt;br /&gt;
{{Y}_{22}}= &amp;amp; 16=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{22}}\text{  Level 2, Replicate 2} \\ &lt;br /&gt;
{{Y}_{32}}= &amp;amp; 20=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{32}}\text{  Level 3, Replicate 2} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; ... \\ &lt;br /&gt;
{{Y}_{34}}= &amp;amp; 18=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{34}}\text{  Level 3, Replicate 4}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The corresponding matrix notation is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{11}}  \\&lt;br /&gt;
   {{Y}_{21}}  \\&lt;br /&gt;
   {{Y}_{31}}  \\&lt;br /&gt;
   {{Y}_{12}}  \\&lt;br /&gt;
   {{Y}_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{34}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y= &amp;amp; X\beta +\epsilon  \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   23  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   16  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The matrices &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; are used in the calculation of the sum of squares in the next section. The data in the first table can be entered into the DOE folio as shown in the figure below.  &lt;br /&gt;
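As a sketch, the design matrix &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; for this balanced experiment can be assembled from the per-treatment coding rows shown above. The names below are hypothetical helper names for illustration, not part of any DOE folio API.

```python
# Assemble the 12x3 design matrix X for the regression version of the ANOVA
# model (3 treatments x 4 replicates, effect coding). The row order follows
# the text: replicate 1 of each level, then replicate 2, and so on.
rows_per_level = {1: [1, 1, 0], 2: [1, 0, 1], 3: [1, -1, -1]}

m = 4  # replicates per level
X = [rows_per_level[level] for _ in range(m) for level in (1, 2, 3)]

# For a balanced design the intercept column is all ones and each
# indicator column sums to zero.
assert len(X) == 12
assert sum(row[0] for row in X) == 12
assert sum(row[1] for row in X) == 0
assert sum(row[2] for row in X) == 0
```

The zero column sums are a direct consequence of the constraint on the treatment effects in a balanced design; they would not hold if the replicate counts differed across levels.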
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_1.png|center|671px|Single factor experiment design for the data in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Hypothesis Test in Single Factor Experiments===&lt;br /&gt;
&lt;br /&gt;
The hypothesis test in single factor experiments examines the ANOVA model to see if the response at any level of the investigated factor is significantly different from that at the other levels. If the response at all levels is found to be similar, then it can be concluded that the investigated factor does not affect the response. The test on the ANOVA model is carried out by checking to see if any of the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, are non-zero. The test is similar to the test of significance of regression mentioned in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] in the context of regression models. The hypothesis statements for this test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0 \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\tau }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test for &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is carried out using the following statistic:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{TR}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the ANOVA model and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. Note that in the case of ANOVA models we use the notation &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment mean square) for the model mean square and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment sum of squares) for the model sum of squares (instead of &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression mean square, and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression sum of squares, used in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This is done to indicate that the model under consideration is the ANOVA model and not the regression model. The calculations to obtain &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; are identical to the calculations to obtain &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt; explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The sum of squares to obtain the statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt; can be calculated as explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. Using the data in the first table, the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.1667 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.1667 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.1667  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 232.1667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the number of levels of the factor, &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; represents the replicates at each level, &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; represents the vector of the response values, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; represents the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; represents the matrix of ones. (For details on each of these terms, refer to [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]].)&lt;br /&gt;
Since two effect terms, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are used in the regression version of the ANOVA model, the number of degrees of freedom associated with the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, is two.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be obtained as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.9167 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.9167 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.9167  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 306.6667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;I\,\!&amp;lt;/math&amp;gt; is the identity matrix. Since there are 12 data points in all, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; is 11. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{T}})=11\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 306.6667-232.1667 \\ &lt;br /&gt;
= &amp;amp; 74.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 11-2 \\ &lt;br /&gt;
= &amp;amp; 9  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistic can now be calculated using the equation given in [[One_Factor_Designs#Hypothesis_Test_in_Single_Factor_Experiments|Hypothesis Test in Single Factor Experiments]] as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{f}_{0}}= &amp;amp; \frac{M{{S}_{TR}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{TR}}/dof(S{{S}_{TR}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{232.1667/2}{74.5/9} \\ &lt;br /&gt;
= &amp;amp; 14.0235  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
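The sums of squares and the statistic can be reproduced with a short Python sketch. The treatment sum of squares is computed with the equivalent group-mean formula rather than the hat-matrix form. The full data table is shown only as an image above; two observations at 600 rpm and one at 700 rpm are not quoted in the text, so the values marked "inferred" below were chosen to be consistent with the quoted group means and sums of squares (8.5, 13.25, 232.1667, 306.6667).

```python
# One-way ANOVA sums of squares and F statistic for the lathe-speed data.
# Values marked "inferred" are not quoted in the text; they are chosen so
# the group means and sums of squares match the worked results.
data = {
    500: [6, 13, 7, 8],
    600: [13, 16, 14, 10],   # 14 and 10 inferred
    700: [23, 20, 16, 18],   # 16 inferred
}

n_obs = sum(len(v) for v in data.values())
grand_mean = sum(sum(v) for v in data.values()) / n_obs

# Treatment (model) sum of squares: sum over levels of m_i*(level mean - grand mean)^2
ss_tr = sum(len(v) * (sum(v) / len(v) - grand_mean) ** 2 for v in data.values())
# Total sum of squares
ss_t = sum((y - grand_mean) ** 2 for v in data.values() for y in v)
# Error sum of squares by subtraction
ss_e = ss_t - ss_tr

dof_tr, dof_t = len(data) - 1, n_obs - 1   # 2 and 11
dof_e = dof_t - dof_tr                     # 9
f0 = (ss_tr / dof_tr) / (ss_e / dof_e)     # test statistic
```

Computing the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value would additionally require the CDF of the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 and 9 degrees of freedom, which is not in the Python standard library.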
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value for the statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 9 degrees of freedom in the denominator is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{f}_{0}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.9983 \\ &lt;br /&gt;
= &amp;amp; 0.0017  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that change in the lathe speed has a significant effect on the surface finish. The Weibull++ DOE folio displays these results in the ANOVA table, as shown in the figure below. The values of S and R-sq are the standard error and the coefficient of determination for the model, respectively. These values are explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] and indicate how well the model fits the data. The values in the figure below indicate that the fit of the ANOVA model is fair.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_2.png|center|748px|ANOVA table for the data in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the &#039;&#039;i&#039;&#039;&amp;lt;sup&amp;gt;th&amp;lt;/sup&amp;gt; Treatment Mean===&lt;br /&gt;
&lt;br /&gt;
The response at each treatment of a single factor experiment can be assumed to be a normal population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt; provided that the error terms can be assumed to be normally distributed. A point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is the average response at each treatment, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;. Since this is a sample average, the associated variance is &amp;lt;math&amp;gt;{{\sigma }^{2}}/{{m}_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;{{m}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of replicates at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment. Therefore, the confidence interval on &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution. Recall from [[Statistical_Background_on_DOE| Statistical Background on DOE]] (inference on population mean when variance is unknown) that: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}/{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{M{{S}_{E}}/{{m}_{i}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
has a &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with degrees of freedom &amp;lt;math&amp;gt;=dof(S{{S}_{E}})\,\!&amp;lt;/math&amp;gt;. Therefore, a 100 (&amp;lt;math&amp;gt;1-\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment mean, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, for the first treatment of the lathe speed we have:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}= &amp;amp; {{{\bar{y}}}_{1\cdot }} \\ &lt;br /&gt;
= &amp;amp; \frac{6+13+7+8}{4} \\ &lt;br /&gt;
= &amp;amp; 8.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the DOE folio, this value is displayed as the Estimated Mean for the first level, as shown in the Data Summary table in the figure below. The value displayed as the standard deviation for this level is simply the sample standard deviation calculated using the observations corresponding to this level.  The 90% confidence interval for this treatment is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{M{{S}_{E}}}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833\sqrt{\frac{(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833(1.44) \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 2.64  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}\,\!&amp;lt;/math&amp;gt; are 5.9 and 11.1, respectively. &lt;br /&gt;
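This interval can be checked with a short sketch. The critical value 1.833 for &amp;lt;math&amp;gt;{{t}_{0.05,9}}\,\!&amp;lt;/math&amp;gt; is taken from the text; variable names are illustrative.

```python
# 90% confidence interval on the mean surface finish at the first lathe
# speed, using the error mean square from the ANOVA calculation.
ms_e = 74.5 / 9                    # error mean square, dof = 9
m_i = 4                            # replicates at treatment 1
y_bar_1 = (6 + 13 + 7 + 8) / 4     # estimated mean at the first level
t_crit = 1.833                     # t_{0.05, 9}, as quoted in the text

half_width = t_crit * (ms_e / m_i) ** 0.5
lower, upper = y_bar_1 - half_width, y_bar_1 + half_width
```

Rounding to one decimal place recovers the limits of 5.9 and 11.1 quoted above.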
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_3.png|center|594px|Data Summary table for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the Difference in Two Treatment Means===&lt;br /&gt;
&lt;br /&gt;
The confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is used to compare two levels of the factor at a given significance level. If the confidence interval does not include the value of zero, it is concluded that the two levels of the factor are significantly different. The point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt;. The variance for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})= &amp;amp; var({{{\bar{y}}}_{i\cdot }})+var({{{\bar{y}}}_{j\cdot }}) \\ &lt;br /&gt;
= &amp;amp; {{\sigma }^{2}}/{{m}_{i}}+{{\sigma }^{2}}/{{m}_{j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For balanced designs all &amp;lt;math&amp;gt;{{m}_{i}}=m\,\!&amp;lt;/math&amp;gt;. Therefore:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})=2{{\sigma }^{2}}/m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The standard deviation for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; can be obtained by taking the square root of &amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})\,\!&amp;lt;/math&amp;gt; and is referred to as the pooled standard error:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for the difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2{{{\hat{\sigma }}}^{2}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2M{{S}_{E}}/m}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then a 100 (&amp;lt;math&amp;gt;1-\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, an estimate of the difference in the first and second treatment means of the lathe speed, &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}-{{{\hat{\mu }}}_{2}}= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }} \\ &lt;br /&gt;
= &amp;amp; 8.5-13.25 \\ &lt;br /&gt;
= &amp;amp; -4.75  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The pooled standard error for this difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2M{{S}_{E}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 2.0344  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To test &amp;lt;math&amp;gt;{{H}_{0}}:{{\mu }_{1}}-{{\mu }_{2}}=0\,\!&amp;lt;/math&amp;gt;, the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}-({{\mu }_{1}}-{{\mu }_{2}})}{\sqrt{2M{{S}_{E}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75-(0)}{\sqrt{\tfrac{2(74.5/9)}{4}}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75}{2.0344} \\ &lt;br /&gt;
= &amp;amp; -2.3348  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the DOE folio, the value of the statistic is displayed in the Mean Comparisons table under the column T Value as shown in the figure below. The 90% confidence interval on the difference &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833\sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833(2.0344) \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 3.729  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hence the 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-8.479\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-1.021\,\!&amp;lt;/math&amp;gt;. These values are displayed under the Low CI and High CI columns in the following figure. Since the confidence interval for this pair of means does not include zero, it can be concluded that these means are significantly different at 90% confidence. This conclusion can also be arrived at using the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value, noting that the hypothesis is two-sided. The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to the statistic &amp;lt;math&amp;gt;{{t}_{0}}=-2.3348\,\!&amp;lt;/math&amp;gt;, based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 9 degrees of freedom, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 2\times (1-P(T\le |{{t}_{0}}|)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-P(T\le 2.3348)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-0.9778) \\ &lt;br /&gt;
= &amp;amp; 0.0444  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, the means are significantly different at 90% confidence. Bounds on the differences between the other treatment pairs can be obtained in a similar manner, and it is concluded that all treatments are significantly different.&lt;br /&gt;
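The hand calculation above can be checked numerically. The following sketch (Python is used here purely for illustration; the treatment means 8.5 and 13.25, MS_E = 74.5/9 and the critical value t(0.05, 9) = 1.833 are all taken from the text) recomputes the pooled standard error, the t statistic and the 90% confidence limits:

```python
import math

# Values taken from the worked lathe-speed example above.
m = 4              # replicates per treatment
mse = 74.5 / 9     # mean square error, MS_E
t_crit = 1.833     # t_{0.05, 9}, as used in the text

diff = 8.5 - 13.25                 # ybar_1. - ybar_2.
se = math.sqrt(2 * mse / m)        # pooled standard error
t0 = diff / se                     # t statistic for H0: mu_1 - mu_2 = 0
ci = (diff - t_crit * se, diff + t_crit * se)  # 90% confidence interval

print(round(se, 4), round(t0, 4))        # 2.0344 -2.3348
print(round(ci[0], 3), round(ci[1], 3))  # -8.479 -1.021
```

Since the interval excludes zero, the same conclusion as in the text follows.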
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_4.png|center|772px|Mean Comparisons table for the data in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
&lt;br /&gt;
Plots of residuals, &amp;lt;math&amp;gt;{{e}_{ij}}\,\!&amp;lt;/math&amp;gt;, similar to the ones discussed in the previous chapters on regression, are used to ensure that the assumptions associated with the ANOVA model are not violated.  The ANOVA model assumes that the random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are normally and independently distributed with the same variance for each treatment. The normality assumption can be checked by obtaining a normal probability plot of the residuals. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equality of variance is checked by plotting residuals against the treatments and the treatment averages, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;  (also referred to as fitted values), and inspecting the spread in the residuals. If a pattern is seen in these plots, it indicates the need for a suitable transformation on the response that will ensure variance equality. Box-Cox transformations are discussed in the next section. To check for independence of the random error terms, residuals are plotted against time or run order to ensure that no pattern exists in these plots. Residual plots for the given example are shown in the following two figures. The plots show that the assumptions associated with the ANOVA model are not violated.   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_5.png|center|650px|Normal probability plot of residuals for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6_6.png|center|650px|Plot of residuals against fitted values for the single factor experiment in the first table.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Box-Cox Method===&lt;br /&gt;
&lt;br /&gt;
Transformations on the response may be used when residual plots for an experiment show a pattern. This indicates that the equality of variance does not hold for the residuals of the given model. The Box-Cox method can be used to automatically identify a suitable power transformation for the data based on the relation:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}^{*}}={{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is determined using the given data such that &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is minimized. The values of &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; are not used as is because of issues related to calculation or comparison of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values for different values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. For example, for &amp;lt;math&amp;gt;\lambda =0\,\!&amp;lt;/math&amp;gt; all response values will become 1. Therefore, the following relation is used to obtain &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y^{\lambda }=\left\{ &lt;br /&gt;
\begin{array}{cc}&lt;br /&gt;
\frac{y^{\lambda }-1}{\lambda \dot{y}^{\lambda -1}} &amp;amp; \lambda \neq 0 \\ &lt;br /&gt;
\dot{y}\ln y &amp;amp; \lambda =0&lt;br /&gt;
\end{array}&lt;br /&gt;
\right.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\dot{y}={{\ln }^{-1}}\left[ (1/n)\sum \ln y \right]\,\!&amp;lt;/math&amp;gt; is the geometric mean of the response values.&lt;br /&gt;
Once all &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; values are obtained for a value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, the corresponding &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for these values is obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;. The process is repeated for a number of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values to obtain a plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. The value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is then selected as the required transformation for the given data. The DOE folio plots &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values because the range of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values is large; otherwise, all values could not be displayed on the same plot. The range of search for the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value in the software is from &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt;, because larger values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are usually not meaningful. The DOE folio also displays a recommended transformation based on the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value obtained, as per the second table.&lt;br /&gt;
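The normalized transform defined above can be sketched as a small function (the function name and sample values here are illustrative, not from the text; the geometric-mean scaling is what makes SS_E values comparable across lambda):

```python
import math

def boxcox_normalized(y, lam):
    """Normalized Box-Cox transform: scale by the geometric mean so that
    SS_E values computed for different lambda values are comparable."""
    # geometric mean, i.e., ln^-1 of the average of the logs
    gdot = math.exp(sum(math.log(v) for v in y) / len(y))
    if lam == 0:
        return [gdot * math.log(v) for v in y]
    return [(v**lam - 1) / (lam * gdot**(lam - 1)) for v in y]

# Hypothetical response values (the full table is not reproduced here).
y = [6, 13, 23]
print(boxcox_normalized(y, 1))  # lambda = 1 only shifts the data by 1
```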
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.2.png|center|433px|Recommended Box-Cox power transformations.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Confidence intervals on the selected &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values are also available. Let &amp;lt;math&amp;gt;S{{S}_{E}}(\lambda )\,\!&amp;lt;/math&amp;gt; be the value of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; corresponding to the selected value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then, to calculate the 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence intervals on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, we need to calculate &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}^{*}}=S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The required limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are the two values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the value &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; (on the plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;). If the limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; do not include the value of one, then the transformation is applicable for the given data.&lt;br /&gt;
Note that the power transformations are not defined for response values that are negative or zero. The DOE folio deals with negative and zero response values using the following equations, which add a suitable quantity to all of the response values whenever a zero or negative response value is encountered. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 y(i)&amp;amp; =  y(i)+\left| {{y}_{\min }} \right|\times 1.1 &amp;amp; \text{Negative Response} \\ &lt;br /&gt;
 y(i)&amp;amp; =  y(i)+1                                      &amp;amp; \text{Zero Response}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here &amp;lt;math&amp;gt;{{y}_{\min }}\,\!&amp;lt;/math&amp;gt; represents the minimum response value and &amp;lt;math&amp;gt;\left| {{y}_{\min }} \right|\,\!&amp;lt;/math&amp;gt; represents the absolute value of the minimum response. &lt;br /&gt;
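A minimal sketch of this shifting rule (the function name and sample values are hypothetical; only the two equations quoted above are implemented):

```python
def shift_responses(y):
    """Shift responses so the Box-Cox power family is defined,
    following the negative/zero rules quoted above."""
    y_min = min(y)
    if y_min < 0:
        # negative response: add |y_min| * 1.1 to every value
        return [v + abs(y_min) * 1.1 for v in y]
    if y_min == 0:
        # zero response: add 1 to every value
        return [v + 1 for v in y]
    return list(y)  # all positive: leave unchanged

print(shift_responses([-2, 0, 5]))  # minimum becomes 0.2 (= 2 * 0.1)
```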
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
To illustrate the Box-Cox method, consider the experiment given in the first table. Transformed response values for various values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; can be calculated using the equation for &amp;lt;math&amp;gt;{Y}^{\lambda}\,\!&amp;lt;/math&amp;gt; given in [[One_Factor_Designs#Box-Cox_Method|Box-Cox Method]]. Knowing the hat matrix, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values corresponding to each of these &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values can easily be obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values calculated for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values between &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt; for the given data are shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \lambda  &amp;amp; {} &amp;amp; S{{S}_{E}} &amp;amp; \ln S{{S}_{E}}  \\&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   -5 &amp;amp; {} &amp;amp; 5947.8 &amp;amp; 8.6908  \\&lt;br /&gt;
   -4 &amp;amp; {} &amp;amp; 1946.4 &amp;amp; 7.5737  \\&lt;br /&gt;
   -3 &amp;amp; {} &amp;amp; 696.5 &amp;amp; 6.5461  \\&lt;br /&gt;
   -2 &amp;amp; {} &amp;amp; 282.2 &amp;amp; 5.6425  \\&lt;br /&gt;
   -1 &amp;amp; {} &amp;amp; 135.8 &amp;amp; 4.9114  \\&lt;br /&gt;
   0 &amp;amp; {} &amp;amp; 83.9 &amp;amp; 4.4299  \\&lt;br /&gt;
   1 &amp;amp; {} &amp;amp; 74.5 &amp;amp; 4.3108  \\&lt;br /&gt;
   2 &amp;amp; {} &amp;amp; 101.0 &amp;amp; 4.6154  \\&lt;br /&gt;
   3 &amp;amp; {} &amp;amp; 190.4 &amp;amp; 5.2491  \\&lt;br /&gt;
   4 &amp;amp; {} &amp;amp; 429.5 &amp;amp; 6.0627  \\&lt;br /&gt;
   5 &amp;amp; {} &amp;amp; 1057.6 &amp;amp; 6.9638  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A plot of &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for various &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values, as obtained from the DOE folio, is shown in the following figure. The value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; that gives the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is identified as 0.7841. The &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; value corresponding to this value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is 73.74. A 90% confidence interval on this &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value is calculated as follows. &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; can be obtained as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}^{*}}= &amp;amp; S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{t_{0.05,9}^{2}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{{{1.833}^{2}}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 101.27  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\ln S{{S}^{*}}=4.6178\,\!&amp;lt;/math&amp;gt;. The &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values corresponding to this value on the following figure, &amp;lt;math&amp;gt;-0.4689\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0054\,\!&amp;lt;/math&amp;gt;, are the 90% confidence limits on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Since these limits include the value of 1, a transformation is not required for the data in the first table.&lt;br /&gt;
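The SS* arithmetic can be verified directly (Python used only as a calculator; SS_E(lambda) = 73.74, dof = 9 and t(0.05, 9) = 1.833 are taken from the text):

```python
import math

sse_lam = 73.74   # SS_E at the best lambda (0.7841), from the text
t_crit = 1.833    # t_{0.05, 9}
dof = 9           # degrees of freedom of SS_E

# SS* = SS_E(lambda) * (1 + t^2 / dof), as defined above
ss_star = sse_lam * (1 + t_crit**2 / dof)

print(round(ss_star, 2))            # 101.27
print(round(math.log(ss_star), 4))  # 4.6178
```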
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:doe6_7.png|center|650px|Box-Cox power transformation plot for the data in the first table.|link=]]&amp;lt;/center&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=ANOVA_for_Designed_Experiments&amp;diff=65261</id>
		<title>ANOVA for Designed Experiments</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=ANOVA_for_Designed_Experiments&amp;diff=65261"/>
		<updated>2017-08-26T03:29:28Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Box-Cox Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:Doebook|5}}&lt;br /&gt;
In [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], methods were presented to model the relationship between a response and the associated factors (referred to as predictor variables in the context of regression) based on an observed data set. Such studies, where observed values of the response are used to establish an association between the response and the factors, are called &#039;&#039;observational studies&#039;&#039;. However, in the case of observational studies, it is difficult to establish a cause-and-effect relationship between the observed factors and the response. This is because a number of alternative justifications can be used to explain the observed change in the response values. For example, a regression model fitted to data on the population of cities and road accidents might show a positive regression relation. However, this relation does not imply that an increase in a city&#039;s population causes an increase in road accidents. It could be that a number of other factors such as road conditions, traffic control and the degree to which the residents of the city follow the traffic rules affect the number of road accidents in the city and the increase in the number of accidents seen in the study is caused by these factors. Since the observational study does not take the effect of these factors into account, the assumption that an increase in a city&#039;s population will lead to an increase in road accidents is not a valid one. For example, the population of a city may increase but road accidents in the city may decrease because of better traffic control. To establish a cause-and-effect relationship, the study should be conducted in such a way that the effect of all other factors is excluded from the investigation.&lt;br /&gt;
&lt;br /&gt;
The studies that enable the establishment of a cause-and-effect relationship are called &#039;&#039;experiments&#039;&#039;. In experiments the response is investigated by studying only the effect of the factor(s) of interest and excluding all other effects that may provide alternative justifications to the observed change in response. This is done in two ways. First, the levels of the factors to be investigated are carefully selected and then strictly controlled during the execution of the experiment. The aspect of selecting what factor levels should be investigated in the experiment is called the &#039;&#039;design&#039;&#039; of the experiment. The second distinguishing feature of experiments is that observations in an experiment are recorded in a random order. By doing this, it is hoped that the effect of all other factors not being investigated in the experiment will get cancelled out so that the change in the response is the result of only the investigated factors. Using these two techniques, experiments tend to ensure that alternative justifications to observed changes in the response are voided, thereby enabling the establishment of a cause-and-effect relationship between the response and the investigated factors.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Randomization&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The aspect of recording observations in an experiment in a random order is referred to as &#039;&#039;randomization&#039;&#039;. Specifically, randomization is the process of assigning the various levels of the investigated factors to the experimental units in a random fashion.  An experiment is said to be &#039;&#039;completely randomized&#039;&#039; if the probability of an experimental unit being subjected to any level of a factor is equal for all the experimental units. The importance of randomization can be illustrated using an example. Consider an experiment where the effect of the speed of a lathe machine on the surface finish of a product is being investigated. In order to save time, the experimenter records surface finish values by running the lathe machine continuously and recording observations in the order of increasing speeds. The analysis of the experiment data shows that an increase in lathe speeds causes a decrease in the quality of surface finish. However, the results of the experiment are disputed by the lathe operator, who claims that he has been able to obtain better surface finish quality in the products by operating the lathe machine at higher speeds. It is later found that the faulty results were caused by overheating of the tool used in the machine. Since the lathe was run continuously in the order of increasing speeds, the observations were recorded in the order of increasing tool temperatures. This problem could have been avoided if the experimenter had randomized the experiment and taken readings at the various lathe speeds in a random fashion. This would require the experimenter to stop and restart the machine at every observation, thereby keeping the temperature of the tool within a reasonable range. Randomization would have ensured that the effect of heating of the machine tool is not included in the experiment.&lt;br /&gt;
&lt;br /&gt;
==Analysis of Single Factor Experiments==&lt;br /&gt;
&lt;br /&gt;
As explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], the analysis of observational studies involves the use of regression models. The analysis of experimental studies involves the use of analysis of variance (ANOVA) models. For a comparison of the two models see [[ANOVA_for_Designed_Experiments#Fitting_ANOVA_Models|Fitting ANOVA Models]]. In single factor experiments, ANOVA models are used to compare the mean response values at different levels of the factor. Each level of the factor is investigated to see if the response is significantly different from the response at other levels of the factor.  The analysis of single factor experiments is often referred to as &#039;&#039;one-way ANOVA&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To illustrate the use of ANOVA models in the analysis of experiments, consider a single factor experiment where the analyst wants to see if the surface finish of certain parts is affected by the speed of a lathe machine. Data is collected for three speeds (or three treatments). Each treatment is replicated four times. Therefore, this experiment design is balanced. Surface finish values recorded using randomization are shown in the following table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.1.png|thumb|center|400px|Surface finish values for three speeds of a lathe machine.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be stated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}={{\mu }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model assumes that the response at each factor level, &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, is the sum of the mean response at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, and a random error term, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. The subscript &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; denotes the factor level while the subscript &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; denotes the replicate. If there are &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of the factor and &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates at each level then &amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j=1,2,...,m\,\!&amp;lt;/math&amp;gt;. The random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are assumed to be normally and independently distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. Therefore, the response at each level can be thought of as a normally distributed population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and constant variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. The equation given above is referred to as the &#039;&#039;means model&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The means model can also be written using &amp;lt;math&amp;gt;{{\mu }_{i}}=\mu +{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the effect due to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Such an ANOVA model is called the &#039;&#039;effects model&#039;&#039;.  In the effects model, the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, represent the deviations from the overall mean, &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;. Therefore, the following constraint exists on the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;s: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Fitting ANOVA Models===&lt;br /&gt;
&lt;br /&gt;
To fit ANOVA models and carry out hypothesis testing in single factor experiments, it is convenient to express the effects model in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt; (that was used for multiple linear regression models in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This can be done as shown next. Using the effects model, the ANOVA model for the single factor experiment in the first table can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment effect. There are three treatments in the first table (500, 600 and 700). Therefore, there are three treatment effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{3}}\,\!&amp;lt;/math&amp;gt;. The following constraint exists for these effects:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   } {{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the first treatment, the ANOVA model for the single factor experiment in the above table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{1j}}=\mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+0\cdot {{\tau }_{3}}+{{\epsilon }_{1j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;, the model for the first treatment is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}-0\cdot ({{\tau }_{1}}+{{\tau }_{2}})+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{or   }{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Models for the second and third treatments can be obtained in a similar way. The models for the three treatments are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{First Treatment}: &amp;amp; {{Y}_{1j}}=1\cdot \mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{Second Treatment}: &amp;amp; {{Y}_{2j}}=1\cdot \mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{2j}} \\ &lt;br /&gt;
\text{Third Treatment}: &amp;amp; {{Y}_{3j}}=1\cdot \mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{3j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The coefficients of the treatment effects &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; can be expressed using two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the indicator variables &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, the ANOVA model for the data in the first table now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{x}_{1}}\cdot {{\tau }_{1}}+{{x}_{2}}\cdot {{\tau }_{2}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation can be rewritten by including subscripts &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; (for the level of the factor) and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; (for the replicate number) as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{x}_{i1}}\cdot {{\tau }_{1}}+{{x}_{i2}}\cdot {{\tau }_{2}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation given above represents the &amp;quot;regression version&amp;quot; of the ANOVA model.&lt;br /&gt;
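The effect coding above can be sketched numerically. In a balanced design, the least squares estimates of the regression version are the overall mean and the deviations of the treatment means from it, so the fitted values reproduce the treatment means. The data below are partly hypothetical: the values not quoted in this chapter were chosen so that the first two treatment means equal 8.5 and 13.25, as in the text.

```python
# Effect coding for three treatments, as in the regression version above.
# Replicates not quoted in the text are hypothetical.
groups = {1: [6, 13, 7, 8], 2: [13, 16, 12, 12], 3: [23, 20, 19, 18]}
codes = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}  # (x1, x2) for each treatment

all_y = [v for ys in groups.values() for v in ys]
mu_hat = sum(all_y) / len(all_y)  # overall mean (estimate of mu)
# tau_i estimate: deviation of the i-th treatment mean from the overall mean
tau = {i: sum(ys) / len(ys) - mu_hat for i, ys in groups.items()}

# Fitted value mu + x1*tau1 + x2*tau2 reproduces each treatment mean,
# and the tau_i satisfy the constraint tau1 + tau2 + tau3 = 0.
for i, (x1, x2) in codes.items():
    fitted = mu_hat + x1 * tau[1] + x2 * tau[2]
    print(i, round(fitted, 2))
```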
&lt;br /&gt;
&lt;br /&gt;
===Treat Numerical Factors as Qualitative or Quantitative?===&lt;br /&gt;
&lt;br /&gt;
It can be seen from the equation given above that in an ANOVA model each factor is treated as a qualitative factor. In the present example the factor, lathe speed, is a quantitative factor with three levels. But the ANOVA model treats this factor as a qualitative factor with three levels. Therefore, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required to represent this factor. &lt;br /&gt;
&lt;br /&gt;
Note that in a regression model a variable can be treated as either a quantitative or a qualitative variable. The factor, lathe speed, would be used as a quantitative factor and represented with a single predictor variable in a regression model. For example, if a first order model were to be fitted to the data in the first table, then the regression model would take the form &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. If a second order regression model were to be fitted, the regression model would be &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}x_{i1}^{2}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. Notice that unlike these regression models, the regression version of the ANOVA model does not make any assumption about the nature of the relationship between the response and the factor being investigated.&lt;br /&gt;
&lt;br /&gt;
The choice of treating a particular factor as a quantitative or qualitative variable depends on the objective of the experimenter. In the case of the data of the first table, the objective of the experimenter is to compare the levels of the factor to see if a change in the levels leads to a significant change in the response.  The objective is not to make predictions on the response for a given level of the factor. Therefore, the factor is treated as a qualitative factor in this case. If the objective of the experimenter were prediction or optimization, the experimenter would focus on the nature of the relationship between the factor (lathe speed) and the response (surface finish), and the factor would be modeled as a quantitative factor to make accurate predictions.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model in the Form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be expanded for the three treatments and four replicates of the data in the first table as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{11}}= &amp;amp; 6=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{11}}\text{    Level 1, Replicate 1} \\ &lt;br /&gt;
{{Y}_{21}}= &amp;amp; 13=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{21}}\text{  Level 2, Replicate 1} \\ &lt;br /&gt;
{{Y}_{31}}= &amp;amp; 23=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{31}}\text{  Level 3, Replicate 1} \\ &lt;br /&gt;
{{Y}_{12}}= &amp;amp; 13=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{12}}\text{  Level 1, Replicate 2} \\ &lt;br /&gt;
{{Y}_{22}}= &amp;amp; 16=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{22}}\text{  Level 2, Replicate 2} \\ &lt;br /&gt;
{{Y}_{32}}= &amp;amp; 20=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{32}}\text{  Level 3, Replicate 2} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; ... \\ &lt;br /&gt;
{{Y}_{34}}= &amp;amp; 18=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{34}}\text{  Level 3, Replicate 4}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The corresponding matrix notation is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{11}}  \\&lt;br /&gt;
   {{Y}_{21}}  \\&lt;br /&gt;
   {{Y}_{31}}  \\&lt;br /&gt;
   {{Y}_{12}}  \\&lt;br /&gt;
   {{Y}_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{34}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y= &amp;amp; X\beta +\epsilon  \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   23  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   16  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The matrices &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; are used in the calculation of the sum of squares in the next section. The data in the first table can be entered into DOE++ as shown in the figure below.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.1.png|thumb|center|550px|Single factor experiment design for the data in the first table.]]&lt;br /&gt;
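The matrix formulation above can be sketched in Python with NumPy. The full data table is not shown in this excerpt; the response values below are reconstructed to be consistent with the treatment means (8.5 and 13.25) and the sums of squares quoted later in this section.

```python
import numpy as np

# Response values in the run order shown above (replicate-major: levels
# 1, 2, 3 for replicate 1, then replicate 2, and so on). Values are
# reconstructed to match the statistics quoted in the text.
y = np.array([6, 13, 23, 13, 16, 20, 7, 14, 16, 8, 10, 18], dtype=float)

# Effect-coded design matrix X: a column of ones for mu, then one column
# per treatment effect (tau_1, tau_2); level 3 is coded as (-1, -1).
coding = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
X = np.array([[1, *coding[lv]] for lv in [1, 2, 3] * 4], dtype=float)

# Least-squares estimates of beta = (mu, tau_1, tau_2):
# mu-hat is the grand mean, and mu-hat + tau_i-hat recovers each treatment mean.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)
```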
&lt;br /&gt;
===Hypothesis Test in Single Factor Experiments===&lt;br /&gt;
&lt;br /&gt;
The hypothesis test in single factor experiments examines the ANOVA model to see if the response at any level of the investigated factor is significantly different from that at the other levels. If this is not the case and the response at all levels is not significantly different, then it can be concluded that the investigated factor does not affect the response. The test on the ANOVA model is carried out by checking to see if any of the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, are non-zero. The test is similar to the test of significance of regression mentioned in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] in the context of regression models. The hypotheses statements for this test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0 \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\tau }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test for &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is carried out using the following statistic:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{TR}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the ANOVA model and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. Note that in the case of ANOVA models we use the notation &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment mean square) for the model mean square and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment sum of squares) for the model sum of squares (instead of &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression mean square, and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression sum of squares, used in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This is done to indicate that the model under consideration is the ANOVA model and not the regression model. The calculations to obtain &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; are identical to the calculations to obtain &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt; explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The sum of squares to obtain the statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt; can be calculated as explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. Using the data in the first table, the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.1667 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.1667 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.1667  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 232.1667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the number of levels of the factor, &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; represents the number of replicates at each level, &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; represents the vector of the response values, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; represents the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; represents the matrix of ones. (For details on each of these terms, refer to [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]].)&lt;br /&gt;
Since two effect terms, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are used in the regression version of the ANOVA model, the degrees of freedom associated with the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, is two.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
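This treatment sum of squares can be checked numerically. The sketch below uses the same effect-coded matrices as before (response values reconstructed to be consistent with the statistics quoted in the text) and evaluates the quadratic form directly.

```python
import numpy as np

# y and effect-coded X as in the earlier sketch (3 levels x 4 replicates,
# replicate-major order; data reconstructed to match the quoted statistics)
y = np.array([6, 13, 23, 13, 16, 20, 7, 14, 16, 8, 10, 18], dtype=float)
coding = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
X = np.array([[1, *coding[lv]] for lv in [1, 2, 3] * 4], dtype=float)

n = len(y)                                  # n_a * m = 3 * 4 = 12
H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
J = np.ones((n, n))                         # matrix of ones

ss_tr = y @ (H - J / n) @ y                 # treatment sum of squares
print(round(ss_tr, 4))                      # 232.1667
```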
&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be obtained as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.9167 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.9167 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.9167  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 306.6667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;I\,\!&amp;lt;/math&amp;gt; is the identity matrix. Since there are 12 data points in all, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; is 11. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{T}})=11\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 306.6667-232.1667 \\ &lt;br /&gt;
= &amp;amp; 74.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 11-2 \\ &lt;br /&gt;
= &amp;amp; 9  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
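The total and error sums of squares can be verified the same way. As before, this is a sketch using response values reconstructed to be consistent with the statistics quoted in the text.

```python
import numpy as np

# y and effect-coded X as in the earlier sketches
y = np.array([6, 13, 23, 13, 16, 20, 7, 14, 16, 8, 10, 18], dtype=float)
coding = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
X = np.array([[1, *coding[lv]] for lv in [1, 2, 3] * 4], dtype=float)

n = len(y)
H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
J = np.ones((n, n))                         # matrix of ones

ss_t = y @ (np.eye(n) - J / n) @ y          # total sum of squares, dof = 11
ss_tr = y @ (H - J / n) @ y                 # treatment sum of squares, dof = 2
ss_e = ss_t - ss_tr                         # error sum of squares, dof = 9
print(round(ss_t, 4), round(ss_e, 4))       # 306.6667 74.5
```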
&lt;br /&gt;
&lt;br /&gt;
The test statistic can now be calculated using the equation given in [[ANOVA_for_Designed_Experiments#Hypothesis_Test_in_Single_Factor_Experiments|Hypothesis Test in Single Factor Experiments]] as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{f}_{0}}= &amp;amp; \frac{M{{S}_{TR}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{TR}}/dof(S{{S}_{TR}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{232.1667/2}{74.5/9} \\ &lt;br /&gt;
= &amp;amp; 14.0235  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value for the statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 9 degrees of freedom in the denominator is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{f}_{0}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.9983 \\ &lt;br /&gt;
= &amp;amp; 0.0017  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
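The statistic and its &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value can be reproduced with SciPy, using only the sums of squares quoted above. The cross-check with `scipy.stats.f_oneway` uses the reconstructed group data described in the earlier sketches.

```python
from scipy import stats

ss_tr, ss_e = 232.1667, 74.5                # sums of squares from the text
f0 = (ss_tr / 2) / (ss_e / 9)               # MS_TR / MS_E
p_value = stats.f.sf(f0, 2, 9)              # right-tail area of F(2, 9)
print(round(f0, 4), round(p_value, 4))      # 14.0235 0.0017

# Cross-check with SciPy's one-way ANOVA on the (reconstructed) groups
groups = [[6, 13, 7, 8], [13, 16, 14, 10], [23, 20, 16, 18]]
f_chk, p_chk = stats.f_oneway(*groups)
```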
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that a change in the lathe speed has a significant effect on the surface finish. DOE++ displays these results in the ANOVA table, as shown in the figure below. The values of S and R-sq are the standard error and the coefficient of determination for the model, respectively. These values are explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] and indicate how well the model fits the data. The values in the figure below indicate that the fit of the ANOVA model is fair.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.2.png|thumb|center|650px|ANOVA table for the data in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the &#039;&#039;i&#039;&#039;&amp;lt;sup&amp;gt;th&amp;lt;/sup&amp;gt; Treatment Mean===&lt;br /&gt;
&lt;br /&gt;
The response at each treatment of a single factor experiment can be assumed to be a normal population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt; provided that the error terms can be assumed to be normally distributed. A point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is the average response at each treatment, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;. Since this is a sample average, the associated variance is &amp;lt;math&amp;gt;{{\sigma }^{2}}/{{m}_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;{{m}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of replicates at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment. Therefore, the confidence interval on &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution. Recall from [[Statistical_Background_on_DOE| Statistical Background on DOE]] (inference on population mean when variance is unknown) that: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}/{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{M{{S}_{E}}/{{m}_{i}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
has a &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with degrees of freedom &amp;lt;math&amp;gt;=dof(S{{S}_{E}})\,\!&amp;lt;/math&amp;gt;. Therefore, a 100 (&amp;lt;math&amp;gt;1-\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment mean, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, for the first treatment of the lathe speed we have:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}= &amp;amp; {{{\bar{y}}}_{1\cdot }} \\ &lt;br /&gt;
= &amp;amp; \frac{6+13+7+8}{4} \\ &lt;br /&gt;
= &amp;amp; 8.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In DOE++, this value is displayed as the Estimated Mean for the first level, as shown in the Data Summary table in the figure below. The value displayed as the standard deviation for this level is simply the sample standard deviation calculated using the observations corresponding to this level.  The 90% confidence interval for this treatment is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{M{{S}_{E}}}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833\sqrt{\frac{(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833(1.44) \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 2.64  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}\,\!&amp;lt;/math&amp;gt; are 5.9 and 11.1, respectively. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.3.png|thumb|center|650px|Data Summary table for the single factor experiment in the first table.]]&lt;br /&gt;
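This interval can be reproduced with SciPy. The sketch uses only quantities quoted in the text: the level 1 observations (6, 13, 7, 8), the error sum of squares (74.5) and its 9 degrees of freedom.

```python
import numpy as np
from scipy import stats

level1 = np.array([6, 13, 7, 8], dtype=float)   # replicates at level 1
ms_e = 74.5 / 9                                  # error mean square, dof = 9
mean1 = level1.mean()                            # 8.5

t_crit = stats.t.ppf(1 - 0.10 / 2, 9)            # ~1.833 for a 90% interval
half_width = t_crit * np.sqrt(ms_e / len(level1))
lo, hi = mean1 - half_width, mean1 + half_width
print(round(lo, 1), round(hi, 1))                # 5.9 11.1
```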
&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the Difference in Two Treatment Means===&lt;br /&gt;
&lt;br /&gt;
The confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is used to compare two levels of the factor at a given significance level. If the confidence interval does not include the value of zero, it is concluded that the two levels of the factor are significantly different. The point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt;. The variance for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})= &amp;amp; var({{{\bar{y}}}_{i\cdot }})+var({{{\bar{y}}}_{j\cdot }}) \\ &lt;br /&gt;
= &amp;amp; {{\sigma }^{2}}/{{m}_{i}}+{{\sigma }^{2}}/{{m}_{j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For balanced designs all &amp;lt;math&amp;gt;{{m}_{i}}=m\,\!&amp;lt;/math&amp;gt;. Therefore:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})=2{{\sigma }^{2}}/m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The standard deviation for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; can be obtained by taking the square root of &amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})\,\!&amp;lt;/math&amp;gt; and is referred to as the pooled standard error:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for the difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2{{{\hat{\sigma }}}^{2}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2M{{S}_{E}}/m}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then a 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, an estimate of the difference in the first and second treatment means of the lathe speed, &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}-{{{\hat{\mu }}}_{2}}= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }} \\ &lt;br /&gt;
= &amp;amp; 8.5-13.25 \\ &lt;br /&gt;
= &amp;amp; -4.75  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The pooled standard error for this difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2M{{S}_{E}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 2.0344  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To test &amp;lt;math&amp;gt;{{H}_{0}}:{{\mu }_{1}}-{{\mu }_{2}}=0\,\!&amp;lt;/math&amp;gt;, the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}-({{\mu }_{1}}-{{\mu }_{2}})}{\sqrt{2M{{S}_{E}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75-(0)}{\sqrt{\tfrac{2(74.5/9)}{4}}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75}{2.0344} \\ &lt;br /&gt;
= &amp;amp; -2.3348  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In DOE++, the value of the statistic is displayed in the Mean Comparisons table under the column T Value as shown in the figure below. The 90% confidence interval on the difference &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833\sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833(2.0344) \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 3.729  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hence the 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-8.479\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-1.021\,\!&amp;lt;/math&amp;gt;, respectively. These values are displayed under the Low CI and High CI columns in the following figure. Since the confidence interval for this pair of means does not include zero, it can be concluded that these means are significantly different at 90% confidence. This conclusion can also be arrived at using the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value, noting that the hypothesis is two-sided. The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to the statistic &amp;lt;math&amp;gt;{{t}_{0}}=-2.3348\,\!&amp;lt;/math&amp;gt;, based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 9 degrees of freedom, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 2\times (1-P(T\le |{{t}_{0}}|)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-P(T\le 2.3348)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-0.9778) \\ &lt;br /&gt;
= &amp;amp; 0.0444  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, the means are significantly different at 90% confidence. Bounds on the difference between other treatment pairs can be obtained in a similar manner and it is concluded that all treatments are significantly different.&lt;br /&gt;
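The whole comparison of the first two treatment means can be reproduced with SciPy, using only the treatment means (8.5 and 13.25), the error sum of squares (74.5, with 9 degrees of freedom) and the number of replicates per level.

```python
import numpy as np
from scipy import stats

m = 4                                            # replicates per level
ms_e = 74.5 / 9                                  # error mean square, dof = 9
diff = 8.5 - 13.25                               # ybar_1. - ybar_2. = -4.75

pooled_se = np.sqrt(2 * ms_e / m)                # pooled standard error
t0 = diff / pooled_se                            # t statistic for H0: mu1 = mu2
t_crit = stats.t.ppf(1 - 0.10 / 2, 9)            # ~1.833 for a 90% interval
lo, hi = diff - t_crit * pooled_se, diff + t_crit * pooled_se
p_value = 2 * stats.t.sf(abs(t0), 9)             # two-sided p value
print(round(t0, 4), round(lo, 3), round(hi, 3), round(p_value, 3))
```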
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
&lt;br /&gt;
Plots of residuals, &amp;lt;math&amp;gt;{{e}_{ij}}\,\!&amp;lt;/math&amp;gt;, similar to the ones discussed in the previous chapters on regression, are used to ensure that the assumptions associated with the ANOVA model are not violated.  The ANOVA model assumes that the random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are normally and independently distributed with the same variance for each treatment. The normality assumption can be checked by obtaining a normal probability plot of the residuals. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.4.png|thumb|center|644px|Mean Comparisons table for the data in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equality of variance is checked by plotting residuals against the treatments and the treatment averages, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt; (also referred to as fitted values), and inspecting the spread in the residuals. If a pattern is seen in these plots, then this indicates the need to use a suitable transformation on the response that will ensure variance equality. Box-Cox transformations are discussed in the next section. To check for independence of the random error terms, residuals are plotted against time or run order to ensure that no pattern exists in these plots. Residual plots for the given example are shown in the following two figures. The plots show that the assumptions associated with the ANOVA model are not violated.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.5.png|thumb|center|550px|Normal probability plot of residuals for the single factor experiment in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.6.png|thumb|center|550px|Plot of residuals against fitted values for the single factor experiment in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Box-Cox Method===&lt;br /&gt;
&lt;br /&gt;
Transformations on the response may be used when residual plots for an experiment show a pattern. This indicates that the equality of variance does not hold for the residuals of the given model. The Box-Cox method can be used to automatically identify a suitable power transformation for the data based on the following relationship:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}^{*}}={{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is determined using the given data such that &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is minimized. The raw power-transformed values &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; are not used as is because the &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values obtained for different values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; would not be directly comparable (for example, for &amp;lt;math&amp;gt;\lambda =0\,\!&amp;lt;/math&amp;gt; all response values would become 1). Therefore, the following scaled relationship is used to obtain &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y^{\lambda }=\left\{ &lt;br /&gt;
\begin{array}{cc}&lt;br /&gt;
\frac{y^{\lambda }-1}{\lambda \dot{y}^{\lambda -1}} &amp;amp; \lambda \neq 0 \\ &lt;br /&gt;
\dot{y}\ln y &amp;amp; \lambda =0&lt;br /&gt;
\end{array}&lt;br /&gt;
\right.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\dot{y}=\exp \left[ \frac{1}{n}\sum\limits_{i=1}^{n}\ln y_{i}\right]\,\!&amp;lt;/math&amp;gt; is the geometric mean of the response values.&lt;br /&gt;
Once all &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; values are obtained for a value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, the corresponding &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for these values is obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;. The process is repeated for a number of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values to obtain a plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then the value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is selected as the required transformation for the given data. DOE++ plots &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values because the range of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values is large and, without the logarithmic scale, all values could not be displayed on the same plot. The range of search for the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value in the software is from &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt;, because larger values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are usually not meaningful. DOE++ also displays a recommended transformation based on the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value obtained as shown in the table below.&lt;br /&gt;
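The grid search just described can be sketched directly, using the same effect-coded model as the earlier sketches (response values reconstructed to be consistent with the statistics quoted in the text). Note that at &amp;lt;math&amp;gt;\lambda =1\,\!&amp;lt;/math&amp;gt; the transform is just a shift of the response, so its &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; equals that of the untransformed model (74.5 here).

```python
import numpy as np

# y and effect-coded X as in the earlier sketches
y = np.array([6, 13, 23, 13, 16, 20, 7, 14, 16, 8, 10, 18], dtype=float)
coding = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
X = np.array([[1, *coding[lv]] for lv in [1, 2, 3] * 4], dtype=float)

n = len(y)
H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
y_dot = np.exp(np.mean(np.log(y)))          # geometric mean of the response

def ss_e(lam):
    """Error sum of squares after the scaled Box-Cox transform."""
    if abs(lam) < 1e-9:
        y_lam = y_dot * np.log(y)
    else:
        y_lam = (y**lam - 1) / (lam * y_dot**(lam - 1))
    return y_lam @ (np.eye(n) - H) @ y_lam

# Search the same range DOE++ uses, -5 to 5, on a coarse grid
lambdas = np.linspace(-5, 5, 201)
best = min(lambdas, key=ss_e)
print(round(ss_e(1.0), 4))                  # 74.5
print(round(float(best), 2))                # best lambda on this grid
```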
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table border=&amp;quot;2&amp;quot; cellpadding=&amp;quot;5&amp;quot; align=&amp;quot;center&amp;quot; &amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;font size=&amp;quot;3&amp;quot;&amp;gt;Best Lambda&amp;lt;/font&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;font size=&amp;quot;3&amp;quot;&amp;gt;Recommended Transformation&amp;lt;/font&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;font size=&amp;quot;3&amp;quot;&amp;gt;Equation&amp;lt;/font&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr align=&amp;quot;center&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;-2.5&amp;lt;\lambda \leq -1.5\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Power} \\ \lambda =-2\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\frac{1}{Y^{2}}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr align=&amp;quot;center&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;-1.5&amp;lt;\lambda \leq -0.75\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Reciprocal} \\ \lambda =-1\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\frac{1}{Y}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr align=&amp;quot;center&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;-0.75&amp;lt;\lambda \leq -0.25\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Reciprocal Square Root} \\ \lambda =-0.5\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\frac{1}{\sqrt{Y}}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr align=&amp;quot;center&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;-0.25&amp;lt;\lambda \leq 0.25\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Natural Log} \\ \lambda =0\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\ln Y\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr align=&amp;quot;center&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;0.25&amp;lt;\lambda \leq 0.75\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Square Root} \\ \lambda =0.5\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\sqrt{Y}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr align=&amp;quot;center&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;0.75&amp;lt;\lambda \leq 1.5\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{None} \\ \lambda =1\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=Y\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr align=&amp;quot;center&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;1.5&amp;lt;\lambda \leq 2.5\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Power} \\ \lambda =2\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=Y^{2}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
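The mapping from the best lambda value to the recommended transformation in the table above amounts to a simple interval lookup. A minimal sketch in Python (the function and label names are illustrative, not part of DOE++):&lt;br /&gt;

```python
def recommended_transformation(lmbda):
    """Map a best Box-Cox lambda value to the recommended
    transformation, following the interval table above."""
    intervals = [
        (-2.5, -1.5, "Power (lambda = -2)"),
        (-1.5, -0.75, "Reciprocal (lambda = -1)"),
        (-0.75, -0.25, "Reciprocal Square Root (lambda = -0.5)"),
        (-0.25, 0.25, "Natural Log (lambda = 0)"),
        (0.25, 0.75, "Square Root (lambda = 0.5)"),
        (0.75, 1.5, "None (lambda = 1)"),
        (1.5, 2.5, "Power (lambda = 2)"),
    ]
    for low, high, name in intervals:
        if low < lmbda <= high:
            return name
    raise ValueError("lambda outside the tabulated range")
```

For instance, a best lambda of 0.7841 falls in the interval (0.75, 1.5], so no transformation is recommended.&lt;br /&gt;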
Confidence intervals on the selected &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values are also available. Let &amp;lt;math&amp;gt;S{{S}_{E}}(\lambda )\,\!&amp;lt;/math&amp;gt; be the value of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; corresponding to the selected value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then, to calculate the 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence intervals on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, we need to calculate &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}^{*}}=S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The required limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are the two values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the value &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; (on the plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;). If the limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; do not include the value of one, then the transformation is applicable for the given data.&lt;br /&gt;
Note that the power transformations are not defined for response values that are negative or zero. DOE++ deals with negative and zero response values using the following equations, which add a suitable quantity to all of the response values if a zero or negative response value is encountered. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{rll}&lt;br /&gt;
y\left( i\right) =&amp;amp; y\left( i\right) +\left| y_{\min }\right|\times 1.1 &amp;amp; \text{Negative Response} \\ &lt;br /&gt;
y\left( i\right) =&amp;amp; y\left( i\right) +1 &amp;amp; \text{Zero Response}&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here &amp;lt;math&amp;gt;{{y}_{\min }}\,\!&amp;lt;/math&amp;gt; represents the minimum response value and &amp;lt;math&amp;gt;\left| {{y}_{\min }} \right|\,\!&amp;lt;/math&amp;gt; represents the absolute value of the minimum response. &lt;br /&gt;
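This shifting rule can be sketched as a small helper (a minimal sketch, assuming the rule is applied once to the whole response vector; the function name is illustrative):&lt;br /&gt;

```python
def shift_responses(y):
    """Shift responses so that all values are positive, as described
    above: add |y_min| * 1.1 if any response is negative, or add 1
    if the smallest response is zero."""
    y_min = min(y)
    if y_min < 0:
        return [v + abs(y_min) * 1.1 for v in y]
    if y_min == 0:
        return [v + 1 for v in y]
    return list(y)  # all responses already positive; no shift needed
```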
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
To illustrate the Box-Cox method, consider the experiment given in the first table. Transformed response values for various values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; can be calculated using the equation for &amp;lt;math&amp;gt;{Y}^{\lambda}\,\!&amp;lt;/math&amp;gt; given in [[ANOVA_for_Designed_Experiments#Box-Cox_Method|Box-Cox Method]]. Knowing the hat matrix, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values corresponding to each of these &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values can easily be obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values calculated for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values between &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt; for the given data are shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \lambda  &amp;amp; {} &amp;amp; S{{S}_{E}} &amp;amp; \ln S{{S}_{E}}  \\&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   -5 &amp;amp; {} &amp;amp; 5947.8 &amp;amp; 8.6908  \\&lt;br /&gt;
   -4 &amp;amp; {} &amp;amp; 1946.4 &amp;amp; 7.5737  \\&lt;br /&gt;
   -3 &amp;amp; {} &amp;amp; 696.5 &amp;amp; 6.5461  \\&lt;br /&gt;
   -2 &amp;amp; {} &amp;amp; 282.2 &amp;amp; 5.6425  \\&lt;br /&gt;
   -1 &amp;amp; {} &amp;amp; 135.8 &amp;amp; 4.9114  \\&lt;br /&gt;
   0 &amp;amp; {} &amp;amp; 83.9 &amp;amp; 4.4299  \\&lt;br /&gt;
   1 &amp;amp; {} &amp;amp; 74.5 &amp;amp; 4.3108  \\&lt;br /&gt;
   2 &amp;amp; {} &amp;amp; 101.0 &amp;amp; 4.6154  \\&lt;br /&gt;
   3 &amp;amp; {} &amp;amp; 190.4 &amp;amp; 5.2491  \\&lt;br /&gt;
   4 &amp;amp; {} &amp;amp; 429.5 &amp;amp; 6.0627  \\&lt;br /&gt;
   5 &amp;amp; {} &amp;amp; 1057.6 &amp;amp; 6.9638  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
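The &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; column can be checked directly from the tabulated &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values (a quick check in Python; the logs agree with the table to within rounding of the tabulated sums of squares):&lt;br /&gt;

```python
import math

# lambda values and corresponding SS_E values from the table above
lambdas = [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]
ss_e = [5947.8, 1946.4, 696.5, 282.2, 135.8, 83.9,
        74.5, 101.0, 190.4, 429.5, 1057.6]

# natural log of each SS_E value, rounded to four decimals
ln_ss_e = [round(math.log(v), 4) for v in ss_e]

# among the tabulated integer lambda values the minimum SS_E occurs
# at lambda = 1 (a finer search locates the minimum at 0.7841)
best_lambda = lambdas[ss_e.index(min(ss_e))]
```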
A plot of &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for various &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values, as obtained from DOE++, is shown in the following figure. The value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; that gives the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is identified as 0.7841. The &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; value corresponding to this value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is 73.74. A 90% confidence interval on this &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value is calculated as follows. &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; can be obtained as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}^{*}}= &amp;amp; S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{t_{0.05,9}^{2}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{{{1.833}^{2}}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 101.27  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\ln S{{S}^{*}}=4.6178\,\!&amp;lt;/math&amp;gt;. The &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values corresponding to this value in the following figure are &amp;lt;math&amp;gt;-0.4686\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0052\,\!&amp;lt;/math&amp;gt;. Therefore, the 90% confidence limits on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-0.4686\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0052\,\!&amp;lt;/math&amp;gt;. Since the confidence limits include the value of 1, this indicates that a transformation is not required for the data in the first table.&lt;br /&gt;
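The &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; computation above is plain arithmetic and can be verified directly, using the critical value &amp;lt;math&amp;gt;t_{0.05,9}=1.833\,\!&amp;lt;/math&amp;gt; quoted in the derivation (variable names are illustrative):&lt;br /&gt;

```python
import math

ss_e_at_best_lambda = 73.74   # SS_E at lambda = 0.7841
t_value = 1.833               # t_{0.05, 9} from the derivation above
dof = 9                       # error degrees of freedom

# SS* = SS_E(lambda) * (1 + t^2 / dof)
ss_star = ss_e_at_best_lambda * (1 + t_value ** 2 / dof)  # about 101.27
ln_ss_star = math.log(ss_star)                            # about 4.6178
```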
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:doe6.7.png|thumb|center|400px|Box-Cox power transformation plot for the data in the first table.]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Experiments with Several Factors - Factorial Experiments==&lt;br /&gt;
&lt;br /&gt;
Experiments with two or more factors are encountered frequently. The best way to carry out such experiments is by using factorial experiments. Factorial experiments are experiments in which all combinations of factor levels are investigated in each replicate of the experiment. Factorial experiments are the only means to completely and systematically study interactions between factors in addition to identifying significant factors. One-factor-at-a-time experiments (where each factor is investigated separately by keeping all the remaining factors constant) do not reveal the interaction effects between the factors. Further, in one-factor-at-a-time experiments full randomization is not possible.&lt;br /&gt;
&lt;br /&gt;
To illustrate factorial experiments consider an experiment where the response is investigated for two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. Assume that the response is studied at two levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; representing the lower level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; representing the higher level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;. Similarly, let &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; represent the two levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; that are being investigated in this experiment. Since there are two factors with two levels, a total of &amp;lt;math&amp;gt;2\times 2=4\,\!&amp;lt;/math&amp;gt; combinations exist (&amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt;). Thus, four runs are required for each replicate if a factorial experiment is to be carried out in this case. Assume that the response values for each of these four possible combinations are obtained as shown in the third table.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.3.png|thumb|center|400px|Two-factor factorial experiment.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.8.png|thumb|center|400px|Interaction plot for the data in the third table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Investigating Factor Effects===&lt;br /&gt;
&lt;br /&gt;
The effect of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response can be obtained by taking the difference between the average response when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is high and the average response when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is low. The change in the response due to a change in the level of a factor is called the main effect of the factor. The main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; as per the response values in the third table is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{45+55}{2}-\frac{25+35}{2} \\ &lt;br /&gt;
= &amp;amp; 50-30 \\ &lt;br /&gt;
= &amp;amp; 20  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from the lower level to the higher level, the response increases by 20 units. A plot of the response for the two levels of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at different levels of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is shown in the figure above. The plot shows that change in the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; leads to an increase in the response by 20 units regardless of the level of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. Therefore, no interaction exists in this case as indicated by the parallel lines on the plot. The main effect of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be obtained as: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
B= &amp;amp; Average\text{ }response\text{ }at\text{ }{{B}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{B}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{35+55}{2}-\frac{25+45}{2} \\ &lt;br /&gt;
= &amp;amp; 45-35 \\ &lt;br /&gt;
= &amp;amp; 10  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
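Both main-effect calculations follow the same recipe, so they can be expressed compactly. A sketch using the four response values quoted in the calculations above (the dictionary layout and names are illustrative):&lt;br /&gt;

```python
# Responses for the four treatment combinations of the third table,
# keyed by (level of A, level of B)
response = {
    ("low", "low"): 25, ("low", "high"): 35,
    ("high", "low"): 45, ("high", "high"): 55,
}

def main_effect(factor):
    """Average response at the high level of the factor minus the
    average response at its low level."""
    idx = 0 if factor == "A" else 1
    high = [r for k, r in response.items() if k[idx] == "high"]
    low = [r for k, r in response.items() if k[idx] == "low"]
    return sum(high) / len(high) - sum(low) / len(low)

effect_a = main_effect("A")  # (45+55)/2 - (25+35)/2 = 20
effect_b = main_effect("B")  # (35+55)/2 - (25+45)/2 = 10
```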
&lt;br /&gt;
&lt;br /&gt;
===Investigating Interactions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now assume that the response values for each of the four treatment combinations were obtained as shown in the fourth table. The main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; in this case is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{40+10}{2}-\frac{20+30}{2} \\ &lt;br /&gt;
= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.4.png|thumb|center|400px|Two factor factorial experiment.]]&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
It appears that &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; does not have an effect on the response. However, a plot of the response at the two levels of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; for different levels of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; shows that the response does change with the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, but the effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response depends on the level of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (see the figure below). Therefore, an interaction between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; exists in this case (as indicated by the non-parallel lines of the figure). The interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be calculated as follows: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.9.png|thumb|center|400px|Interaction plot for the data in the fourth table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
AB= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}\text{-}{{B}_{\text{high}}}\text{ }and\text{ }{{A}_{\text{low}}}\text{-}{{B}_{\text{low}}}- \\ &lt;br /&gt;
 &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}}\text{-}{{B}_{\text{high}}}\text{ }and\text{ }{{A}_{\text{high}}}\text{-}{{B}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{10+20}{2}-\frac{40+30}{2} \\ &lt;br /&gt;
= &amp;amp; -20  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
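The same four response values can be used to verify both the vanishing main effect of A and the interaction computed above (a sketch; the cell values are those appearing in the calculations for the fourth table, and the names are illustrative):&lt;br /&gt;

```python
# Responses for the fourth table, keyed by (level of A, level of B)
response = {
    ("low", "low"): 20, ("low", "high"): 30,
    ("high", "low"): 40, ("high", "high"): 10,
}

# Main effect of A is zero, even though A clearly matters
effect_a = (40 + 10) / 2 - (20 + 30) / 2

# Interaction AB: average response where A and B are at like levels
# minus the average response where they are at unlike levels
interaction_ab = (
    (response[("high", "high")] + response[("low", "low")]) / 2
    - (response[("low", "high")] + response[("high", "low")]) / 2
)  # (10+20)/2 - (30+40)/2 = -20
```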
&lt;br /&gt;
&lt;br /&gt;
Note that in this case, if a one-factor-at-a-time experiment were used to investigate the effect of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response, it would lead to incorrect conclusions. For example, if the effect of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was studied by holding &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; constant at its lower level, then the main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; would be obtained as &amp;lt;math&amp;gt;40-20=20\,\!&amp;lt;/math&amp;gt;, indicating that the response increases by 20 units when the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from low to high. On the other hand, if the effect of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was studied by holding &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; constant at its higher level, then the main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; would be obtained as &amp;lt;math&amp;gt;10-30=-20\,\!&amp;lt;/math&amp;gt;, indicating that the response decreases by 20 units when the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from low to high.&lt;br /&gt;
&lt;br /&gt;
==Analysis of General Factorial Experiments==&lt;br /&gt;
&lt;br /&gt;
In DOE++, factorial experiments are referred to as &#039;&#039;factorial designs&#039;&#039;. The experiments explained in this section are referred to as &#039;&#039;general factorial designs&#039;&#039;. This is done to distinguish these experiments from the other factorial designs supported by DOE++ (see the figure below). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.10.png|thumb|center|518px|Factorial experiments available in DOE++.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The other designs (such as the two level full factorial designs that are explained in [[Two_Level_Factorial_Experiments| Two Level Factorial Experiments]]) are special cases of these experiments in which factors are limited to a specified number of levels. The ANOVA model for the analysis of factorial experiments is formulated as shown next. Assume a factorial experiment in which the effect of two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, on the response is being investigated. Let there be &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{n}_{b}}\,\!&amp;lt;/math&amp;gt; levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The ANOVA model for this experiment can be stated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\tau }_{i}}+{{\delta }_{j}}+{{(\tau \delta )}_{ij}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean effect&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;j=1,2,...,{{n}_{b}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt; represents the random error terms (which are assumed to be normally distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*and the subscript &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates (&amp;lt;math&amp;gt;k=1,2,...,m\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represent deviations from the overall mean, the following constraints exist: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{j=1}{\overset{{{n}_{b}}}{\mathop \sum }}\,{{\delta }_{j}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{j=1}{\overset{{{n}_{b}}}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Hypothesis Tests in General Factorial Experiments===&lt;br /&gt;
These tests are used to check whether each of the factors investigated in the experiment is significant or not. For the previous example, with two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and their interaction, &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;, the statements for the hypothesis tests can be formulated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \begin{matrix}&lt;br /&gt;
   1. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0\text{    (Main effect of }A\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\tau }_{i}}\ne 0\text{    for at least one }i \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   2. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\delta }_{1}}={{\delta }_{2}}=...={{\delta }_{{{n}_{b}}}}=0\text{    (Main effect of }B\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\delta }_{j}}\ne 0\text{    for at least one }j \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   3. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{12}}=...={{(\tau \delta )}_{{{n}_{a}}{{n}_{b}}}}=0\text{    (Interaction }AB\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{(\tau \delta )}_{ij}}\ne 0\text{    for at least one }ij  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistics for the three tests are as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::1)&amp;lt;math&amp;gt;{(F_{0})}_{A} = \frac{MS_{A}}{MS_{E}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{A}\,\!&amp;lt;/math&amp;gt; is the mean square due to factor &amp;lt;math&amp;gt;{A}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{MS_E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::2)&amp;lt;math&amp;gt;{(F_{0})_{B}} = \frac{MS_B}{MS_E}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{B}\,\!&amp;lt;/math&amp;gt; is the mean square due to factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;MS_{E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::3)&amp;lt;math&amp;gt;{(F_{0})_{AB}} = \frac{MS_{AB}}{MS_{E}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{AB}\,\!&amp;lt;/math&amp;gt; is the mean square due to interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;MS_{E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The tests are identical to the partial &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; test explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. The sums of squares for these tests (used to obtain the mean squares) are calculated by splitting the model sum of squares into the extra sum of squares due to each factor. The extra sum of squares calculated for each of the factors may either be partial or sequential. For the present example, if the extra sum of squares used is sequential, then the model sum of squares can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{TR}}=S{{S}_{A}}+S{{S}_{B}}+S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{B}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
The mean squares are obtained by dividing the sum of squares by the associated degrees of freedom. Once the mean squares are known the test statistics can be calculated. For example, the test statistic to test the significance of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (or the hypothesis &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;) can then be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{F}_{0}})}_{A}}= &amp;amp; \frac{M{{S}_{A}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{A}}/dof(S{{S}_{A}})}{S{{S}_{E}}/dof(S{{S}_{E}})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Similarly the test statistic to test significance of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be respectively obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{F}_{0}})}_{B}}= &amp;amp; \frac{M{{S}_{B}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{B}}/dof(S{{S}_{B}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
{{({{F}_{0}})}_{AB}}= &amp;amp; \frac{M{{S}_{AB}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{AB}}/dof(S{{S}_{AB}})}{S{{S}_{E}}/dof(S{{S}_{E}})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
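All three test statistics share the same form, so a single helper covers them (a minimal sketch; the sums of squares and degrees of freedom used below are hypothetical numbers chosen for illustration, not values from the example):&lt;br /&gt;

```python
def f_statistic(ss_effect, dof_effect, ss_error, dof_error):
    """F_0 = MS_effect / MS_error, where each mean square is a sum
    of squares divided by its degrees of freedom."""
    ms_effect = ss_effect / dof_effect
    ms_error = ss_error / dof_error
    return ms_effect / ms_error

# Hypothetical values: SS_A = 100 on 2 dof, SS_E = 50 on 10 dof
f0_a = f_statistic(100, 2, 50, 10)  # (100/2)/(50/10) = 10.0
```

The observed statistic would then be compared against the appropriate &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution percentile to decide the hypothesis test.&lt;br /&gt;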
&lt;br /&gt;
&lt;br /&gt;
It is recommended to conduct the test for interactions before conducting the test for the main effects. This is because, if an interaction is present, then the main effect of the factor depends on the level of the other factors and looking at the main effect is of little value. However, if the interaction is absent then the main effects become important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
Consider an experiment to investigate the effect of speed and type of fuel additive used on the mileage of a sports utility vehicle. Three speeds and two types of fuel additives are investigated. Each of the treatment combinations is replicated three times. The mileage values observed are displayed in the fifth table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.5.png|thumb|center|400px|Mileage data for different speeds and fuel additive types.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The experimental design for the data in the fifth table is shown in the figure below. In the figure, the factor Speed is represented as factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and the factor Fuel Additive is represented as factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The experimenter would like to investigate if speed, fuel additive or the interaction between speed and fuel additive affects the mileage of the sports utility vehicle. In other words, the following hypotheses need to be tested:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \begin{matrix}&lt;br /&gt;
   1. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}={{\tau }_{3}}=0\text{   (No main effect of factor }A\text{, speed)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\tau }_{i}}\ne 0\text{    for at least one }i \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   2. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\delta }_{1}}={{\delta }_{2}}=0\text{    (No main effect of factor }B\text{, fuel additive)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\delta }_{j}}\ne 0\text{    for at least one }j \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   3. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{12}}=...={{(\tau \delta )}_{32}}=0\text{    (No interaction }AB\text{)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{(\tau \delta )}_{ij}}\ne 0\text{    for at least one }ij  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistics for the three tests are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::1.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{A}}=\frac{M{{S}_{A}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{A}}\,\!&amp;lt;/math&amp;gt; is the mean square for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::2.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{B}}=\frac{M{{S}_{B}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{B}}\,\!&amp;lt;/math&amp;gt; is the mean square for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::3.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{AB}}=\frac{M{{S}_{AB}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; is the mean square for interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.11.png|thumb|center|639px|Experimental design for the data in the fifth table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\tau }_{i}}+{{\delta }_{j}}+{{(\tau \delta )}_{ij}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed) with &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; =1, 2, 3; &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th treatment of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (fuel additive) with &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; =1, 2; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect. In order to calculate the test statistics, it is convenient to express the ANOVA model of the equation given above in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;. This can be done as explained next.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represent deviations from the overall mean, the following constraints exist.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or    }{{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}=0\,\!&amp;lt;/math&amp;gt;.) DOE++ displays only the independent effects because only these effects are important to the analysis. The independent effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1] and A[2] respectively because these are the effects associated with factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed).&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{j=1}{\overset{2}{\mathop \sum }}\,{{\delta }_{j}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or    }{{\delta }_{1}}+{{\delta }_{2}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only one of the &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; effects is independent. Assuming that &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is independent, &amp;lt;math&amp;gt;{{\delta }_{2}}=-{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effect as &amp;lt;math&amp;gt;{{H}_{0}}:{{\delta }_{1}}=0\,\!&amp;lt;/math&amp;gt;.) The independent effect &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is displayed as B:B in DOE++.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{and   }\underset{j=1}{\overset{2}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{31}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{12}}+{{(\tau \delta )}_{22}}+{{(\tau \delta )}_{32}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{and   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{12}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{22}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{31}}+{{(\tau \delta )}_{32}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last five equations given above represent four constraints, as only four of these five equations are independent. Therefore, only two out of the six &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are independent, the other four effects can be expressed in terms of these effects. (The null hypothesis to test the significance of interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{21}}=0\,\!&amp;lt;/math&amp;gt;.) The effects &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are displayed as A[1]B and A[2]B respectively in DOE++.&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be obtained using indicator variables, similar to the case of the single factor experiment in [[ANOVA_for_Designed_Experiments#Fitting_ANOVA_Models|Fitting ANOVA Models]]. Since factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; has three levels, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required which need to be coded as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; has two levels and can be represented using one indicator variable, &amp;lt;math&amp;gt;{{x}_{3}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\delta }_{1}}: &amp;amp; {{x}_{3}}=1 \\ &lt;br /&gt;
\text{Treatment Effect }{{\delta }_{2}}: &amp;amp; {{x}_{3}}=-1  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; interaction will be represented by all possible terms resulting from the product of the indicator variables representing factors &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. There are two such terms here: &amp;lt;math&amp;gt;{{x}_{1}}{{x}_{3}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}{{x}_{3}}\,\!&amp;lt;/math&amp;gt;. The regression version of the ANOVA model can finally be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{\tau }_{1}}\cdot {{x}_{1}}+{{\tau }_{2}}\cdot {{x}_{2}}+{{\delta }_{1}}\cdot {{x}_{3}}+{{(\tau \delta )}_{11}}\cdot {{x}_{1}}{{x}_{3}}+{{(\tau \delta )}_{21}}\cdot {{x}_{2}}{{x}_{3}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In matrix notation this model can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{111}}  \\&lt;br /&gt;
   {{Y}_{211}}  \\&lt;br /&gt;
   {{Y}_{311}}  \\&lt;br /&gt;
   {{Y}_{121}}  \\&lt;br /&gt;
   {{Y}_{221}}  \\&lt;br /&gt;
   {{Y}_{321}}  \\&lt;br /&gt;
   {{Y}_{112}}  \\&lt;br /&gt;
   {{Y}_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{323}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; -1 &amp;amp; 0 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
   {{\delta }_{1}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{11}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{21}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{111}}  \\&lt;br /&gt;
   {{\epsilon }_{211}}  \\&lt;br /&gt;
   {{\epsilon }_{311}}  \\&lt;br /&gt;
   {{\epsilon }_{121}}  \\&lt;br /&gt;
   {{\epsilon }_{221}}  \\&lt;br /&gt;
   {{\epsilon }_{321}}  \\&lt;br /&gt;
   {{\epsilon }_{112}}  \\&lt;br /&gt;
   {{\epsilon }_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{323}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
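Each row of &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; follows directly from the indicator-variable coding given above. As a sketch (the helper name is illustrative, not part of DOE++), the mapping from a treatment combination to a model row can be written as:

```python
# Effect ("sum-to-zero") coding for the 3x2 design described above.
# Factor A (speed) has 3 levels, coded by x1, x2;
# factor B (fuel additive) has 2 levels, coded by x3.
A_CODES = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
B_CODES = {1: 1, 2: -1}

def model_row(i, j):
    """Return the row [1, x1, x2, x3, x1*x3, x2*x3] for treatment (i, j)."""
    x1, x2 = A_CODES[i]
    x3 = B_CODES[j]
    return [1, x1, x2, x3, x1 * x3, x2 * x3]

print(model_row(1, 1))  # -> [1, 1, 0, 1, 1, 0]
print(model_row(3, 2))  # -> [1, -1, -1, -1, 1, 1]
```

For example, the first row of the matrix above corresponds to treatment (1, 1) and the sixth row to treatment (3, 2).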
&lt;br /&gt;
&lt;br /&gt;
The vector &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; can be substituted with the response values from the fifth table to get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{111}}  \\&lt;br /&gt;
   {{Y}_{211}}  \\&lt;br /&gt;
   {{Y}_{311}}  \\&lt;br /&gt;
   {{Y}_{121}}  \\&lt;br /&gt;
   {{Y}_{221}}  \\&lt;br /&gt;
   {{Y}_{321}}  \\&lt;br /&gt;
   {{Y}_{112}}  \\&lt;br /&gt;
   {{Y}_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{323}}  \\&lt;br /&gt;
\end{matrix} \right]=\left[ \begin{matrix}&lt;br /&gt;
   17.3  \\&lt;br /&gt;
   18.9  \\&lt;br /&gt;
   17.1  \\&lt;br /&gt;
   18.7  \\&lt;br /&gt;
   19.1  \\&lt;br /&gt;
   18.8  \\&lt;br /&gt;
   17.8  \\&lt;br /&gt;
   18.2  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18.3  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, the sum of squares for the ANOVA model and the extra sum of squares for each of the factors can be calculated. These are used to calculate the mean squares that are used to obtain the test statistics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of Sum of Squares for the Model====&lt;br /&gt;
&lt;br /&gt;
The model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, for the regression version of the ANOVA model can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 9.7311  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; is the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones. Since five effect terms (&amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;) are used in the model, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; is five (&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=5\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 10.7178  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are 18 observed response values, the number of degrees of freedom associated with the total sum of squares is 17 (&amp;lt;math&amp;gt;dof(S{{S}_{T}})=17\,\!&amp;lt;/math&amp;gt;). The error sum of squares can now be obtained:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 10.7178-9.7311 \\ &lt;br /&gt;
= &amp;amp; 0.9867  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are three replicates of the full factorial experiment, all of the error sum of squares is pure error. (This can also be seen from the preceding figure, where each treatment combination of the full factorial design is repeated three times.) The number of degrees of freedom associated with the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 17-5 \\ &lt;br /&gt;
= &amp;amp; 12  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
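The hat-matrix computations above can be reproduced numerically. The following sketch assumes numpy is available and uses placeholder response values, since the full data vector is only partially shown above; with the actual data it returns the sums of squares reported in this section:

```python
import numpy as np

# Effect-coded 18x6 design matrix X (row order: replicate, then B, then A),
# matching the matrix displayed earlier in this section.
A_CODES = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
B_CODES = {1: 1, 2: -1}
X = np.array([[1, *A_CODES[i], B_CODES[j],
               A_CODES[i][0] * B_CODES[j], A_CODES[i][1] * B_CODES[j]]
              for k in (1, 2, 3) for j in (1, 2) for i in (1, 2, 3)], float)

n = X.shape[0]                            # 18 observations
H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
J = np.ones((n, n))                       # matrix of ones

rng = np.random.default_rng(0)
y = rng.normal(18.0, 1.0, size=n)         # placeholder responses (not the real data)

ss_tr = y @ (H - J / n) @ y               # model sum of squares, 5 dof
ss_t  = y @ (np.eye(n) - J / n) @ y       # total sum of squares, 17 dof
ss_e  = y @ (np.eye(n) - H) @ y           # error sum of squares, 12 dof
print(round(abs(ss_t - (ss_tr + ss_e)), 8))   # partition SS_T = SS_TR + SS_E holds
```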
&lt;br /&gt;
&lt;br /&gt;
====Calculation of Extra Sum of Squares for the Factors====&lt;br /&gt;
&lt;br /&gt;
The sequential sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be calculated as: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}})-S{{S}_{TR}}(\mu ) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}={{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}{{(X_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}^{\prime }{{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}})}^{-1}}X_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}^{\prime }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the matrix containing only the first three columns of the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix. Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-0 \\ &lt;br /&gt;
= &amp;amp; 4.5811-0 \\ &lt;br /&gt;
= &amp;amp; 4.5811  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent effects (&amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;) for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, the degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; are two (&amp;lt;math&amp;gt;dof(S{{S}_{A}})=2\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Similarly, the sum of squares for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{B}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}})-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}}) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}}}-(\frac{1}{18})J]y-{{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 9.4900-4.5811 \\ &lt;br /&gt;
= &amp;amp; 4.9089  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there is one independent effect, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{B}}\,\!&amp;lt;/math&amp;gt; is one (&amp;lt;math&amp;gt;dof(S{{S}_{B}})=1\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{AB}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}},{{(\tau \delta )}_{11}},{{(\tau \delta )}_{21}})-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}) \\ &lt;br /&gt;
= &amp;amp; S{{S}_{TR}}-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}) \\ &lt;br /&gt;
= &amp;amp; 9.7311-9.4900 \\ &lt;br /&gt;
= &amp;amp; 0.2411  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent interaction effects, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; is two (&amp;lt;math&amp;gt;dof(S{{S}_{AB}})=2\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
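The sequential sums of squares can be sketched the same way (numpy assumed; placeholder responses are used because the full data vector is only partially shown above). Because the sequential terms telescope, they always add up to the model sum of squares:

```python
import numpy as np

A_CODES = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
B_CODES = {1: 1, 2: -1}
X = np.array([[1, *A_CODES[i], B_CODES[j],
               A_CODES[i][0] * B_CODES[j], A_CODES[i][1] * B_CODES[j]]
              for k in (1, 2, 3) for j in (1, 2) for i in (1, 2, 3)], float)

n = X.shape[0]
J = np.ones((n, n)) / n
rng = np.random.default_rng(1)
y = rng.normal(18.0, 1.0, size=n)    # placeholder responses (not the real data)

def r(cols):
    """Model SS, R(sub-model) - R(mu), for the given columns of X."""
    Xs = X[:, cols]
    Hs = Xs @ np.linalg.inv(Xs.T @ Xs) @ Xs.T
    return y @ (Hs - J) @ y

ss_a  = r([0, 1, 2])                              # adds tau1, tau2
ss_b  = r([0, 1, 2, 3]) - r([0, 1, 2])            # adds delta1
ss_ab = r([0, 1, 2, 3, 4, 5]) - r([0, 1, 2, 3])   # adds the interaction terms
ss_tr = r([0, 1, 2, 3, 4, 5])
print(round(abs(ss_a + ss_b + ss_ab - ss_tr), 8))
```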
&lt;br /&gt;
====Calculation of the Test Statistics====&lt;br /&gt;
&lt;br /&gt;
Knowing the sum of squares, the test statistic for each of the factors can be calculated. Analyzing the interaction first, the test statistic for interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{AB}}= &amp;amp; \frac{M{{S}_{AB}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{0.2411/2}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 1.47  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic, based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 12 degrees of freedom in the denominator, is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{AB}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.7307 \\ &lt;br /&gt;
= &amp;amp; 0.2693  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;gt; 0.1, we fail to reject &amp;lt;math&amp;gt;{{H}_{0}}:{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt; and conclude that the interaction between speed and fuel additive does not significantly affect the mileage of the sports utility vehicle. DOE++ displays this result in the ANOVA table, as shown in the following figure. In the absence of the interaction, the analysis of main effects becomes important.&lt;br /&gt;
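This p value can be checked without statistical tables: when the numerator has 2 degrees of freedom, the F survival function has the closed form P(F &amp;gt; f) = (1 + 2f/d)^(-d/2), where d is the denominator degrees of freedom. A quick sketch using the sums of squares above:

```python
# Closed-form F survival function when the numerator dof is 2.
def f_pvalue_dfn2(f0, d):
    return (1 + 2 * f0 / d) ** (-d / 2)

ms_e = 0.9867 / 12                    # error mean square
f_ab = (0.2411 / 2) / ms_e            # interaction AB statistic
p_ab = f_pvalue_dfn2(f_ab, 12)        # close to the 0.2693 reported above

f_a = (4.5811 / 2) / ms_e             # factor A statistic, used later in this section
p_a = f_pvalue_dfn2(f_a, 12)          # about 3e-5, matching the 0.00003 below
print(round(f_ab, 2), round(f_a, 2))  # -> 1.47 27.86
```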
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistic for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{A}}= &amp;amp; \frac{M{{S}_{A}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{A}}/dof(S{{S}_{A}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{4.5811/2}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 27.86  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 12 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{A}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.99997 \\ &lt;br /&gt;
= &amp;amp; 0.00003  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (or speed) has a significant effect on the mileage.&lt;br /&gt;
&lt;br /&gt;
The test statistic for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{B}}= &amp;amp; \frac{M{{S}_{B}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{4.9089/1}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 59.7  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 1 degree of freedom in the numerator and 12 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{B}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.999995 \\ &lt;br /&gt;
= &amp;amp; 0.000005  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}:{{\delta }_{j}}=0\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or fuel additive type) has a significant effect on the mileage.&lt;br /&gt;
Therefore, it can be concluded that speed and fuel additive type affect the mileage of the vehicle significantly. The results are displayed in the ANOVA table of the following figure. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.12.png|thumb|center|645px|Analysis results for the experiment in the fifth table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of Effect Coefficients====&lt;br /&gt;
&lt;br /&gt;
Results for the effect coefficients of the regression version of the ANOVA model are displayed in the Regression Information table in the following figure. Calculations of the results in this table are discussed next. The effect coefficients can be calculated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\hat{\beta }= &amp;amp; {{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y \\ &lt;br /&gt;
= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   18.2889  \\&lt;br /&gt;
   -0.2056  \\&lt;br /&gt;
   0.6944  \\&lt;br /&gt;
   -0.5222  \\&lt;br /&gt;
   0.0056  \\&lt;br /&gt;
   0.1389  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\hat{\mu }=18.2889\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}=-0.2056\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\hat{\tau }}_{2}}=0.6944\,\!&amp;lt;/math&amp;gt; and so on. As mentioned previously, these coefficients are displayed as Intercept, A[1] and A[2], respectively (the labels depend on the factor names used in the experimental design). The standard error for each of these estimates is obtained using the diagonal elements of the variance-covariance matrix &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
C= &amp;amp; {{{\hat{\sigma }}}^{2}}{{({{X}^{\prime }}X)}^{-1}} \\ &lt;br /&gt;
= &amp;amp; M{{S}_{E}}\cdot {{({{X}^{\prime }}X)}^{-1}} \\ &lt;br /&gt;
= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   0.0046 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0.0091 &amp;amp; -0.0046 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; -0.0046 &amp;amp; 0.0091 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0.0046 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0.0091 &amp;amp; -0.0046  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; -0.0046 &amp;amp; 0.0091  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, the standard error for &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
se({{{\hat{\tau }}}_{1}})= &amp;amp; \sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; \sqrt{0.0091} \\ &lt;br /&gt;
= &amp;amp; 0.0956  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\hat{\tau }}}_{1}}}{se({{{\hat{\tau }}}_{1}})} \\ &lt;br /&gt;
= &amp;amp; \frac{-0.2056}{0.0956} \\ &lt;br /&gt;
= &amp;amp; -2.1506  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic is obtained as &amp;lt;math&amp;gt;2(1-P(T\le |{{t}_{0}}|))\,\!&amp;lt;/math&amp;gt;, using the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 12 degrees of freedom.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
Confidence intervals on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; can also be calculated. The 90% limits on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\hat{\tau }}}_{1}}\pm {{t}_{\alpha /2,n-(k+1)}}\sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; {{{\hat{\tau }}}_{1}}\pm {{t}_{0.05,12}}\sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; -0.2056\pm 0.1704  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Thus, the 90% limits on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-0.3760\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-0.0352\,\!&amp;lt;/math&amp;gt; respectively. Results for other coefficients are obtained in a similar manner.&lt;br /&gt;
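These limits can be reproduced with elementary arithmetic. The diagonal entry used here, &amp;lt;math&amp;gt;{{C}_{22}}=M{{S}_{E}}/9\,\!&amp;lt;/math&amp;gt;, follows from the variance-covariance matrix shown above, and &amp;lt;math&amp;gt;{{t}_{0.05,12}}\approx 1.7823\,\!&amp;lt;/math&amp;gt; is a standard tabulated value:

```python
import math

ms_e     = 0.9867 / 12
tau1_hat = -0.2056
se_tau1  = math.sqrt(ms_e / 9)    # sqrt(C_22), about 0.0956
t_crit   = 1.7823                 # t_{0.05,12} from standard tables
half     = t_crit * se_tau1       # about 0.1704
print(round(tau1_hat - half, 4), round(tau1_hat + half, 4))
```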
&lt;br /&gt;
===Least Squares Means===&lt;br /&gt;
The estimated mean response corresponding to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of any factor is obtained using the adjusted estimated mean which is also called the least squares mean. For example, the mean response corresponding to the first level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\mu +{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;. An estimate of this is &amp;lt;math&amp;gt;\hat{\mu }+{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; or (&amp;lt;math&amp;gt;18.2889+(-0.2056)=18.0833\,\!&amp;lt;/math&amp;gt;). Similarly, the estimated response at the third level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\hat{\mu }+{{\hat{\tau }}_{3}}\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\hat{\mu }+(-{{\hat{\tau }}_{1}}-{{\hat{\tau }}_{2}})\,\!&amp;lt;/math&amp;gt; or (&amp;lt;math&amp;gt;18.2889+(0.2056-0.6944)=17.8001\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
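The least squares means quoted above reduce to simple arithmetic on the estimated coefficients, with the sum-to-zero constraint supplying &amp;lt;math&amp;gt;{{\hat{\tau }}_{3}}\,\!&amp;lt;/math&amp;gt;:

```python
mu_hat   = 18.2889
tau1_hat = -0.2056
tau2_hat = 0.6944
tau3_hat = -(tau1_hat + tau2_hat)    # sum-to-zero constraint

print(round(mu_hat + tau1_hat, 4))   # -> 18.0833, level 1 of factor A
print(round(mu_hat + tau3_hat, 4))   # -> 17.8001, level 3 of factor A
```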
&lt;br /&gt;
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
As in the case of single factor experiments, plots of residuals can also be used to check for model adequacy in factorial experiments. Box-Cox transformations are also available in DOE++ for factorial experiments.&lt;br /&gt;
&lt;br /&gt;
==Factorial Experiments with a Single Replicate==&lt;br /&gt;
&lt;br /&gt;
If a factorial experiment is run with only a single replicate, it is not possible to test hypotheses about the main effects and interactions because the error sum of squares cannot be obtained. This is because the number of observations in a single replicate equals the number of terms in the ANOVA model. Hence, the model fits the data perfectly and no degrees of freedom are available to obtain the error sum of squares. For example, if the two factor experiment to study the effect of speed and fuel additive type on mileage were run with only a single replicate, there would be only six response values. The regression version of the ANOVA model has six terms and will therefore fit the six response values perfectly. The error sum of squares, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt;, for this case will be equal to zero. In some single replicate factorial experiments it is possible to assume that the interaction effects are negligible. In this case, the interaction mean square can be used as the error mean square, &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt;, to test hypotheses about the main effects. However, such assumptions are not applicable in all cases and should be used carefully.&lt;br /&gt;
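This perfect fit is easy to demonstrate: with one replicate the design matrix is 6x6 and full rank, so the residuals are exactly zero whatever the six responses are. A sketch (numpy assumed; the six responses are the first-replicate values from the table):

```python
import numpy as np

# Single-replicate 3x2 design: six runs, six model terms.
A_CODES = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
B_CODES = {1: 1, 2: -1}
X = np.array([[1, *A_CODES[i], B_CODES[j],
               A_CODES[i][0] * B_CODES[j], A_CODES[i][1] * B_CODES[j]]
              for j in (1, 2) for i in (1, 2, 3)], float)

y = np.array([17.3, 18.9, 17.1, 18.7, 19.1, 18.8])   # first-replicate responses
beta_hat, _, rank, _ = np.linalg.lstsq(X, y, rcond=None)
ss_e = float(np.sum((y - X @ beta_hat) ** 2))
print(rank, round(ss_e, 10))   # -> 6 0.0 : the model fits perfectly, SS_E = 0
```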
&lt;br /&gt;
&lt;br /&gt;
==Blocking==&lt;br /&gt;
&lt;br /&gt;
Many times a factorial experiment requires so many runs that not all of them can be completed under homogeneous conditions. This may introduce the effects of &#039;&#039;nuisance factors&#039;&#039; into the investigation. Nuisance factors are factors that have an effect on the response but are not of primary interest to the investigator. For example, two replicates of a two factor factorial experiment require eight runs. If only four runs can be completed in one day, the full experiment will require two days. The difference in conditions between the two days may introduce effects on the response that are not the result of the two factors being investigated. Therefore, the day is a nuisance factor for this experiment.&lt;br /&gt;
Nuisance factors can be accounted for using &#039;&#039;blocking&#039;&#039;. In blocking, experimental runs are separated based on levels of the nuisance factor. For the case of the two factor factorial experiment (where the day is a nuisance factor), separation can be made into two groups or &#039;&#039;blocks&#039;&#039;: runs that are carried out on the first day belong to block 1, and runs that are carried out on the second day belong to block 2. Thus, within each block conditions are the same with respect to the nuisance factor. As a result, each block investigates the effects of the factors of interest, while the difference in the blocks measures the effect of the nuisance factor. &lt;br /&gt;
For the example of the two factor factorial experiment, a possible assignment of runs to the blocks could be as follows: one replicate of the experiment is assigned to block 1 and the second replicate is assigned to block 2 (so that each block contains all possible treatment combinations). Within each block, runs are subjected to randomization (i.e., randomization is now restricted to the runs within a block). Such a design, where each block contains one complete replicate and the treatments within a block are subjected to randomization, is called a &#039;&#039;randomized complete block design&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
In summary, blocking should always be used to account for the effects of nuisance factors if it is not possible to hold the nuisance factor at a constant level through all of the experimental runs. Randomization should be used within each block to counter the effects of any unknown variability that may still be present.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
Consider the experiment of the fifth table where the mileage of a sports utility vehicle was investigated for the effects of speed and fuel additive type. Now assume that the three replicates for this experiment were carried out on three different vehicles. To ensure that the variation from one vehicle to another does not have an effect on the analysis, each vehicle is considered as one block. See the experiment design in the following figure.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.13.png|thumb|center|643px|Randomized complete block design for the experiment in the fifth table using three blocks.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purpose of the analysis, the block is treated as a main effect, except that interactions between the block and the other main effects are assumed not to exist. Therefore, this experiment has one block main effect (with three levels: block 1, block 2 and block 3), two main effects (speed, with three levels, and fuel additive type, with two levels) and one interaction effect (the speed-fuel additive interaction). Let &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; represent the block effects. The hypothesis test on the block main effect checks whether there is significant variation from one vehicle to another. The statements for the hypothesis test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\zeta }_{1}}={{\zeta }_{2}}={{\zeta }_{3}}=0\text{   (no main effect of block)} \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\zeta }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The test statistic for this test is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{Block}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{Block}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the block main effect and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. The hypothesis statements and test statistics to test the significance of factors &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed), &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (fuel additive) and the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; (speed-fuel additive interaction) can be obtained as explained in the [[ANOVA_for_Designed_Experiments#Example_2| example]]. The ANOVA model for this example can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\zeta }_{i}}+{{\tau }_{j}}+{{\delta }_{k}}+{{(\tau \delta )}_{jk}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean effect&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of the block (&amp;lt;math&amp;gt;i=1,2,3\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;j=1,2,3\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;k=1,2\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*and &amp;lt;math&amp;gt;{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt; represents the random error terms (which are assumed to be normally distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to calculate the test statistics, it is convenient to express the ANOVA model of the equation given above in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;. This can be done as explained next.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; are defined as deviations from the overall mean, the following constraints exist.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{i=1}{\overset{3}{\mathop \sum }}\,{{\zeta }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\zeta }_{1}}+{{\zeta }_{2}}+{{\zeta }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\zeta }_{3}}=-({{\zeta }_{1}}+{{\zeta }_{2}})\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of the blocks can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{\zeta }_{1}}={{\zeta }_{2}}=0\,\!&amp;lt;/math&amp;gt;.) In DOE++, the independent block effects, &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as Block[1] and Block[2], respectively.&lt;br /&gt;
&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{j=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{j}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;. The independent effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1] and A[2], respectively.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{k=1}{\overset{2}{\mathop \sum }}\,{{\delta }_{k}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\delta }_{1}}+{{\delta }_{2}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only one of the &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; effects is independent. Assuming that &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is independent, &amp;lt;math&amp;gt;{{\delta }_{2}}=-{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;. The independent effect, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, is displayed as B:B.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{j=1}{\overset{3}{\mathop \sum }}\,{{(\tau \delta )}_{jk}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{and   }\underset{k=1}{\overset{2}{\mathop \sum }}\,{{(\tau \delta )}_{jk}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{31}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{12}}+{{(\tau \delta )}_{22}}+{{(\tau \delta )}_{32}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{and   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{12}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{21}}+{{(\tau \delta )}_{22}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{31}}+{{(\tau \delta )}_{32}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last five equations given above represent four constraints as only four of the five equations are independent. Therefore, only two out of the six &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are independent, we can express the other four effects in terms of these effects. The independent effects, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1]B and A[2]B, respectively.&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be obtained using indicator variables. Since the block has three levels, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required, which need to be coded as shown next: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Block 1}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0\text{ } \\ &lt;br /&gt;
 &amp;amp; \text{Block 2}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{         } \\ &lt;br /&gt;
 &amp;amp; \text{Block 3}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{   }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; has three levels and two indicator variables, &amp;lt;math&amp;gt;{{x}_{3}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{4}}\,\!&amp;lt;/math&amp;gt;, are required:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{3}}=1,\text{   }{{x}_{4}}=0 \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{3}}=0,\text{   }{{x}_{4}}=1\text{           } \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{3}}=-1,\text{   }{{x}_{4}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; has two levels and can be represented using one indicator variable, &amp;lt;math&amp;gt;{{x}_{5}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Treatment Effect }{{\delta }_{1}}: &amp;amp; {{x}_{5}}=1 \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\delta }_{2}}: &amp;amp; {{x}_{5}}=-1  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; interaction will be represented by &amp;lt;math&amp;gt;{{x}_{3}}{{x}_{5}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{4}}{{x}_{5}}\,\!&amp;lt;/math&amp;gt;. The regression version of the ANOVA model can finally be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{\zeta }_{1}}\cdot {{x}_{1}}+{{\zeta }_{2}}\cdot {{x}_{2}}+{{\tau }_{1}}\cdot {{x}_{3}}+{{\tau }_{2}}\cdot {{x}_{4}}+{{\delta }_{1}}\cdot {{x}_{5}}+{{(\tau \delta )}_{11}}\cdot {{x}_{3}}{{x}_{5}}+{{(\tau \delta )}_{21}}\cdot {{x}_{4}}{{x}_{5}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In matrix notation this model can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:or:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   17.3  \\&lt;br /&gt;
   18.9  \\&lt;br /&gt;
   17.1  \\&lt;br /&gt;
   18.7  \\&lt;br /&gt;
   19.1  \\&lt;br /&gt;
   18.8  \\&lt;br /&gt;
   17.8  \\&lt;br /&gt;
   18.2  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18.3  \\&lt;br /&gt;
\end{matrix} \right]=\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; -1 &amp;amp; 0 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\zeta }_{1}}  \\&lt;br /&gt;
   {{\zeta }_{2}}  \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
   {{\delta }_{1}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{11}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{21}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{111}}  \\&lt;br /&gt;
   {{\epsilon }_{121}}  \\&lt;br /&gt;
   {{\epsilon }_{131}}  \\&lt;br /&gt;
   {{\epsilon }_{112}}  \\&lt;br /&gt;
   {{\epsilon }_{122}}  \\&lt;br /&gt;
   {{\epsilon }_{132}}  \\&lt;br /&gt;
   {{\epsilon }_{211}}  \\&lt;br /&gt;
   {{\epsilon }_{221}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{332}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, the sum of squares for the ANOVA model and the extra sum of squares for each of the factors can be calculated. These are used to calculate the mean squares that are used to obtain the test statistics.&lt;br /&gt;
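The &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix shown above can be constructed programmatically from the effects coding. A minimal numpy sketch, assuming the run ordering implied by the error-term subscripts (block, then fuel additive level, then speed level):

```python
import numpy as np

# Effects coding from the text: the last level of each factor is coded -1
block_code = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}  # zeta_1, zeta_2
a_code = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}      # tau_1, tau_2
b_code = {1: 1, 2: -1}                            # delta_1

rows = []
for i in (1, 2, 3):          # block (vehicle)
    for k in (1, 2):         # factor B (fuel additive type)
        for j in (1, 2, 3):  # factor A (speed)
            x1, x2 = block_code[i]
            x3, x4 = a_code[j]
            x5 = b_code[k]
            rows.append([1, x1, x2, x3, x4, x5, x3 * x5, x4 * x5])
X = np.array(rows, dtype=float)

print(X.shape)                   # (18, 8)
print(np.linalg.matrix_rank(X))  # 8 -- all eight coefficients are estimable
```

The first row, [1, 1, 0, 1, 0, 1, 1, 0], matches the first row of the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix above, and the full column rank confirms that every model coefficient can be estimated.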
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Sum of Squares for the Model====&lt;br /&gt;
&lt;br /&gt;
The model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, for the ANOVA model of this example can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; {{y}^{\prime }}[H-(\frac{1}{18})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 9.9256  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since seven effect terms (&amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;) are used in the model the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; is seven (&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=7\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The total sum of squares can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; {{y}^{\prime }}[I-(\frac{1}{18})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 10.7178  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are 18 observed response values, the number of degrees of freedom associated with the total sum of squares is 17 (&amp;lt;math&amp;gt;dof(S{{S}_{T}})=17\,\!&amp;lt;/math&amp;gt;). The error sum of squares can now be obtained:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 10.7178-9.9256 \\ &lt;br /&gt;
= &amp;amp; 0.7922  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 17-7 \\ &lt;br /&gt;
= &amp;amp; 10  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are no true replicates of the treatments (as can be seen from the design in the previous figure, where each treatment is run only once within each block), all of the error sum of squares is the sum of squares due to lack of fit. The lack of fit arises because the model used is not a full model, since it is assumed that there are no interactions between the blocks and the other effects.&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Extra Sum of Squares for the Factors====&lt;br /&gt;
&lt;br /&gt;
The sequential sum of squares for the blocks can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{Block}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}})-S{{S}_{TR}}(\mu ) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y-0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones, &amp;lt;math&amp;gt;{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the hat matrix, which is calculated using &amp;lt;math&amp;gt;{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}={{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}{{(X_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}^{\prime }{{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}})}^{-1}}X_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}^{\prime }\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the matrix containing only the first three columns of the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix. Thus&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{Block}}= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y-0 \\ &lt;br /&gt;
= &amp;amp; 0.1944-0 \\ &lt;br /&gt;
= &amp;amp; 0.1944  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent block effects, &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{Blocks}}\,\!&amp;lt;/math&amp;gt; is two (&amp;lt;math&amp;gt;dof(S{{S}_{Blocks}})=2\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Similarly, the sequential sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}},{{\tau }_{1}},{{\tau }_{2}})-S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}}) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}},{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-{{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 4.7756-0.1944 \\ &lt;br /&gt;
= &amp;amp; 4.5812  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sequential sum of squares for the other effects are obtained as &amp;lt;math&amp;gt;S{{S}_{B}}=4.9089\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{AB}}=0.2411\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Test Statistics====&lt;br /&gt;
&lt;br /&gt;
Knowing the sum of squares, the test statistics for each of the factors can be calculated. For example, the test statistic for the main effect of the blocks is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{Block}}= &amp;amp; \frac{M{{S}_{Block}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{Block}}/dof(S{{S}_{Blocks}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{0.1944/2}{0.7922/10} \\ &lt;br /&gt;
= &amp;amp; 1.227  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 10 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{Block}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.6663 \\ &lt;br /&gt;
= &amp;amp; 0.3337  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
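As a cross-check, the test statistic and &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value can be reproduced with scipy, using the sum of squares values given above:

```python
from scipy.stats import f

# Sum of squares and degrees of freedom from the text
ss_block, dof_block = 0.1944, 2
ss_e, dof_e = 0.7922, 10

f0 = (ss_block / dof_block) / (ss_e / dof_e)
p_value = f.sf(f0, dof_block, dof_e)  # upper-tail probability of F(2, 10)

print(round(f0, 3))       # 1.227
print(round(p_value, 3))  # 0.334
```

The survival function `f.sf` gives the upper-tail probability directly, matching the 0.3337 obtained in the text.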
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;gt; 0.1, we fail to reject &amp;lt;math&amp;gt;{{H}_{0}}:{{\zeta }_{i}}=0\,\!&amp;lt;/math&amp;gt; and conclude that there is no significant variation in the mileage from one vehicle to the other. Statistics to test the significance of other factors can be calculated in a similar manner. The complete analysis results obtained from DOE++ for this experiment are presented in the following figure.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.14.png|thumb|center|644px|Analysis results for the experiment in the [[ANOVA_for_Designed_Experiments#Example_3| example]].]]&lt;br /&gt;
&lt;br /&gt;
==Use of Regression to Calculate Sum of Squares==&lt;br /&gt;
&lt;br /&gt;
This section explains why DOE++ uses regression in all calculations related to the sum of squares. A number of textbooks present the method of direct summation to calculate the sum of squares, but this method is applicable only to balanced designs and may give incorrect results for unbalanced designs. For example, the sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; in a balanced factorial experiment with two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, is given as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{A}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,{{n}_{b}}n{{({{{\bar{y}}}_{i..}}-{{{\bar{y}}}_{...}})}^{2}} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{b}}n}-\frac{y_{...}^{2}}{{{n}_{a}}{{n}_{b}}n}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{n}_{b}}\,\!&amp;lt;/math&amp;gt; represents the levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; represents the number of samples for each combination of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The term &amp;lt;math&amp;gt;{{\bar{y}}_{i..}}\,\!&amp;lt;/math&amp;gt; is the mean value for the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{y}_{i..}}\,\!&amp;lt;/math&amp;gt; is the sum of all observations at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{y}_{...}}\,\!&amp;lt;/math&amp;gt; is the sum of all observations.&lt;br /&gt;
&lt;br /&gt;
The analogous term to calculate &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; in the case of an unbalanced design is given as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{A}}=\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\frac{y_{...}^{2}}{{{n}_{..}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{n}_{i.}}\,\!&amp;lt;/math&amp;gt; is the number of observations at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{n}_{..}}\,\!&amp;lt;/math&amp;gt; is the total number of observations. Similarly, to calculate the sum of squares for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;, the formulas are given as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{B}}= &amp;amp; \underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}-\frac{y_{...}^{2}}{{{n}_{..}}} \\ &lt;br /&gt;
 &amp;amp; S{{S}_{AB}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{ij.}^{2}}{{{n}_{ij}}}-\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}+\frac{y_{...}^{2}}{{{n}_{..}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Applying these relations to the unbalanced data of the last table, the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{AB}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{ij.}^{2}}{{{n}_{ij}}}-\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}+\frac{y_{...}^{2}}{{{n}_{..}}} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; \left( {{6}^{2}}+{{4}^{2}}+\frac{{{(42+6)}^{2}}}{2}+{{12}^{2}} \right)-\left( \frac{{{10}^{2}}}{2}+\frac{{{60}^{2}}}{3} \right) \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; -\left( \frac{{{54}^{2}}}{3}+\frac{{{16}^{2}}}{2} \right)+\frac{{{70}^{2}}}{5} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; -22  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
which is obviously incorrect, since a sum of squares cannot be negative. For a detailed discussion of this issue, refer to [[DOE References|Searle (1997, 1971)]].&lt;br /&gt;
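The negative result can be reproduced with a short Python sketch of the direct-summation formula. The cell contents are inferred from the calculation above: observations 6 and 4 at the first level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, and 42, 6 and 12 at the second:

```python
# Unbalanced 2x2 data: cell (i, j) maps to its list of observations
cells = {(1, 1): [6], (1, 2): [4],
         (2, 1): [42, 6], (2, 2): [12]}

y_dot = sum(sum(v) for v in cells.values())  # y... = 70
n_dot = sum(len(v) for v in cells.values())  # n.. = 5

def row_sum(i): return sum(sum(v) for (a, b), v in cells.items() if a == i)
def row_n(i):   return sum(len(v) for (a, b), v in cells.items() if a == i)
def col_sum(j): return sum(sum(v) for (a, b), v in cells.items() if b == j)
def col_n(j):   return sum(len(v) for (a, b), v in cells.items() if b == j)

# Direct-summation formula for SS_AB applied to unbalanced data
ss_ab = (sum(sum(v) ** 2 / len(v) for v in cells.values())
         - sum(row_sum(i) ** 2 / row_n(i) for i in (1, 2))
         - sum(col_sum(j) ** 2 / col_n(j) for j in (1, 2))
         + y_dot ** 2 / n_dot)
print(ss_ab)  # -22.0 -- negative, so the formula fails for unbalanced data
```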
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.6.png|thumb|center|400px|Example of an unbalanced design.]]&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
The correct sum of squares can be calculated as shown next. The &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrices for the design of the last table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   4  \\&lt;br /&gt;
   6  \\&lt;br /&gt;
   12  \\&lt;br /&gt;
   42  \\&lt;br /&gt;
\end{matrix} \right]\text{   and   }X=\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{AB}}={{y}^{\prime }}[H-(1/5)J]y-{{y}^{\prime }}[{{H}_{\tilde{\ }AB}}-(1/5)J]y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; is the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones. The matrix &amp;lt;math&amp;gt;{{H}_{\tilde{\ }AB}}\,\!&amp;lt;/math&amp;gt; can be calculated using &amp;lt;math&amp;gt;{{H}_{\tilde{\ }AB}}={{X}_{\tilde{\ }AB}}{{(X_{\tilde{\ }AB}^{\prime }{{X}_{\tilde{\ }AB}})}^{-1}}X_{\tilde{\ }AB}^{\prime }\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;{{X}_{\tilde{\ }AB}}\,\!&amp;lt;/math&amp;gt; is the design matrix, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, excluding the last column that represents the interaction effect &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;. Thus, the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{AB}}= &amp;amp; {{y}^{\prime }}[H-(1/5)J]y-{{y}^{\prime }}[{{H}_{\tilde{\ }AB}}-(1/5)J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 368-339.4286 \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 28.5714  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
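This hat-matrix calculation can be verified numerically with numpy, using the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrices given above:

```python
import numpy as np

y = np.array([6.0, 4.0, 6.0, 12.0, 42.0])
X = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1],
              [1, -1,  1, -1]], dtype=float)

def hat(M):
    # Hat (projection) matrix: M (M'M)^-1 M'
    return M @ np.linalg.inv(M.T @ M) @ M.T

n = len(y)
J = np.ones((n, n))
H = hat(X)               # full model, including the AB interaction column
H_no_ab = hat(X[:, :3])  # model with the interaction column removed

ss_ab = y @ (H - J / n) @ y - y @ (H_no_ab - J / n) @ y
print(round(ss_ab, 4))  # 28.5714
```

The extra sum of squares comes out positive, as it must, in contrast to the -22 produced by direct summation.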
&lt;br /&gt;
&lt;br /&gt;
This is the value calculated by DOE++ (see the first figure below for the experiment design and the second figure below for the analysis).&lt;br /&gt;
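The extra sum of squares computation for the interaction can be sketched numerically. The following is a minimal sketch, assuming NumPy is available; it uses the design matrix shown above, but since the full response vector is not reproduced in this excerpt, the values in y are hypothetical and the resulting sum of squares will not match the 28.5714 obtained in the text.

```python
import numpy as np

# Design matrix from the unbalanced example: columns are I, A, B, AB.
X = np.array([
    [1.,  1.,  1.,  1.],
    [1.,  1., -1., -1.],
    [1., -1.,  1., -1.],
    [1., -1., -1.,  1.],
    [1., -1.,  1., -1.],
])
y = np.array([8., 4., 6., 12., 42.])  # hypothetical responses for illustration

n = len(y)
J = np.ones((n, n))                   # matrix of ones

def hat(M):
    """Hat matrix M (M'M)^-1 M'."""
    return M @ np.linalg.inv(M.T @ M) @ M.T

H = hat(X)               # full model, including the AB column
H_noAB = hat(X[:, :3])   # reduced model, with the AB column dropped

# Extra sum of squares for the AB interaction:
SS_AB = y @ (H - J / n) @ y - y @ (H_noAB - J / n) @ y
```

Note that the same quantity is the drop in the error sum of squares when the interaction column is added to the model, which gives a convenient check on the computation.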
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.15.png|thumb|center|471px|Unbalanced experimental design for the data in the last table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.16.png|thumb|center|471px|Analysis for the unbalanced data in the last table.]]&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=ANOVA_for_Designed_Experiments&amp;diff=65260</id>
		<title>ANOVA for Designed Experiments</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=ANOVA_for_Designed_Experiments&amp;diff=65260"/>
		<updated>2017-08-26T03:28:53Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Box-Cox Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:Doebook|5}}&lt;br /&gt;
In [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], methods were presented to model the relationship between a response and the associated factors (referred to as predictor variables in the context of regression) based on an observed data set. Such studies, where observed values of the response are used to establish an association between the response and the factors, are called &#039;&#039;observational studies&#039;&#039;. However, in the case of observational studies, it is difficult to establish a cause-and-effect relationship between the observed factors and the response. This is because a number of alternative justifications can be used to explain the observed change in the response values. For example, a regression model fitted to data on the population of cities and road accidents might show a positive regression relation. However, this relation does not imply that an increase in a city&#039;s population causes an increase in road accidents. It could be that a number of other factors such as road conditions, traffic control and the degree to which the residents of the city follow the traffic rules affect the number of road accidents in the city and the increase in the number of accidents seen in the study is caused by these factors. Since the observational study does not take the effect of these factors into account, the assumption that an increase in a city&#039;s population will lead to an increase in road accidents is not a valid one. For example, the population of a city may increase but road accidents in the city may decrease because of better traffic control. To establish a cause-and-effect relationship, the study should be conducted in such a way that the effect of all other factors is excluded from the investigation.&lt;br /&gt;
&lt;br /&gt;
The studies that enable the establishment of a cause-and-effect relationship are called &#039;&#039;experiments&#039;&#039;. In experiments the response is investigated by studying only the effect of the factor(s) of interest and excluding all other effects that may provide alternative justifications for the observed change in response. This is done in two ways. First, the levels of the factors to be investigated are carefully selected and then strictly controlled during the execution of the experiment. The aspect of selecting what factor levels should be investigated in the experiment is called the &#039;&#039;design&#039;&#039; of the experiment. The second distinguishing feature of experiments is that observations in an experiment are recorded in a random order. By doing this, it is hoped that the effect of all other factors not being investigated in the experiment will cancel out, so that the change in the response is the result of only the investigated factors. Using these two techniques, experiments ensure that alternative justifications for observed changes in the response are ruled out, thereby enabling the establishment of a cause-and-effect relationship between the response and the investigated factors.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Randomization&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The aspect of recording observations in an experiment in a random order is referred to as &#039;&#039;randomization&#039;&#039;. Specifically, randomization is the process of assigning the various levels of the investigated factors to the experimental units in a random fashion.  An experiment is said to be &#039;&#039;completely randomized&#039;&#039; if the probability of an experimental unit being subjected to any level of a factor is equal for all the experimental units. The importance of randomization can be illustrated using an example. Consider an experiment where the effect of the speed of a lathe machine on the surface finish of a product is being investigated. In order to save time, the experimenter records surface finish values by running the lathe machine continuously and recording observations in the order of increasing speeds. The analysis of the experiment data shows that an increase in lathe speed causes a decrease in the quality of the surface finish. However, the results of the experiment are disputed by the lathe operator, who claims that he has been able to obtain better surface finish quality by operating the lathe machine at higher speeds. It is later found that the faulty results were caused by overheating of the tool used in the machine. Since the lathe was run continuously in the order of increasing speeds, the observations were recorded in the order of increasing tool temperatures. This problem could have been avoided if the experimenter had randomized the experiment and taken readings at the various lathe speeds in a random fashion. This would have required the experimenter to stop and restart the machine at every observation, thereby keeping the temperature of the tool within a reasonable range. Randomization would have ensured that the effect of heating of the machine tool was not included in the experiment.&lt;br /&gt;
&lt;br /&gt;
==Analysis of Single Factor Experiments==&lt;br /&gt;
&lt;br /&gt;
As explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], the analysis of observational studies involves the use of regression models. The analysis of experimental studies involves the use of analysis of variance (ANOVA) models. For a comparison of the two models see [[ANOVA_for_Designed_Experiments#Fitting_ANOVA_Models|Fitting ANOVA Models]]. In single factor experiments, ANOVA models are used to compare the mean response values at different levels of the factor. Each level of the factor is investigated to see if the response is significantly different from the response at other levels of the factor.  The analysis of single factor experiments is often referred to as &#039;&#039;one-way ANOVA&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To illustrate the use of ANOVA models in the analysis of experiments, consider a single factor experiment where the analyst wants to see if the surface finish of certain parts is affected by the speed of a lathe machine. Data is collected for three speeds (or three treatments). Each treatment is replicated four times. Therefore, this experiment design is balanced. Surface finish values recorded using randomization are shown in the following table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.1.png|thumb|center|400px|Surface finish values for three speeds of a lathe machine.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be stated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}={{\mu }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model assumes that the response at each factor level, &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, is the sum of the mean response at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, and a random error term, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. The subscript &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; denotes the factor level while the subscript &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; denotes the replicate. If there are &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of the factor and &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates at each level then &amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j=1,2,...,m\,\!&amp;lt;/math&amp;gt;. The random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are assumed to be normally and independently distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. Therefore, the response at each level can be thought of as a normally distributed population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and constant variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. The equation given above is referred to as the &#039;&#039;means model&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The means model can also be written using &amp;lt;math&amp;gt;{{\mu }_{i}}=\mu +{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the effect due to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Such an ANOVA model is called the &#039;&#039;effects model&#039;&#039;.  In the effects models the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, represent the deviations from the overall mean, &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;. Therefore, the following constraint exists on the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;s: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Fitting ANOVA Models===&lt;br /&gt;
&lt;br /&gt;
To fit ANOVA models and carry out hypothesis testing in single factor experiments, it is convenient to express the effects model in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt; (the form used for multiple linear regression models in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This can be done as shown next. Using the effects model, the ANOVA model for the single factor experiment in the first table can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment effect. There are three treatments in the first table (500, 600 and 700). Therefore, there are three treatment effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{3}}\,\!&amp;lt;/math&amp;gt;. The following constraint exists for these effects:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   } {{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the first treatment, the ANOVA model for the single factor experiment in the above table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{1j}}=\mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+0\cdot {{\tau }_{3}}+{{\epsilon }_{1j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;, the model for the first treatment is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}-0\cdot ({{\tau }_{1}}+{{\tau }_{2}})+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{or   }{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Models for the second and third treatments can be obtained in a similar way. The models for the three treatments are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{First Treatment}: &amp;amp; {{Y}_{1j}}=1\cdot \mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{Second Treatment}: &amp;amp; {{Y}_{2j}}=1\cdot \mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{2j}} \\ &lt;br /&gt;
\text{Third Treatment}: &amp;amp; {{Y}_{3j}}=1\cdot \mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{3j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The coefficients of the treatment effects &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; can be expressed using two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the indicator variables &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, the ANOVA model for the data in the first table now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{x}_{1}}\cdot {{\tau }_{1}}+{{x}_{2}}\cdot {{\tau }_{2}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation can be rewritten by including subscripts &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; (for the level of the factor) and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; (for the replicate number) as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{x}_{i1}}\cdot {{\tau }_{1}}+{{x}_{i2}}\cdot {{\tau }_{2}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation given above represents the &amp;quot;regression version&amp;quot; of the ANOVA model.&lt;br /&gt;
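The indicator-variable coding described above can be generated programmatically. The following is a minimal sketch, assuming NumPy is available; the helper name effect_code is hypothetical, not part of any library.

```python
import numpy as np

def effect_code(level, n_levels):
    """Effect coding (hypothetical helper): level i (1-based) among
    n_levels levels maps to n_levels - 1 indicator variables; the
    last level is coded as all -1."""
    x = np.zeros(n_levels - 1)
    if level == n_levels:   # last treatment: every indicator is -1
        x -= 1.0
    else:
        x[level - 1] = 1.0
    return x

# Three treatments (500, 600 and 700), as in the surface-finish example:
codes = [effect_code(i, 3) for i in (1, 2, 3)]
# codes[0] -> [ 1.  0.]   (treatment effect tau_1)
# codes[1] -> [ 0.  1.]   (treatment effect tau_2)
# codes[2] -> [-1. -1.]   (treatment effect tau_3)
```

Because the last level is coded as all -1, the indicator columns automatically respect the constraint that the treatment effects sum to zero.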
&lt;br /&gt;
&lt;br /&gt;
===Treat Numerical Factors as Qualitative or Quantitative?===&lt;br /&gt;
&lt;br /&gt;
It can be seen from the equation given above that in an ANOVA model each factor is treated as a qualitative factor. In the present example the factor, lathe speed, is a quantitative factor with three levels. But the ANOVA model treats this factor as a qualitative factor with three levels. Therefore, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required to represent this factor. &lt;br /&gt;
&lt;br /&gt;
Note that in a regression model a variable can be treated as either a quantitative or a qualitative variable. The factor, lathe speed, would be used as a quantitative factor and represented with a single predictor variable in a regression model. For example, if a first order model were to be fitted to the data in the first table, then the regression model would take the form &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. If a second order regression model were to be fitted, the regression model would be &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}x_{i1}^{2}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. Notice that unlike these regression models, the regression version of the ANOVA model does not make any assumption about the nature of the relationship between the response and the factor being investigated.&lt;br /&gt;
&lt;br /&gt;
The choice of treating a particular factor as a quantitative or qualitative variable depends on the objective of the experimenter. In the case of the data of the first table, the objective of the experimenter is to compare the levels of the factor to see if a change in the levels leads to a significant change in the response. The objective is not to make predictions on the response for a given level of the factor. Therefore, the factor is treated as a qualitative factor in this case. If the objective of the experimenter were prediction or optimization, the experimenter would focus on the nature of the relationship between the factor, lathe speed, and the response, surface finish, and would model the factor as a quantitative factor to make accurate predictions.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model in the Form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be expanded for the three treatments and four replicates of the data in the first table as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{11}}= &amp;amp; 6=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{11}}\text{    Level 1, Replicate 1} \\ &lt;br /&gt;
{{Y}_{21}}= &amp;amp; 13=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{21}}\text{  Level 2, Replicate 1} \\ &lt;br /&gt;
{{Y}_{31}}= &amp;amp; 23=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{31}}\text{  Level 3, Replicate 1} \\ &lt;br /&gt;
{{Y}_{12}}= &amp;amp; 13=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{12}}\text{  Level 1, Replicate 2} \\ &lt;br /&gt;
{{Y}_{22}}= &amp;amp; 16=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{22}}\text{  Level 2, Replicate 2} \\ &lt;br /&gt;
{{Y}_{32}}= &amp;amp; 20=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{32}}\text{  Level 3, Replicate 2} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; ... \\ &lt;br /&gt;
{{Y}_{34}}= &amp;amp; 18=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{34}}\text{  Level 3, Replicate 4}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The corresponding matrix notation is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{11}}  \\&lt;br /&gt;
   {{Y}_{21}}  \\&lt;br /&gt;
   {{Y}_{31}}  \\&lt;br /&gt;
   {{Y}_{12}}  \\&lt;br /&gt;
   {{Y}_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{34}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y= &amp;amp; X\beta +\epsilon  \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   23  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   16  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The matrices &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; are used in the calculation of the sum of squares in the next section. The data in the first table can be entered into DOE++ as shown in the figure below.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.1.png|thumb|center|550px|Single factor experiment design for the data in the first table.]]&lt;br /&gt;
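Expressed in this matrix form, the model can be fit by ordinary least squares. The following is a minimal sketch, assuming NumPy is available; only some of the twelve responses are quoted in the expansion above, so the remaining values in y below are made up for illustration and the estimates will not match the figures.

```python
import numpy as np

# Effect-coded design matrix for 3 treatments with 4 replicates each
# (rows ordered treatment 1..3 within each replicate, as in the text):
row = {1: [1., 1., 0.], 2: [1., 0., 1.], 3: [1., -1., -1.]}
X = np.array([row[t] for _ in range(4) for t in (1, 2, 3)])

# Responses: values quoted in the expansion above where available,
# the rest assumed for illustration.
y = np.array([6., 13., 23., 13., 16., 20., 9., 18., 22., 11., 15., 18.])

# Least-squares estimates of mu, tau_1 and tau_2:
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
mu_hat, tau1_hat, tau2_hat = beta
tau3_hat = -(tau1_hat + tau2_hat)   # from the constraint sum(tau_i) = 0
```

For a balanced design, the estimates reduce to the familiar quantities: mu_hat is the grand mean of the responses, and each tau_i_hat is the corresponding treatment mean minus the grand mean.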
&lt;br /&gt;
===Hypothesis Test in Single Factor Experiments===&lt;br /&gt;
&lt;br /&gt;
The hypothesis test in single factor experiments examines the ANOVA model to see if the response at any level of the investigated factor is significantly different from that at the other levels. If this is not the case and the response at all levels is not significantly different, then it can be concluded that the investigated factor does not affect the response. The test on the ANOVA model is carried out by checking to see if any of the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, are non-zero. The test is similar to the test of significance of regression mentioned in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] in the context of regression models. The hypotheses statements for this test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0 \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\tau }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test for &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is carried out using the following statistic:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{TR}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the ANOVA model and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. Note that in the case of ANOVA models we use the notation &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment mean square) for the model mean square and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment sum of squares) for the model sum of squares (instead of &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression mean square, and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression sum of squares, used in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This is done to indicate that the model under consideration is the ANOVA model and not the regression model. The calculations to obtain &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; are identical to the calculations to obtain &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt; explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The sum of squares to obtain the statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt; can be calculated as explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. Using the data in the first table, the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.1667 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.1667 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.1667  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 232.1667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the number of levels of the factor, &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; represents the replicates at each level, &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; represents the vector of the response values, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; represents the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; represents the matrix of ones. (For details on each of these terms, refer to [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]].)&lt;br /&gt;
Since two effect terms, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are used in the regression version of the ANOVA model, the number of degrees of freedom associated with the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, is two.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be obtained as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.9167 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.9167 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.9167  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 306.6667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;I\,\!&amp;lt;/math&amp;gt; is the identity matrix. Since there are 12 data points in all, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; is 11. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{T}})=11\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 306.6667-232.1667 \\ &lt;br /&gt;
= &amp;amp; 74.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 11-2 \\ &lt;br /&gt;
= &amp;amp; 9  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
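The sums of squares and degrees of freedom above can be computed directly from the matrix expressions. The following is a minimal sketch, assuming NumPy is available; since the full data table is only shown in the figure, the response values in y are partly assumed, so the sums of squares will not match 232.1667, 306.6667 and 74.5 from the text.

```python
import numpy as np

na, m = 3, 4                           # factor levels, replicates per level
n = na * m

# Effect-coded design matrix (columns: intercept, tau_1, tau_2):
row = {1: [1., 1., 0.], 2: [1., 0., 1.], 3: [1., -1., -1.]}
X = np.array([row[t] for _ in range(m) for t in (1, 2, 3)])
y = np.array([6., 13., 23., 13., 16., 20., 9., 18., 22., 11., 15., 18.])

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
J = np.ones((n, n))                    # matrix of ones
I = np.eye(n)                          # identity matrix

SS_TR = y @ (H - J / n) @ y            # treatment (model) sum of squares
SS_T = y @ (I - J / n) @ y             # total sum of squares
SS_E = SS_T - SS_TR                    # error sum of squares

dof_TR, dof_T = na - 1, n - 1
dof_E = dof_T - dof_TR
```

The hat-matrix expressions agree with the familiar groupwise formulas: SS_TR equals m times the sum of squared deviations of the treatment means from the grand mean, and SS_T equals the sum of squared deviations of all observations from the grand mean.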
&lt;br /&gt;
&lt;br /&gt;
The test statistic can now be calculated using the equation given in [[ANOVA_for_Designed_Experiments#Hypothesis_Test_in_Single_Factor_Experiments|Hypothesis Test in Single Factor Experiments]] as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{f}_{0}}= &amp;amp; \frac{M{{S}_{TR}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{TR}}/dof(S{{S}_{TR}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{232.1667/2}{74.5/9} \\ &lt;br /&gt;
= &amp;amp; 14.0235  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value for the statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 9 degrees of freedom in the denominator is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{f}_{0}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.9983 \\ &lt;br /&gt;
= &amp;amp; 0.0017  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
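Plugging in the sums of squares and degrees of freedom derived above, a short sketch (using the F distribution from scipy.stats, assumed available) reproduces the test statistic and the p value:

```python
from scipy.stats import f as f_dist

SS_TR, SS_E = 232.1667, 74.5       # sums of squares from the text
dof_TR, dof_E = 2, 9               # degrees of freedom from the text

MS_TR = SS_TR / dof_TR             # treatment mean square
MS_E = SS_E / dof_E                # error mean square
f0 = MS_TR / MS_E                  # test statistic, approx. 14.02

# Right-tail probability of F(2, 9) beyond f0, i.e., 1 - P(F at most f0):
p_value = f_dist.sf(f0, dof_TR, dof_E)
```

Since the p value is well below the 0.1 significance level used in the text, the null hypothesis of equal treatment effects is rejected.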
&lt;br /&gt;
&lt;br /&gt;
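The calculation above can be checked numerically. The following sketch (variable names are illustrative) recomputes the test statistic and the p value from the sums of squares given in this example, using the closed-form tail probability of the F distribution that holds when the numerator has 2 degrees of freedom: P(F &gt; f) = (1 + 2f/d2)^(-d2/2).

```python
# Sums of squares and degrees of freedom from the lathe-speed example
ss_tr, dof_tr = 232.1667, 2      # treatment sum of squares
ss_t,  dof_t  = 306.6667, 11     # total sum of squares
ss_e  = ss_t - ss_tr             # error sum of squares: 74.5
dof_e = dof_t - dof_tr           # 11 - 2 = 9

ms_tr = ss_tr / dof_tr           # treatment mean square
ms_e  = ss_e / dof_e             # error mean square
f0 = ms_tr / ms_e                # test statistic

# Closed-form F(2, d2) tail probability: P(F > f) = (1 + 2f/d2)^(-d2/2)
p_value = (1 + 2 * f0 / dof_e) ** (-dof_e / 2)
print(round(f0, 4), round(p_value, 4))   # 14.0235 0.0017
```

For other numerator degrees of freedom there is no simple closed form, and a library routine such as `scipy.stats.f.sf` would normally be used instead.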
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that a change in the lathe speed has a significant effect on the surface finish. DOE++ displays these results in the ANOVA table, as shown in the figure below. The values of S and R-sq are the standard error and the coefficient of determination for the model, respectively. These values are explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] and indicate how well the model fits the data. The values in the figure below indicate that the fit of the ANOVA model is fair.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.2.png|thumb|center|650px|ANOVA table for the data in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the &#039;&#039;i&#039;&#039;&amp;lt;sup&amp;gt;th&amp;lt;/sup&amp;gt; Treatment Mean===&lt;br /&gt;
&lt;br /&gt;
The response at each treatment of a single factor experiment can be assumed to be a normal population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt; provided that the error terms can be assumed to be normally distributed. A point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is the average response at each treatment, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;. Since this is a sample average, the associated variance is &amp;lt;math&amp;gt;{{\sigma }^{2}}/{{m}_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;{{m}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of replicates at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment. Therefore, the confidence interval on &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution. Recall from [[Statistical_Background_on_DOE| Statistical Background on DOE]] (inference on population mean when variance is unknown) that: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}/{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{M{{S}_{E}}/{{m}_{i}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
has a &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with degrees of freedom &amp;lt;math&amp;gt;=dof(S{{S}_{E}})\,\!&amp;lt;/math&amp;gt;. Therefore, a 100 (&amp;lt;math&amp;gt;1-\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment mean, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, for the first treatment of the lathe speed we have:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}= &amp;amp; {{{\bar{y}}}_{1\cdot }} \\ &lt;br /&gt;
= &amp;amp; \frac{6+13+7+8}{4} \\ &lt;br /&gt;
= &amp;amp; 8.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In DOE++, this value is displayed as the Estimated Mean for the first level, as shown in the Data Summary table in the figure below. The value displayed as the standard deviation for this level is simply the sample standard deviation calculated using the observations corresponding to this level.  The 90% confidence interval for this treatment is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{M{{S}_{E}}}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833\sqrt{\frac{(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833(1.44) \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 2.64  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}\,\!&amp;lt;/math&amp;gt; are 5.9 and 11.1, respectively. &lt;br /&gt;
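The interval computation can be reproduced with a few lines of code. This sketch uses the values from the example, with the critical value t(0.05, 9) = 1.833 taken from the text rather than computed:

```python
import math

mse = 74.5 / 9                    # error mean square from the ANOVA results
m = 4                             # replicates per treatment
y_bar_1 = (6 + 13 + 7 + 8) / 4    # mean response of the first treatment: 8.5

t_crit = 1.833                    # t(0.05, 9) for a two-sided 90% interval
half_width = t_crit * math.sqrt(mse / m)
lo, hi = y_bar_1 - half_width, y_bar_1 + half_width
print(round(lo, 1), round(hi, 1))   # 5.9 11.1
```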
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.3.png|thumb|center|650px|Data Summary table for the single factor experiment in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the Difference in Two Treatment Means===&lt;br /&gt;
&lt;br /&gt;
The confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is used to compare two levels of the factor at a given significance. If the confidence interval does not include the value of zero, it is concluded that the two levels of the factor are significantly different. The point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt;. The variance for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})= &amp;amp; var({{{\bar{y}}}_{i\cdot }})+var({{{\bar{y}}}_{j\cdot }}) \\ &lt;br /&gt;
= &amp;amp; {{\sigma }^{2}}/{{m}_{i}}+{{\sigma }^{2}}/{{m}_{j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For balanced designs all &amp;lt;math&amp;gt;{{m}_{i}}=m\,\!&amp;lt;/math&amp;gt;. Therefore:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})=2{{\sigma }^{2}}/m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The standard deviation for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; can be obtained by taking the square root of &amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})\,\!&amp;lt;/math&amp;gt; and is referred to as the pooled standard error:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for the difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2{{{\hat{\sigma }}}^{2}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2M{{S}_{E}}/m}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then a 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, an estimate of the difference in the first and second treatment means of the lathe speed, &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}-{{{\hat{\mu }}}_{2}}= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }} \\ &lt;br /&gt;
= &amp;amp; 8.5-13.25 \\ &lt;br /&gt;
= &amp;amp; -4.75  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The pooled standard error for this difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2M{{S}_{E}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 2.0344  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To test &amp;lt;math&amp;gt;{{H}_{0}}:{{\mu }_{1}}-{{\mu }_{2}}=0\,\!&amp;lt;/math&amp;gt;, the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}-({{\mu }_{1}}-{{\mu }_{2}})}{\sqrt{2M{{S}_{E}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75-(0)}{\sqrt{\tfrac{2(74.5/9)}{4}}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75}{2.0344} \\ &lt;br /&gt;
= &amp;amp; -2.3348  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In DOE++, the value of the statistic is displayed in the Mean Comparisons table under the column T Value as shown in the figure below. The 90% confidence interval on the difference &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833\sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833(2.0344) \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 3.729  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hence the 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-8.479\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-1.021\,\!&amp;lt;/math&amp;gt;, respectively. These values are displayed under the Low CI and High CI columns in the following figure. Since the confidence interval for this pair of means does not include zero, it can be concluded that these means are significantly different at 90% confidence. This conclusion can also be arrived at using the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value, noting that the hypothesis is two-sided. The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to the statistic &amp;lt;math&amp;gt;{{t}_{0}}=-2.3348\,\!&amp;lt;/math&amp;gt;, based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 9 degrees of freedom, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 2\times (1-P(T\le |{{t}_{0}}|)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-P(T\le 2.3348)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-0.9778) \\ &lt;br /&gt;
= &amp;amp; 0.0444  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, the means are significantly different at 90% confidence. Bounds on the difference between other treatment pairs can be obtained in a similar manner and it is concluded that all treatments are significantly different.&lt;br /&gt;
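The comparison of the first two treatments can be summarized in a short numerical sketch. As before, the critical value t(0.05, 9) = 1.833 is taken from the text; the variable names are illustrative:

```python
import math

mse = 74.5 / 9                         # error mean square
m = 4                                  # replicates per treatment
diff = 8.5 - 13.25                     # ybar1 - ybar2 = -4.75

se = math.sqrt(2 * mse / m)            # pooled standard error
t0 = diff / se                         # t statistic for H0: mu1 - mu2 = 0

t_crit = 1.833                         # t(0.05, 9)
lo, hi = diff - t_crit * se, diff + t_crit * se
print(round(se, 4), round(t0, 2), round(lo, 3), round(hi, 3))
# 2.0344 -2.33 -8.479 -1.021
```

Since the interval (lo, hi) excludes zero, the two treatment means differ at 90% confidence, in agreement with the p value computed above.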
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
&lt;br /&gt;
Plots of residuals, &amp;lt;math&amp;gt;{{e}_{ij}}\,\!&amp;lt;/math&amp;gt;, similar to the ones discussed in the previous chapters on regression, are used to ensure that the assumptions associated with the ANOVA model are not violated.  The ANOVA model assumes that the random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are normally and independently distributed with the same variance for each treatment. The normality assumption can be checked by obtaining a normal probability plot of the residuals. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.4.png|thumb|center|644px|Mean Comparisons table for the data in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equality of variance is checked by plotting residuals against the treatments and the treatment averages, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;  (also referred to as fitted values), and inspecting the spread in the residuals. If a pattern is seen in these plots, this indicates the need to use a suitable transformation on the response that will ensure variance equality. Box-Cox transformations are discussed in the next section. To check for independence of the random error terms, residuals are plotted against time or run-order to ensure that a pattern does not exist in these plots. Residual plots for the given example are shown in the following two figures. The plots show that the assumptions associated with the ANOVA model are not violated.   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.5.png|thumb|center|550px|Normal probability plot of residuals for the single factor experiment in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.6.png|thumb|center|550px|Plot of residuals against fitted values for the single factor experiment in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Box-Cox Method===&lt;br /&gt;
&lt;br /&gt;
Transformations on the response may be used when residual plots for an experiment show a pattern. This indicates that the equality of variance does not hold for the residuals of the given model. The Box-Cox method can be used to automatically identify a suitable power transformation for the data based on the following relationship:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}^{*}}={{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is determined using the given data such that &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is minimized. The values of &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; are not used as is because of issues related to calculation or comparison of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values for different values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. For example, for &amp;lt;math&amp;gt;\lambda =0\,\!&amp;lt;/math&amp;gt; all response values will become 1. Therefore, the following relationship is used to obtain &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y^{\lambda }=\left\{ &lt;br /&gt;
\begin{array}{cc}&lt;br /&gt;
\frac{y^{\lambda }-1}{\lambda \dot{y}^{\lambda -1}} &amp;amp; \lambda \neq 0 \\ &lt;br /&gt;
\dot{y}\ln y &amp;amp; \lambda =0&lt;br /&gt;
\end{array}&lt;br /&gt;
\right.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\dot{y}=\exp \left[ \frac{1}{n}\sum\limits_{i=1}^{n}\ln y_{i}\right]\,\!&amp;lt;/math&amp;gt; is the geometric mean of the response values.&lt;br /&gt;
Once all &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; values are obtained for a value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, the corresponding &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for these values is obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;. The process is repeated for a number of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values to obtain a plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then the value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is selected as the required transformation for the given data. DOE++ plots &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values because the range of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values is large; otherwise, all values could not be displayed on the same plot. The range of search for the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value in the software is from &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt;, because larger values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are usually not meaningful. DOE++ also displays a recommended transformation based on the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value obtained, as shown in the table below.&lt;br /&gt;
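The scaled transformation above can be sketched as a small helper function. This is an illustrative implementation (the function name is not from the source); it uses the geometric mean so that the resulting error sums of squares remain comparable across different values of lambda:

```python
import math

def boxcox(y, lam):
    """Scaled Box-Cox power transformation of a list of responses.

    y_dot is the geometric mean of the responses; dividing by
    lam * y_dot**(lam - 1) keeps SS_E comparable across lambda values.
    """
    y_dot = math.exp(sum(math.log(v) for v in y) / len(y))  # geometric mean
    if lam == 0:
        return [y_dot * math.log(v) for v in y]             # limiting case
    return [(v ** lam - 1) / (lam * y_dot ** (lam - 1)) for v in y]

# For lambda = 1 the transformation reduces to a shift by 1
print(boxcox([6, 13, 7, 8], 1))   # [5.0, 12.0, 6.0, 7.0]
```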
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table border=&amp;quot;2&amp;quot; cellpadding=&amp;quot;5&amp;quot; align=&amp;quot;center&amp;quot; &amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;font size=&amp;quot;3&amp;quot;&amp;gt;Best Lambda&amp;lt;/font&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;font size=&amp;quot;3&amp;quot;&amp;gt;Recommended Transformation&amp;lt;/font&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&amp;lt;font size=&amp;quot;3&amp;quot;&amp;gt;Equation&amp;lt;/font&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr align=&amp;quot;center&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;-2.5&amp;lt;\lambda \leq -1.5\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Power} \\ \lambda =-2\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\frac{1}{Y^{2}}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr align=&amp;quot;center&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;-1.5&amp;lt;\lambda \leq -0.75\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Reciprocal} \\ \lambda =-1\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\frac{1}{Y}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr align=&amp;quot;center&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;-0.75&amp;lt;\lambda \leq -0.25\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Reciprocal Square Root} \\ \lambda =-0.5\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\frac{1}{\sqrt{Y}}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr align=&amp;quot;center&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;-0.25&amp;lt;\lambda \leq 0.25\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Natural Log} \\ \lambda =0\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\ln Y\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr align=&amp;quot;center&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;0.25&amp;lt;\lambda \leq 0.75\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Square Root} \\ \lambda =0.5\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=\sqrt{Y}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr align=&amp;quot;center&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;0.75&amp;lt;\lambda \leq 1.5\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{None} \\ \lambda =1\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=Y\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr align=&amp;quot;center&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;1.5&amp;lt;\lambda \leq 2.5\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;\begin{array}{c}\text{Power} \\ \lambda =2\end{array}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td&amp;gt;&amp;lt;math&amp;gt;Y^{\ast }=Y^{2}\,\!&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.2.png|thumb|center|400px|Recommended Box-Cox power transformations.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Confidence intervals on the selected &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values are also available. Let &amp;lt;math&amp;gt;S{{S}_{E}}(\lambda )\,\!&amp;lt;/math&amp;gt; be the value of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; corresponding to the selected value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then, to calculate the 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence intervals on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, we need to calculate &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}^{*}}=S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The required limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are the two values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the value &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; (on the plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;). If the limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; do not include the value of one, then the transformation is applicable for the given data.&lt;br /&gt;
Note that the power transformations are not defined for response values that are negative or zero. DOE++ deals with negative and zero response values using the following equations, which add a suitable quantity to all of the response values when a zero or negative response value is encountered. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{rll}&lt;br /&gt;
y\left( i\right) =&amp;amp; y\left( i\right) +\left| y_{\min }\right|\times 1.1 &amp;amp; \text{Negative Response} \\ &lt;br /&gt;
y\left( i\right) =&amp;amp; y\left( i\right) +1 &amp;amp; \text{Zero Response}&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here &amp;lt;math&amp;gt;{{y}_{\min }}\,\!&amp;lt;/math&amp;gt; represents the minimum response value and &amp;lt;math&amp;gt;\left| {{y}_{\min }} \right|\,\!&amp;lt;/math&amp;gt; represents the absolute value of the minimum response. &lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
To illustrate the Box-Cox method, consider the experiment given in the first table. Transformed response values for various values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; can be calculated using the equation for &amp;lt;math&amp;gt;{Y}^{\lambda}\,\!&amp;lt;/math&amp;gt; given in [[ANOVA_for_Designed_Experiments#Box-Cox_Method|Box-Cox Method]]. Knowing the hat matrix, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values corresponding to each of these &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values can easily be obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values calculated for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values between &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt; for the given data are shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \lambda  &amp;amp; {} &amp;amp; S{{S}_{E}} &amp;amp; \ln S{{S}_{E}}  \\&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   -5 &amp;amp; {} &amp;amp; 5947.8 &amp;amp; 8.6908  \\&lt;br /&gt;
   -4 &amp;amp; {} &amp;amp; 1946.4 &amp;amp; 7.5737  \\&lt;br /&gt;
   -3 &amp;amp; {} &amp;amp; 696.5 &amp;amp; 6.5461  \\&lt;br /&gt;
   -2 &amp;amp; {} &amp;amp; 282.2 &amp;amp; 5.6425  \\&lt;br /&gt;
   -1 &amp;amp; {} &amp;amp; 135.8 &amp;amp; 4.9114  \\&lt;br /&gt;
   0 &amp;amp; {} &amp;amp; 83.9 &amp;amp; 4.4299  \\&lt;br /&gt;
   1 &amp;amp; {} &amp;amp; 74.5 &amp;amp; 4.3108  \\&lt;br /&gt;
   2 &amp;amp; {} &amp;amp; 101.0 &amp;amp; 4.6154  \\&lt;br /&gt;
   3 &amp;amp; {} &amp;amp; 190.4 &amp;amp; 5.2491  \\&lt;br /&gt;
   4 &amp;amp; {} &amp;amp; 429.5 &amp;amp; 6.0627  \\&lt;br /&gt;
   5 &amp;amp; {} &amp;amp; 1057.6 &amp;amp; 6.9638  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A plot of &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for various &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values, as obtained from DOE++, is shown in the following figure. The value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; that gives the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is identified as 0.7841. The &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; value corresponding to this value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is 73.74. A 90% confidence interval on this &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value is calculated as follows. &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; can be obtained as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}^{*}}= &amp;amp; S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{t_{0.05,9}^{2}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{{{1.833}^{2}}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 101.27  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\ln S{{S}^{*}}=4.6178\,\!&amp;lt;/math&amp;gt;. The &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values corresponding to this value in the following figure are &amp;lt;math&amp;gt;-0.4686\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0052\,\!&amp;lt;/math&amp;gt;; these are the 90% confidence limits on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Since the confidence limits include the value of 1, a transformation is not required for the data in the first table.&lt;br /&gt;
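The cutoff value can be verified directly from the quantities in this example. The sketch below recomputes SS* and its logarithm, with the critical value t(0.05, 9) = 1.833 taken from the text:

```python
import math

ss_e_best = 73.74        # SS_E at the best lambda (0.7841)
dof_e = 9                # error degrees of freedom
t_crit = 1.833           # t(0.05, 9)

# Cutoff used to read the confidence limits on lambda off the plot
ss_star = ss_e_best * (1 + t_crit ** 2 / dof_e)
print(round(ss_star, 2), round(math.log(ss_star), 4))   # 101.27 4.6178
```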
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:doe6.7.png|thumb|center|400px|Box-Cox power transformation plot for the data in the first table.]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Experiments with Several Factors - Factorial Experiments==&lt;br /&gt;
&lt;br /&gt;
Experiments with two or more factors are encountered frequently. The best way to carry out such experiments is by using  factorial experiments.  Factorial experiments are experiments in which all combinations of factors are investigated in each replicate of the experiment. Factorial experiments are the only means to completely and systematically study interactions between factors in addition to identifying significant factors.  One-factor-at-a-time experiments (where each factor is investigated separately by keeping all the remaining factors constant) do not reveal the interaction effects between the factors. Further, in one-factor-at-a-time experiments full randomization is not possible.&lt;br /&gt;
&lt;br /&gt;
To illustrate factorial experiments consider an experiment where the response is investigated for two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. Assume that the response is studied at two levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; representing the lower level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; representing the higher level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;. Similarly, let &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; represent the two levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; that are being investigated in this experiment. Since there are two factors with two levels, a total of &amp;lt;math&amp;gt;2\times 2=4\,\!&amp;lt;/math&amp;gt; combinations exist (&amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt;). Thus, four runs are required for each replicate if a factorial experiment is to be carried out in this case. Assume that the response values for each of these four possible combinations are obtained as shown in the third table.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.3.png|thumb|center|400px|Two-factor factorial experiment.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.8.png|thumb|center|400px|Interaction plot for the data in the third table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Investigating Factor Effects===&lt;br /&gt;
&lt;br /&gt;
The effect of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response can be obtained by taking the difference between the average response when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is high and the average response when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is low. The change in the response due to a change in the level of a factor is called the main effect of the factor. The main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; as per the response values in the third table is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{45+55}{2}-\frac{25+35}{2} \\ &lt;br /&gt;
= &amp;amp; 50-30 \\ &lt;br /&gt;
= &amp;amp; 20  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from the lower level to the higher level, the response increases by 20 units. A plot of the response for the two levels of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at different levels of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is shown in the figure above. The plot shows that change in the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; leads to an increase in the response by 20 units regardless of the level of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. Therefore, no interaction exists in this case as indicated by the parallel lines on the plot. The main effect of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be obtained as: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
B= &amp;amp; Average\text{ }response\text{ }at\text{ }{{B}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{B}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{35+55}{2}-\frac{25+45}{2} \\ &lt;br /&gt;
= &amp;amp; 45-35 \\ &lt;br /&gt;
= &amp;amp; 10  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
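The two main-effect calculations above can be sketched in a few lines of Python. This is a minimal illustration using the four response values from the third table; the dictionary layout and the `main_effect` helper are illustrative choices, not DOE++ functionality.

```python
# Response values from the third table: keys are (level of A, level of B).
response = {
    ("low", "low"): 25, ("high", "low"): 45,
    ("low", "high"): 35, ("high", "high"): 55,
}

def main_effect(responses, factor):
    """Average response at the high level minus average response at the low level."""
    idx = 0 if factor == "A" else 1
    high = [v for k, v in responses.items() if k[idx] == "high"]
    low = [v for k, v in responses.items() if k[idx] == "low"]
    return sum(high) / len(high) - sum(low) / len(low)

effect_a = main_effect(response, "A")  # (45 + 55)/2 - (25 + 35)/2 = 20
effect_b = main_effect(response, "B")  # (35 + 55)/2 - (25 + 45)/2 = 10
```

The parallel lines in the interaction plot correspond to the fact that `effect_a` is the same whether it is computed at the low or the high level of B.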
&lt;br /&gt;
&lt;br /&gt;
===Investigating Interactions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now assume that the response values for each of the four treatment combinations were obtained as shown in the fourth table. The main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; in this case is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{40+10}{2}-\frac{20+30}{2} \\ &lt;br /&gt;
= &amp;amp; 25-25 \\ &lt;br /&gt;
= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.4.png|thumb|center|400px|Two-factor factorial experiment.]]&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
It appears that &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; does not have an effect on the response. However, a plot of the response of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at different levels of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; shows that the response does change with the levels of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; but the effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response is dependent on the level of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (see the figure below). Therefore, an interaction between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; exists in this case (as indicated by the non-parallel lines of the figure). The interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be calculated as follows: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.9.png|thumb|center|400px|Interaction plot for the data in the fourth table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
AB= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}\text{-}{{B}_{\text{high}}}\text{ }and\text{ }{{A}_{\text{low}}}\text{-}{{B}_{\text{low}}}- \\ &lt;br /&gt;
 &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}}\text{-}{{B}_{\text{high}}}\text{ }and\text{ }{{A}_{\text{high}}}\text{-}{{B}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{10+20}{2}-\frac{40+30}{2} \\ &lt;br /&gt;
= &amp;amp; -20  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that in this case, if a one-factor-at-a-time experiment were used to investigate the effect of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response, it would lead to incorrect conclusions. For example, if the response at factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was studied by holding &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; constant at its lower level, then the main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; would be obtained as &amp;lt;math&amp;gt;40-20=20\,\!&amp;lt;/math&amp;gt;, indicating that the response increases by 20 units when the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from low to high. On the other hand, if the response at factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was studied by holding &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; constant at its higher level, then the main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; would be obtained as &amp;lt;math&amp;gt;10-30=-20\,\!&amp;lt;/math&amp;gt;, indicating that the response decreases by 20 units when the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from low to high.&lt;br /&gt;
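The interaction calculation for the fourth table, and the contradictory one-factor-at-a-time comparisons, can be checked with a short sketch. The four values are the ones used in the equations above; the dictionary layout is an illustrative choice.

```python
# Response values from the fourth table: keys are (level of A, level of B).
response = {
    ("low", "low"): 20, ("high", "low"): 40,
    ("low", "high"): 30, ("high", "high"): 10,
}

# Main effect of A: average at A high minus average at A low.
effect_a = ((response[("high", "low")] + response[("high", "high")]) / 2
            - (response[("low", "low")] + response[("low", "high")]) / 2)    # 0

# Interaction AB: average over the (high, high) and (low, low) cells minus
# average over the (low, high) and (high, low) cells.
interaction_ab = ((response[("high", "high")] + response[("low", "low")]) / 2
                  - (response[("low", "high")] + response[("high", "low")]) / 2)  # -20

# One-factor-at-a-time comparisons give contradictory answers:
effect_a_at_b_low = response[("high", "low")] - response[("low", "low")]     # 20
effect_a_at_b_high = response[("high", "high")] - response[("low", "high")]  # -20
```

The zero main effect together with the nonzero interaction is exactly the situation where studying A at a single fixed level of B is misleading.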
&lt;br /&gt;
==Analysis of General Factorial Experiments==&lt;br /&gt;
&lt;br /&gt;
In DOE++, factorial experiments are referred to as &#039;&#039;factorial designs&#039;&#039;. The experiments explained in this section are referred to as &#039;&#039;general factorial designs&#039;&#039;. This is done to distinguish these experiments from the other factorial designs supported by DOE++ (see the figure below). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.10.png|thumb|center|518px|Factorial experiments available in DOE++.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The other designs (such as the two level full factorial designs that are explained in [[Two_Level_Factorial_Experiments| Two Level Factorial Experiments]]) are special cases of these experiments in which factors are limited to a specified number of levels. The ANOVA model for the analysis of factorial experiments is formulated as shown next. Assume a factorial experiment in which the effect of two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, on the response is being investigated. Let there be &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{n}_{b}}\,\!&amp;lt;/math&amp;gt; levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The ANOVA model for this experiment can be stated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\tau }_{i}}+{{\delta }_{j}}+{{(\tau \delta )}_{ij}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean effect&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;j=1,2,...,{{n}_{b}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt; represents the random error terms (which are assumed to be normally distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*and the subscript &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates (&amp;lt;math&amp;gt;k=1,2,...,m\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represent deviations from the overall mean, the following constraints exist: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{j=1}{\overset{{{n}_{b}}}{\mathop \sum }}\,{{\delta }_{j}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{j=1}{\overset{{{n}_{b}}}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Hypothesis Tests in General Factorial Experiments===&lt;br /&gt;
These tests are used to check whether each of the factors investigated in the experiment is significant or not. For the previous example, with two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and their interaction, &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;, the statements for the hypothesis tests can be formulated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \begin{matrix}&lt;br /&gt;
   1. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0\text{    (Main effect of }A\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\tau }_{i}}\ne 0\text{    for at least one }i \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   2. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\delta }_{1}}={{\delta }_{2}}=...={{\delta }_{{{n}_{b}}}}=0\text{    (Main effect of }B\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\delta }_{j}}\ne 0\text{    for at least one }j \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   3. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{12}}=...={{(\tau \delta )}_{{{n}_{a}}{{n}_{b}}}}=0\text{    (Interaction }AB\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{(\tau \delta )}_{ij}}\ne 0\text{    for at least one }ij  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistics for the three tests are as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::1)&amp;lt;math&amp;gt;{(F_{0})_{A}} = \frac{MS_{A}}{MS_{E}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{A}\,\!&amp;lt;/math&amp;gt; is the mean square due to factor &amp;lt;math&amp;gt;{A}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{MS_E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::2)&amp;lt;math&amp;gt;{(F_{0})_{B}} = \frac{MS_B}{MS_E}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{B}\,\!&amp;lt;/math&amp;gt; is the mean square due to factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;MS_{E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::3)&amp;lt;math&amp;gt;{(F_{0})_{AB}} = \frac{MS_{AB}}{MS_{E}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{AB}\,\!&amp;lt;/math&amp;gt; is the mean square due to interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;MS_{E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The tests are identical to the partial &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; test explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. The sums of squares for these tests (used to obtain the mean squares) are calculated by splitting the model sum of squares into the extra sum of squares due to each factor. The extra sum of squares calculated for each of the factors may be either partial or sequential. For the present example, if the extra sum of squares used is sequential, then the model sum of squares can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{TR}}=S{{S}_{A}}+S{{S}_{B}}+S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{B}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
The mean squares are obtained by dividing the sum of squares by the associated degrees of freedom. Once the mean squares are known the test statistics can be calculated. For example, the test statistic to test the significance of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (or the hypothesis &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;) can then be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{F}_{0}})}_{A}}= &amp;amp; \frac{M{{S}_{A}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{A}}/dof(S{{S}_{A}})}{S{{S}_{E}}/dof(S{{S}_{E}})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Similarly the test statistic to test significance of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be respectively obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{F}_{0}})}_{B}}= &amp;amp; \frac{M{{S}_{B}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{B}}/dof(S{{S}_{B}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
{{({{F}_{0}})}_{AB}}= &amp;amp; \frac{M{{S}_{AB}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{AB}}/dof(S{{S}_{AB}})}{S{{S}_{E}}/dof(S{{S}_{E}})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is recommended to conduct the test for interactions before conducting the test for the main effects. This is because, if an interaction is present, then the main effect of the factor depends on the level of the other factors and looking at the main effect is of little value. However, if the interaction is absent then the main effects become important.&lt;br /&gt;
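Given the sums of squares and their degrees of freedom, each of the test statistics above is just a ratio of mean squares. A minimal sketch follows; the numeric inputs are hypothetical illustration values, not results from the example below.

```python
def f_statistic(ss_factor, dof_factor, ss_error, dof_error):
    """F0 = MS_factor / MS_E, where each mean square is a sum of squares
    divided by its degrees of freedom."""
    ms_factor = ss_factor / dof_factor
    ms_error = ss_error / dof_error
    return ms_factor / ms_error

# Hypothetical values: SS_A = 8.0 on 2 dof, SS_E = 3.0 on 12 dof.
f0_a = f_statistic(8.0, 2, 3.0, 12)   # (8/2) / (3/12) = 16.0
```

The resulting statistic would then be compared against the F distribution with (`dof_factor`, `dof_error`) degrees of freedom to obtain a p-value.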
&lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
Consider an experiment to investigate the effect of speed and type of fuel additive used on the mileage of a sports utility vehicle. Three speeds and two types of fuel additives are investigated. Each of the treatment combinations is replicated three times. The mileage values observed are displayed in the fifth table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.5.png|thumb|center|400px|Mileage data for different speeds and fuel additive types.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The experimental design for the data in the fifth table is shown in the figure below. In the figure, the factor Speed is represented as factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and the factor Fuel Additive is represented as factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The experimenter would like to investigate if speed, fuel additive or the interaction between speed and fuel additive affects the mileage of the sports utility vehicle. In other words, the following hypotheses need to be tested:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \begin{matrix}&lt;br /&gt;
   1. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}={{\tau }_{3}}=0\text{   (No main effect of factor }A\text{, speed)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\tau }_{i}}\ne 0\text{    for at least one }i \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   2. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\delta }_{1}}={{\delta }_{2}}=0\text{    (No main effect of factor }B\text{, fuel additive)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\delta }_{j}}\ne 0\text{    for at least one }j \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   3. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{12}}=...={{(\tau \delta )}_{32}}=0\text{    (No interaction }AB\text{)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{(\tau \delta )}_{ij}}\ne 0\text{    for at least one }ij  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistics for the three tests are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::1.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{A}}=\frac{M{{S}_{A}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{A}}\,\!&amp;lt;/math&amp;gt; is the mean square for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::2.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{B}}=\frac{M{{S}_{B}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{B}}\,\!&amp;lt;/math&amp;gt; is the mean square for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::3.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{AB}}=\frac{M{{S}_{AB}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; is the mean square for interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.11.png|thumb|center|639px|Experimental design for the data in the fifth table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\tau }_{i}}+{{\delta }_{j}}+{{(\tau \delta )}_{ij}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed) with &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; =1, 2, 3; &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th treatment of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (fuel additive) with &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; =1, 2; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect. In order to calculate the test statistics, it is convenient to express the ANOVA model of the equation given above in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;. This can be done as explained next.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represent deviations from the overall mean, the following constraints exist.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or    }{{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}=0\,\!&amp;lt;/math&amp;gt;.) DOE++ displays only the independent effects because only these effects are important to the analysis. The independent effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1] and A[2] respectively because these are the effects associated with factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed).&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{j=1}{\overset{2}{\mathop \sum }}\,{{\delta }_{j}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or    }{{\delta }_{1}}+{{\delta }_{2}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only one of the &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; effects is independent. Assuming that &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is independent, &amp;lt;math&amp;gt;{{\delta }_{2}}=-{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effect as &amp;lt;math&amp;gt;{{H}_{0}}:{{\delta }_{1}}=0\,\!&amp;lt;/math&amp;gt;.) The independent effect &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is displayed as B:B in DOE++.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{and   }\underset{j=1}{\overset{2}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{31}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{12}}+{{(\tau \delta )}_{22}}+{{(\tau \delta )}_{32}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{and   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{12}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{22}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{31}}+{{(\tau \delta )}_{32}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last five equations given above represent four constraints, as only four of these five equations are independent. Therefore, only two out of the six &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are independent, the other four effects can be expressed in terms of these effects. (The null hypothesis to test the significance of interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{21}}=0\,\!&amp;lt;/math&amp;gt;.) The effects &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are displayed as A[1]B and A[2]B respectively in DOE++.&lt;br /&gt;
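The constraint-solving above can be checked numerically. The sketch below takes the two independent interaction effects and reconstructs the other four from the row and column constraints, then verifies that every row and column of effects sums to zero (variable names and input values are illustrative):

```python
def interaction_effects(td11, td21):
    """Express all six (tau*delta)_ij effects in terms of the two
    independent ones, (tau*delta)_11 and (tau*delta)_21."""
    td31 = -(td11 + td21)   # column j=1 must sum to zero
    td12 = -td11            # row i=1 must sum to zero
    td22 = -td21            # row i=2 must sum to zero
    td32 = -td31            # row i=3 must sum to zero
    return {(1, 1): td11, (2, 1): td21, (3, 1): td31,
            (1, 2): td12, (2, 2): td22, (3, 2): td32}

effects = interaction_effects(0.4, -0.1)
col_sums = [sum(effects[(i, j)] for i in (1, 2, 3)) for j in (1, 2)]  # [0, 0]
row_sums = [sum(effects[(i, j)] for j in (1, 2)) for i in (1, 2, 3)]  # [0, 0, 0]
```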
&lt;br /&gt;
The regression version of the ANOVA model can be obtained using indicator variables, similar to the case of the single factor experiment in [[ANOVA_for_Designed_Experiments#Fitting_ANOVA_Models|Fitting ANOVA Models]]. Since factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; has three levels, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required which need to be coded as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; has two levels and can be represented using one indicator variable, &amp;lt;math&amp;gt;{{x}_{3}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\delta }_{1}}: &amp;amp; {{x}_{3}}=1 \\ &lt;br /&gt;
\text{Treatment Effect }{{\delta }_{2}}: &amp;amp; {{x}_{3}}=-1  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; interaction will be represented by all possible terms resulting from the product of the indicator variables representing factors &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. There are two such terms here: &amp;lt;math&amp;gt;{{x}_{1}}{{x}_{3}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}{{x}_{3}}\,\!&amp;lt;/math&amp;gt;. The regression version of the ANOVA model can finally be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{\tau }_{1}}\cdot {{x}_{1}}+{{\tau }_{2}}\cdot {{x}_{2}}+{{\delta }_{1}}\cdot {{x}_{3}}+{{(\tau \delta )}_{11}}\cdot {{x}_{1}}{{x}_{3}}+{{(\tau \delta )}_{21}}\cdot {{x}_{2}}{{x}_{3}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In matrix notation this model can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{111}}  \\&lt;br /&gt;
   {{Y}_{211}}  \\&lt;br /&gt;
   {{Y}_{311}}  \\&lt;br /&gt;
   {{Y}_{121}}  \\&lt;br /&gt;
   {{Y}_{221}}  \\&lt;br /&gt;
   {{Y}_{321}}  \\&lt;br /&gt;
   {{Y}_{112}}  \\&lt;br /&gt;
   {{Y}_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{323}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; -1 &amp;amp; 0 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
   {{\delta }_{1}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{11}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{21}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{111}}  \\&lt;br /&gt;
   {{\epsilon }_{211}}  \\&lt;br /&gt;
   {{\epsilon }_{311}}  \\&lt;br /&gt;
   {{\epsilon }_{121}}  \\&lt;br /&gt;
   {{\epsilon }_{221}}  \\&lt;br /&gt;
   {{\epsilon }_{321}}  \\&lt;br /&gt;
   {{\epsilon }_{112}}  \\&lt;br /&gt;
   {{\epsilon }_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{323}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
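The effect-coded design matrix above can be generated programmatically. The sketch below builds one row per observation; the first rows reproduce the rows of the matrix shown above (the `design_row` helper is an illustration, not DOE++ output):

```python
def design_row(i, j):
    """Row of X for treatment (i, j): [1, x1, x2, x3, x1*x3, x2*x3]."""
    x1, x2 = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}[i]  # effect coding, factor A (3 levels)
    x3 = {1: 1, 2: -1}[j]                            # effect coding, factor B (2 levels)
    return [1, x1, x2, x3, x1 * x3, x2 * x3]

# 18 rows, ordered with the replicate index k outermost, then j, then i,
# matching the ordering Y111, Y211, Y311, Y121, ... used above.
X = [design_row(i, j) for k in (1, 2, 3) for j in (1, 2) for i in (1, 2, 3)]
```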
The vector &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; can be substituted with the response values from the fifth table to get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{111}}  \\&lt;br /&gt;
   {{Y}_{211}}  \\&lt;br /&gt;
   {{Y}_{311}}  \\&lt;br /&gt;
   {{Y}_{121}}  \\&lt;br /&gt;
   {{Y}_{221}}  \\&lt;br /&gt;
   {{Y}_{321}}  \\&lt;br /&gt;
   {{Y}_{112}}  \\&lt;br /&gt;
   {{Y}_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{323}}  \\&lt;br /&gt;
\end{matrix} \right]=\left[ \begin{matrix}&lt;br /&gt;
   17.3  \\&lt;br /&gt;
   18.9  \\&lt;br /&gt;
   17.1  \\&lt;br /&gt;
   18.7  \\&lt;br /&gt;
   19.1  \\&lt;br /&gt;
   18.8  \\&lt;br /&gt;
   17.8  \\&lt;br /&gt;
   18.2  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18.3  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, the sum of squares for the ANOVA model and the extra sum of squares for each of the factors can be calculated. These are used to calculate the mean squares that are used to obtain the test statistics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of Sum of Squares for the Model====&lt;br /&gt;
&lt;br /&gt;
The model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, for the regression version of the ANOVA model can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 9.7311  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; is the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones. Since five effect terms (&amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;) are used in the model, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; is five (&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=5\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 10.7178  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are 18 observed response values, the number of degrees of freedom associated with the total sum of squares is 17 (&amp;lt;math&amp;gt;dof(S{{S}_{T}})=17\,\!&amp;lt;/math&amp;gt;). The error sum of squares can now be obtained:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 10.7178-9.7311 \\ &lt;br /&gt;
= &amp;amp; 0.9867  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are three replicates of the full factorial experiment, all of the error sum of squares is pure error. (This can also be seen from the preceding figure, where each treatment combination of the full factorial design is repeated three times.) The number of degrees of freedom associated with the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 17-5 \\ &lt;br /&gt;
= &amp;amp; 12  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
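The sum-of-squares and degrees-of-freedom bookkeeping above can be double-checked with a few lines of Python (a minimal sketch; the input numbers are the values reported in this section, originally obtained from the hat-matrix expressions):

```python
# Check the error sum of squares and its degrees of freedom
# from the model and total sums of squares reported above.
ss_tr = 9.7311        # model sum of squares, y'[H - J/18]y
ss_t = 10.7178        # total sum of squares, y'[I - J/18]y

ss_e = ss_t - ss_tr   # error sum of squares
dof_tr = 5            # five effect terms in the model
dof_t = 18 - 1        # 18 observations -> 17 total dof
dof_e = dof_t - dof_tr

print(round(ss_e, 4), dof_e)  # -> 0.9867 12
```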
&lt;br /&gt;
&lt;br /&gt;
====Calculation of Extra Sum of Squares for the Factors====&lt;br /&gt;
&lt;br /&gt;
The sequential sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be calculated as: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}})-S{{S}_{TR}}(\mu ) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}={{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}{{(X_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}^{\prime }{{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}})}^{-1}}X_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}^{\prime }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the matrix containing only the first three columns of the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix. Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-0 \\ &lt;br /&gt;
= &amp;amp; 4.5811-0 \\ &lt;br /&gt;
= &amp;amp; 4.5811  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent effects (&amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;) for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, the degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; are two (&amp;lt;math&amp;gt;dof(S{{S}_{A}})=2\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Similarly, the sum of squares for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{B}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}})-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}}) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}}}-(\frac{1}{18})J]y-{{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 9.4900-4.5811 \\ &lt;br /&gt;
= &amp;amp; 4.9089  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there is one independent effect, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{B}}\,\!&amp;lt;/math&amp;gt; is one (&amp;lt;math&amp;gt;dof(S{{S}_{B}})=1\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{AB}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}},{{(\tau \delta )}_{11}},{{(\tau \delta )}_{21}})-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}) \\ &lt;br /&gt;
= &amp;amp; S{{S}_{TR}}-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}) \\ &lt;br /&gt;
= &amp;amp; 9.7311-9.4900 \\ &lt;br /&gt;
= &amp;amp; 0.2411  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent interaction effects, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; is two (&amp;lt;math&amp;gt;dof(S{{S}_{AB}})=2\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
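The three sequential (extra) sums of squares can be reproduced from the model sums of squares quoted above (a quick Python sketch; the inputs are the intermediate values reported in this section):

```python
# Sequential sums of squares obtained by adding terms to the
# model one factor at a time, using the values reported above.
ss_model_a = 4.5811        # SS_TR(mu, tau1, tau2)
ss_model_a_b = 9.4900      # SS_TR(mu, tau1, tau2, delta1)
ss_model_full = 9.7311     # SS_TR with the interaction terms added

ss_a = ss_model_a - 0.0                    # factor A (2 dof)
ss_b = ss_model_a_b - ss_model_a           # factor B (1 dof)
ss_ab = ss_model_full - ss_model_a_b       # interaction AB (2 dof)

print(round(ss_a, 4), round(ss_b, 4), round(ss_ab, 4))
```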
&lt;br /&gt;
====Calculation of the Test Statistics====&lt;br /&gt;
&lt;br /&gt;
Knowing the sum of squares, the test statistic for each of the factors can be calculated. Analyzing the interaction first, the test statistic for interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{AB}}= &amp;amp; \frac{M{{S}_{AB}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{0.2411/2}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 1.47  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic, based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 12 degrees of freedom in the denominator, is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{AB}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.7307 \\ &lt;br /&gt;
= &amp;amp; 0.2693  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;gt; 0.1, we fail to reject &amp;lt;math&amp;gt;{{H}_{0}}:{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt; and conclude that the interaction between speed and fuel additive does not significantly affect the mileage of the sports utility vehicle. DOE++ displays this result in the ANOVA table, as shown in the following figure. In the absence of the interaction, the analysis of main effects becomes important.&lt;br /&gt;
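The interaction test can be reproduced without statistical tables: for an F distribution with 2 numerator degrees of freedom, the tail probability has a simple closed form, P(F &gt; x) = (1 + 2x/d2)^(-d2/2). A short Python sketch using the sums of squares reported above:

```python
# F test for the AB interaction.
ms_ab = 0.2411 / 2   # interaction mean square (2 dof)
ms_e = 0.9867 / 12   # error mean square (12 dof)
f0_ab = ms_ab / ms_e # test statistic, ~1.47

# Closed-form tail probability for an F(2, d2) distribution.
d2 = 12
p_ab = (1 + 2 * f0_ab / d2) ** (-d2 / 2)  # ~0.2693

# p > 0.1, so we fail to reject the null hypothesis of
# no interaction at the 0.1 significance level.
print(round(f0_ab, 2), round(p_ab, 3))
```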
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistic for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{A}}= &amp;amp; \frac{M{{S}_{A}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{A}}/dof(S{{S}_{A}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{4.5811/2}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 27.86  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 12 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{A}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.99997 \\ &lt;br /&gt;
= &amp;amp; 0.00003  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (or speed) has a significant effect on the mileage.&lt;br /&gt;
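The same pattern applies to the main effect of factor A, which again has 2 numerator degrees of freedom, so the same closed-form F(2, d2) tail probability can be used (sketch with the values reported above):

```python
# F test for factor A (speed).
ms_a = 4.5811 / 2    # factor A mean square (2 dof)
ms_e = 0.9867 / 12   # error mean square (12 dof)
f0_a = ms_a / ms_e   # test statistic, ~27.86

# Closed-form tail probability for an F(2, d2) distribution.
d2 = 12
p_a = (1 + 2 * f0_a / d2) ** (-d2 / 2)

# p is on the order of 3e-5, far below 0.1, so the null
# hypothesis of no speed effect is rejected.
print(round(f0_a, 2), p_a)
```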
&lt;br /&gt;
The test statistic for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{B}}= &amp;amp; \frac{M{{S}_{B}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{4.9089/1}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 59.7  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 1 degree of freedom in the numerator and 12 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{B}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.999995 \\ &lt;br /&gt;
= &amp;amp; 0.000005  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}:{{\delta }_{j}}=0\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or fuel additive type) has a significant effect on the mileage.&lt;br /&gt;
Therefore, it can be concluded that speed and fuel additive type affect the mileage of the vehicle significantly. The results are displayed in the ANOVA table of the following figure. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.12.png|thumb|center|645px|Analysis results for the experiment in the fifth table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of Effect Coefficients====&lt;br /&gt;
&lt;br /&gt;
Results for the effect coefficients of the regression version of the ANOVA model are displayed in the Regression Information table in the following figure. The calculations behind this table are discussed next. The effect coefficients can be calculated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\hat{\beta }= &amp;amp; {{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y \\ &lt;br /&gt;
= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   18.2889  \\&lt;br /&gt;
   -0.2056  \\&lt;br /&gt;
   0.6944  \\&lt;br /&gt;
   -0.5222  \\&lt;br /&gt;
   0.0056  \\&lt;br /&gt;
   0.1389  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\hat{\mu }=18.2889\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}=-0.2056\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\hat{\tau }}_{2}}=0.6944\,\!&amp;lt;/math&amp;gt;, etc. As mentioned previously, these coefficients are displayed as Intercept, A[1] and A[2], respectively (the labels depend on the names of the factors used in the experimental design). The standard error for each of these estimates is obtained using the diagonal elements of the variance-covariance matrix &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
C= &amp;amp; {{{\hat{\sigma }}}^{2}}{{({{X}^{\prime }}X)}^{-1}} \\ &lt;br /&gt;
= &amp;amp; M{{S}_{E}}\cdot {{({{X}^{\prime }}X)}^{-1}} \\ &lt;br /&gt;
= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   0.0046 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0.0091 &amp;amp; -0.0046 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; -0.0046 &amp;amp; 0.0091 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0.0046 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0.0091 &amp;amp; -0.0046  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; -0.0046 &amp;amp; 0.0091  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, the standard error for &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
se({{{\hat{\tau }}}_{1}})= &amp;amp; \sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; \sqrt{0.0091} \\ &lt;br /&gt;
= &amp;amp; 0.0956  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\hat{\tau }}}_{1}}}{se({{{\hat{\tau }}}_{1}})} \\ &lt;br /&gt;
= &amp;amp; \frac{-0.2056}{0.0956} \\ &lt;br /&gt;
= &amp;amp; -2.1506  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic is obtained from the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 12 degrees of freedom.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
Confidence intervals on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; can also be calculated. The 90% limits on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\hat{\tau }}}_{1}}\pm {{t}_{\alpha /2,n-(k+1)}}\sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; {{{\hat{\tau }}}_{1}}\pm {{t}_{0.05,12}}\sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; -0.2056\pm 0.1704  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Thus, the 90% limits on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-0.3760\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-0.0352\,\!&amp;lt;/math&amp;gt; respectively. Results for other coefficients are obtained in a similar manner.&lt;br /&gt;
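These limits follow from simple arithmetic (a Python sketch; the critical value &amp;lt;math&amp;gt;{{t}_{0.05,12}}\approx 1.782\,\!&amp;lt;/math&amp;gt; is taken as given from tables, and the standard error is the value computed above):

```python
# t statistic and 90% confidence limits on tau_1.
tau1_hat = -0.2056
se_tau1 = 0.0956       # sqrt(C22), from the variance-covariance matrix
t_crit = 1.782         # t_{0.05,12}, from tables

t0 = tau1_hat / se_tau1            # ~ -2.1506
margin = t_crit * se_tau1          # ~ 0.1704
lower = tau1_hat - margin          # ~ -0.3760
upper = tau1_hat + margin          # ~ -0.0352
print(round(t0, 4), round(lower, 4), round(upper, 4))
```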
&lt;br /&gt;
===Least Squares Means===&lt;br /&gt;
The estimated mean response corresponding to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of any factor is obtained using the adjusted estimated mean which is also called the least squares mean. For example, the mean response corresponding to the first level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\mu +{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;. An estimate of this is &amp;lt;math&amp;gt;\hat{\mu }+{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; or (&amp;lt;math&amp;gt;18.2889+(-0.2056)=18.0833\,\!&amp;lt;/math&amp;gt;). Similarly, the estimated response at the third level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\hat{\mu }+{{\hat{\tau }}_{3}}\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\hat{\mu }+(-{{\hat{\tau }}_{1}}-{{\hat{\tau }}_{2}})\,\!&amp;lt;/math&amp;gt; or (&amp;lt;math&amp;gt;18.2889+(0.2056-0.6944)=17.8001\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
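The least squares means above are plain sums of the estimated coefficients, with the third-level effect recovered from the zero-sum constraint (a quick check in Python):

```python
# Least squares means for levels 1 and 3 of factor A.
mu_hat = 18.2889
tau1_hat = -0.2056
tau2_hat = 0.6944

lsm_a1 = mu_hat + tau1_hat            # level 1: mu + tau1
tau3_hat = -(tau1_hat + tau2_hat)     # from tau1 + tau2 + tau3 = 0
lsm_a3 = mu_hat + tau3_hat            # level 3: mu + tau3

print(round(lsm_a1, 4), round(lsm_a3, 4))  # -> 18.0833 17.8001
```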
&lt;br /&gt;
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
As in the case of single factor experiments, plots of residuals can also be used to check for model adequacy in factorial experiments. Box-Cox transformations are also available in DOE++ for factorial experiments.&lt;br /&gt;
&lt;br /&gt;
==Factorial Experiments with a Single Replicate==&lt;br /&gt;
&lt;br /&gt;
If a factorial experiment is run only for a single replicate then it is not possible to test hypotheses about the main effects and interactions as the error sum of squares cannot be obtained.  This is because the number of observations in a single replicate equals the number of terms in the ANOVA model. Hence the model fits the data perfectly and no degrees of freedom are available to obtain the error sum of squares. For example, if the two factor experiment to study the effect of speed and fuel additive type on mileage was run only as a single replicate there would be only six response values. The regression version of the ANOVA model has six terms and therefore will fit the six response values perfectly. The error sum of squares, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt;, for this case will be equal to zero. In some single replicate factorial experiments it is possible to assume that the interaction effects are negligible. In this case, the interaction mean square can be used as error mean square, &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt;, to test hypotheses about the main effects. However, such assumptions are not applicable in all cases and should be used carefully.&lt;br /&gt;
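The perfect-fit problem with a single replicate can be demonstrated numerically (a sketch assuming NumPy is available; the six response values are purely illustrative). With six observations and six model terms, the effect-coded design matrix is square and full rank, so the fit is exact and the error sum of squares is zero:

```python
import numpy as np

# Effect-coded design matrix for a single replicate of the 3x2
# experiment: columns are mu, tau1, tau2, delta1, (td)11, (td)21.
a_codes = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
b_codes = {1: 1, 2: -1}
rows = []
for j in (1, 2, 3):
    for k in (1, 2):
        x3, x4 = a_codes[j]
        x5 = b_codes[k]
        rows.append([1, x3, x4, x5, x3 * x5, x4 * x5])
X = np.array(rows, dtype=float)

y = np.array([17.3, 18.9, 17.1, 18.7, 19.1, 18.8])  # illustrative data

beta = np.linalg.solve(X, y)        # X is square and invertible
sse = np.sum((y - X @ beta) ** 2)   # zero up to rounding error
print(sse)
```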
&lt;br /&gt;
&lt;br /&gt;
==Blocking==&lt;br /&gt;
&lt;br /&gt;
Many times a factorial experiment requires so many runs that not all of them can be completed under homogeneous conditions. This may lead to inclusion of the effects of &#039;&#039;nuisance factors&#039;&#039; into the investigation. Nuisance factors are factors that have an effect on the response but are not of primary interest to the investigator. For example, two replicates of a two factor factorial experiment require eight runs. If four runs require the duration of one day to be completed, then the total experiment will require two days to be completed. The difference in the conditions on the two days may introduce effects on the response that are not the result of the two factors being investigated. Therefore, the day is a nuisance factor for this experiment.&lt;br /&gt;
Nuisance factors can be accounted for using &#039;&#039;blocking&#039;&#039;. In blocking, experimental runs are separated based on levels of the nuisance factor. For the case of the two factor factorial experiment (where the day is a nuisance factor), separation can be made into two groups or &#039;&#039;blocks&#039;&#039;: runs that are carried out on the first day belong to block 1, and runs that are carried out on the second day belong to block 2. Thus, within each block conditions are the same with respect to the nuisance factor. As a result, each block investigates the effects of the factors of interest, while the difference in the blocks measures the effect of the nuisance factor. &lt;br /&gt;
For the example of the two factor factorial experiment, a possible assignment of runs to the blocks could be as follows: one replicate of the experiment is assigned to block 1 and the second replicate is assigned to block 2 (now each block contains all possible treatment combinations). Within each block, runs are subjected to randomization (i.e., randomization is now restricted to the runs within a block). Such a design, where each block contains one complete replicate and the treatments within a block are subjected to randomization, is called a &#039;&#039;randomized complete block design&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
In summary, blocking should always be used to account for the effects of nuisance factors if it is not possible to hold the nuisance factor at a constant level through all of the experimental runs. Randomization should be used within each block to counter the effects of any unknown variability that may still be present.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
Consider the experiment of the fifth table where the mileage of a sports utility vehicle was investigated for the effects of speed and fuel additive type. Now assume that the three replicates for this experiment were carried out on three different vehicles. To ensure that the variation from one vehicle to another does not have an effect on the analysis, each vehicle is considered as one block. See the experiment design in the following figure.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.13.png|thumb|center|643px|Randomized complete block design for the experiment in the fifth table using three blocks.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purpose of the analysis, the block is treated as a main effect, except that interactions between the block and the other main effects are assumed not to exist. Therefore, this experiment has one block main effect (with three levels: block 1, block 2 and block 3), two main effects (speed, with three levels, and fuel additive type, with two levels) and one interaction effect (the speed-fuel additive interaction). Let &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; represent the block effects. The hypothesis test on the block main effect checks whether there is significant variation from one vehicle to another. The statements for the hypothesis test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\zeta }_{1}}={{\zeta }_{2}}={{\zeta }_{3}}=0\text{   (no main effect of block)} \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\zeta }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The test statistic for this test is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{Block}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{Block}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the block main effect and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. The hypothesis statements and test statistics to test the significance of factors &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed), &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (fuel additive) and the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; (speed-fuel additive interaction) can be obtained as explained in the [[ANOVA_for_Designed_Experiments#Example_2| example]]. The ANOVA model for this example can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\zeta }_{i}}+{{\tau }_{j}}+{{\delta }_{k}}+{{(\tau \delta )}_{jk}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean effect&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of the block (&amp;lt;math&amp;gt;i=1,2,3\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;j=1,2,3\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;k=1,2\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*and &amp;lt;math&amp;gt;{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt; represents the random error terms (which are assumed to be normally distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to calculate the test statistics, it is convenient to express the ANOVA model of the equation given above in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;. This can be done as explained next.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; are defined as deviations from the overall mean, the following constraints exist.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{i=1}{\overset{3}{\mathop \sum }}\,{{\zeta }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\zeta }_{1}}+{{\zeta }_{2}}+{{\zeta }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\zeta }_{3}}=-({{\zeta }_{1}}+{{\zeta }_{2}})\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of the blocks can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{\zeta }_{1}}={{\zeta }_{2}}=0\,\!&amp;lt;/math&amp;gt;.) In DOE++, the independent block effects, &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as Block[1] and Block[2], respectively.&lt;br /&gt;
&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{j=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{j}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;. The independent effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1] and A[2], respectively.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{k=1}{\overset{2}{\mathop \sum }}\,{{\delta }_{k}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\delta }_{1}}+{{\delta }_{2}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only one of the &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; effects is independent. Assuming that &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is independent, &amp;lt;math&amp;gt;{{\delta }_{2}}=-{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;. The independent effect, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, is displayed as B:B.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{j=1}{\overset{3}{\mathop \sum }}\,{{(\tau \delta )}_{jk}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{and   }\underset{k=1}{\overset{2}{\mathop \sum }}\,{{(\tau \delta )}_{jk}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{31}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{12}}+{{(\tau \delta )}_{22}}+{{(\tau \delta )}_{32}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{and   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{12}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{21}}+{{(\tau \delta )}_{22}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{31}}+{{(\tau \delta )}_{32}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last five equations given above represent four constraints as only four of the five equations are independent. Therefore, only two out of the six &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are independent, we can express the other four effects in terms of these effects. The independent effects, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1]B and A[2]B, respectively.&lt;br /&gt;
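The dependent interaction effects can be recovered mechanically from the two independent ones using the zero-sum constraints (a Python sketch; the numeric values for the two independent effects are illustrative, taken from the earlier coefficient estimates):

```python
# Recover the four dependent interaction effects from the two
# independent ones using the zero-sum constraints.
td11, td21 = 0.0056, 0.1389   # independent effects (illustrative)

td12 = -td11                  # row j=1 sums to zero over k
td22 = -td21                  # row j=2 sums to zero over k
td31 = -(td11 + td21)         # column k=1 sums to zero over j
td32 = td11 + td21            # row j=3 sums to zero over k

# Every row and column of the effects table now sums to zero.
col1 = td11 + td21 + td31
col2 = td12 + td22 + td32
print(col1, col2)  # both 0
```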
&lt;br /&gt;
The regression version of the ANOVA model can be obtained using indicator variables. Since the block has three levels, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required, which need to be coded as shown next: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Block 1}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0\text{ } \\ &lt;br /&gt;
 &amp;amp; \text{Block 2}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{         } \\ &lt;br /&gt;
 &amp;amp; \text{Block 3}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{   }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; has three levels and two indicator variables, &amp;lt;math&amp;gt;{{x}_{3}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{4}}\,\!&amp;lt;/math&amp;gt;, are required:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{3}}=1,\text{   }{{x}_{4}}=0 \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{3}}=0,\text{   }{{x}_{4}}=1\text{           } \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{3}}=-1,\text{   }{{x}_{4}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; has two levels and can be represented using one indicator variable, &amp;lt;math&amp;gt;{{x}_{5}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Treatment Effect }{{\delta }_{1}}: &amp;amp; {{x}_{5}}=1 \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\delta }_{2}}: &amp;amp; {{x}_{5}}=-1  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; interaction will be represented by &amp;lt;math&amp;gt;{{x}_{3}}{{x}_{5}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{4}}{{x}_{5}}\,\!&amp;lt;/math&amp;gt;. The regression version of the ANOVA model can finally be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{\zeta }_{1}}\cdot {{x}_{1}}+{{\zeta }_{2}}\cdot {{x}_{2}}+{{\tau }_{1}}\cdot {{x}_{3}}+{{\tau }_{2}}\cdot {{x}_{4}}+{{\delta }_{1}}\cdot {{x}_{5}}+{{(\tau \delta )}_{11}}\cdot {{x}_{3}}{{x}_{5}}+{{(\tau \delta )}_{21}}\cdot {{x}_{4}}{{x}_{5}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
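The indicator-variable codings above can be packaged into a small helper that builds one row of the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix for any (block, A level, B level) combination (a Python sketch; the function name `x_row` is chosen here for illustration):

```python
# Build one row of the X matrix from the effect codings:
# [1, x1, x2, x3, x4, x5, x3*x5, x4*x5]
def x_row(block, a, b):
    block_codes = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}  # x1, x2
    a_codes = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}      # x3, x4
    b_codes = {1: 1, 2: -1}                            # x5
    x1, x2 = block_codes[block]
    x3, x4 = a_codes[a]
    x5 = b_codes[b]
    return [1, x1, x2, x3, x4, x5, x3 * x5, x4 * x5]

# First run of the design: block 1, level 1 of A, level 1 of B.
print(x_row(1, 1, 1))  # -> [1, 1, 0, 1, 0, 1, 1, 0]
```

This reproduces the rows of the design matrix shown next; for example, block 1 with the third level of A and the first level of B gives the third row, [1, 1, 0, -1, -1, 1, -1, -1].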
&lt;br /&gt;
&lt;br /&gt;
In matrix notation this model can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:or:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   17.3  \\&lt;br /&gt;
   18.9  \\&lt;br /&gt;
   17.1  \\&lt;br /&gt;
   18.7  \\&lt;br /&gt;
   19.1  \\&lt;br /&gt;
   18.8  \\&lt;br /&gt;
   17.8  \\&lt;br /&gt;
   18.2  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18.3  \\&lt;br /&gt;
\end{matrix} \right]=\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; -1 &amp;amp; 0 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\zeta }_{1}}  \\&lt;br /&gt;
   {{\zeta }_{2}}  \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
   {{\delta }_{1}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{11}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{21}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{111}}  \\&lt;br /&gt;
   {{\epsilon }_{121}}  \\&lt;br /&gt;
   {{\epsilon }_{131}}  \\&lt;br /&gt;
   {{\epsilon }_{112}}  \\&lt;br /&gt;
   {{\epsilon }_{122}}  \\&lt;br /&gt;
   {{\epsilon }_{132}}  \\&lt;br /&gt;
   {{\epsilon }_{211}}  \\&lt;br /&gt;
   {{\epsilon }_{221}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{332}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, the sum of squares for the ANOVA model and the extra sum of squares for each of the factors can be calculated. These are used to calculate the mean squares that are used to obtain the test statistics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Sum of Squares for the Model====&lt;br /&gt;
&lt;br /&gt;
The model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, for the ANOVA model of this example can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; {{y}^{\prime }}[H-(\frac{1}{18})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 9.9256  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since seven effect terms (&amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;) are used in the model, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; is seven (&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=7\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The total sum of squares can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; {{y}^{\prime }}[I-(\frac{1}{18})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 10.7178  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are 18 observed response values, the number of degrees of freedom associated with the total sum of squares is 17 (&amp;lt;math&amp;gt;dof(S{{S}_{T}})=17\,\!&amp;lt;/math&amp;gt;). The error sum of squares can now be obtained:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 10.7178-9.9256 \\ &lt;br /&gt;
= &amp;amp; 0.7922  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 17-7 \\ &lt;br /&gt;
= &amp;amp; 10  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are no true replicates of the treatments (as can be seen from the design in the previous figure, where each treatment is run just once), all of the error sum of squares is the sum of squares due to lack of fit. The lack of fit arises because the model used is not a full model, since it assumes that there are no interactions between blocks and the other effects.&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Extra Sum of Squares for the Factors====&lt;br /&gt;
&lt;br /&gt;
The sequential sum of squares for the blocks can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{Block}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}})-S{{S}_{TR}}(\mu ) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y-0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones, &amp;lt;math&amp;gt;{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the hat matrix, which is calculated using &amp;lt;math&amp;gt;{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}={{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}{{(X_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}^{\prime }{{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}})}^{-1}}X_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}^{\prime }\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the matrix containing only the first three columns of the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix. Thus&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{Block}}= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y-0 \\ &lt;br /&gt;
= &amp;amp; 0.1944-0 \\ &lt;br /&gt;
= &amp;amp; 0.1944  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent block effects, &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{Blocks}}\,\!&amp;lt;/math&amp;gt; is two (&amp;lt;math&amp;gt;dof(S{{S}_{Blocks}})=2\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Similarly, the sequential sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}},{{\tau }_{1}},{{\tau }_{2}})-S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}}) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}},{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-{{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 4.7756-0.1944 \\ &lt;br /&gt;
= &amp;amp; 4.5812  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The sequential sums of squares for the other effects are obtained as &amp;lt;math&amp;gt;S{{S}_{B}}=4.9089\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{AB}}=0.2411\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Test Statistics====&lt;br /&gt;
&lt;br /&gt;
Knowing the sum of squares, the test statistics for each of the factors can be calculated. For example, the test statistic for the main effect of the blocks is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{Block}}= &amp;amp; \frac{M{{S}_{Block}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{Block}}/dof(S{{S}_{Blocks}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{0.1944/2}{0.7922/10} \\ &lt;br /&gt;
= &amp;amp; 1.227  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 10 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{Block}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.6663 \\ &lt;br /&gt;
= &amp;amp; 0.3337  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
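For the special case of 2 numerator degrees of freedom, the F survival probability has a known closed form, (1 + 2f/m) raised to the power -m/2, which makes this p value easy to check without statistical tables. A minimal Python sketch using the sums of squares computed above (the closed form is a standard result, not from the source):

```python
import math

# SS_Block = 0.1944 with 2 dof; SS_E = 0.7922 with 10 dof (from the text)
ms_block = 0.1944 / 2       # MS_Block
ms_e = 0.7922 / 10          # MS_E
f0 = ms_block / ms_e        # test statistic, approximately 1.227

# Survival function of F(2, m) in closed form: (1 + 2*f/m) ** (-m/2)
m = 10
p_value = (1.0 + 2.0 * f0 / m) ** (-m / 2.0)
# p_value is about 0.334, matching the 0.3337 in the text up to rounding
print(round(f0, 3), round(p_value, 4))
```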
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;gt; 0.1, we fail to reject &amp;lt;math&amp;gt;{{H}_{0}}:{{\zeta }_{i}}=0\,\!&amp;lt;/math&amp;gt; and conclude that there is no significant variation in the mileage from one vehicle to the other. Statistics to test the significance of other factors can be calculated in a similar manner. The complete analysis results obtained from DOE++ for this experiment are presented in the following figure.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.14.png|thumb|center|644px|Analysis results for the experiment in the [[ANOVA_for_Designed_Experiments#Example_3| example]].]]&lt;br /&gt;
&lt;br /&gt;
==Use of Regression to Calculate Sum of Squares==&lt;br /&gt;
&lt;br /&gt;
This section explains why DOE++ uses regression for all calculations related to the sum of squares. A number of textbooks present the method of direct summation to calculate the sum of squares, but this method is only applicable to balanced designs and may give incorrect results for unbalanced designs. For example, the sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; in a balanced factorial experiment with two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, is given as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{A}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,{{n}_{b}}n{{({{{\bar{y}}}_{i..}}-{{{\bar{y}}}_{...}})}^{2}} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{b}}n}-\frac{y_{...}^{2}}{{{n}_{a}}{{n}_{b}}n}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{n}_{b}}\,\!&amp;lt;/math&amp;gt; represents the levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; represents the number of samples for each combination of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The term &amp;lt;math&amp;gt;{{\bar{y}}_{i..}}\,\!&amp;lt;/math&amp;gt; is the mean value for the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{y}_{i..}}\,\!&amp;lt;/math&amp;gt; is the sum of all observations at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{y}_{...}}\,\!&amp;lt;/math&amp;gt; is the sum of all observations.&lt;br /&gt;
&lt;br /&gt;
The analogous term to calculate &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; in the case of an unbalanced design is given as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{A}}=\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\frac{y_{...}^{2}}{{{n}_{..}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{n}_{i.}}\,\!&amp;lt;/math&amp;gt; is the number of observations at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{n}_{..}}\,\!&amp;lt;/math&amp;gt; is the total number of observations. Similarly, to calculate the sum of squares for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;, the formulas are given as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{B}}= &amp;amp; \underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}-\frac{y_{...}^{2}}{{{n}_{..}}} \\ &lt;br /&gt;
 &amp;amp; S{{S}_{AB}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{ij.}^{2}}{{{n}_{ij}}}-\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}+\frac{y_{...}^{2}}{{{n}_{..}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Applying these relations to the unbalanced data of the last table, the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{AB}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{ij.}^{2}}{{{n}_{ij}}}-\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}+\frac{y_{...}^{2}}{{{n}_{..}}} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; \left( {{6}^{2}}+{{4}^{2}}+\frac{{{(42+6)}^{2}}}{2}+{{12}^{2}} \right)-\left( \frac{{{10}^{2}}}{2}+\frac{{{60}^{2}}}{3} \right) \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; -\left( \frac{{{54}^{2}}}{3}+\frac{{{16}^{2}}}{2} \right)+\frac{{{70}^{2}}}{5} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; -22  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
which is obviously incorrect, since a sum of squares cannot be negative. For a detailed discussion on this, refer to [[DOE References|Searle (1997, 1971)]].&lt;br /&gt;
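The failure of direct summation is easy to reproduce. A minimal pure-Python sketch using the cell observations of the unbalanced example (cells A1B1 = {6}, A1B2 = {4}, A2B1 = {6, 42}, A2B2 = {12}):

```python
# Unbalanced two-factor data, keyed by (level of A, level of B)
cells = {(1, 1): [6], (1, 2): [4], (2, 1): [6, 42], (2, 2): [12]}

total = sum(sum(v) for v in cells.values())      # y... = 70
n_total = sum(len(v) for v in cells.values())    # n.. = 5

# Marginal totals and counts for factor A (rows) and factor B (columns)
row_sum = {i: sum(sum(v) for (a, b), v in cells.items() if a == i) for i in (1, 2)}
row_n = {i: sum(len(v) for (a, b), v in cells.items() if a == i) for i in (1, 2)}
col_sum = {j: sum(sum(v) for (a, b), v in cells.items() if b == j) for j in (1, 2)}
col_n = {j: sum(len(v) for (a, b), v in cells.items() if b == j) for j in (1, 2)}

# Direct-summation formula for SS_AB applied to unbalanced data
ss_ab = (sum(sum(v) ** 2 / len(v) for v in cells.values())
         - sum(row_sum[i] ** 2 / row_n[i] for i in (1, 2))
         - sum(col_sum[j] ** 2 / col_n[j] for j in (1, 2))
         + total ** 2 / n_total)
print(ss_ab)  # -22.0: negative, so the direct method fails here
```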
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.6.png|thumb|center|400px|Example of an unbalanced design.]]&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
The correct sum of squares can be calculated as shown next. The &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrices for the design of the last table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   4  \\&lt;br /&gt;
   6  \\&lt;br /&gt;
   12  \\&lt;br /&gt;
   42  \\&lt;br /&gt;
\end{matrix} \right]\text{   and   }X=\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{AB}}={{y}^{\prime }}[H-(1/5)J]y-{{y}^{\prime }}[{{H}_{\tilde{\ }AB}}-(1/5)J]y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; is the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones. The matrix &amp;lt;math&amp;gt;{{H}_{\tilde{\ }AB}}\,\!&amp;lt;/math&amp;gt; can be calculated using &amp;lt;math&amp;gt;{{H}_{\tilde{\ }AB}}={{X}_{\tilde{\ }AB}}{{(X_{\tilde{\ }AB}^{\prime }{{X}_{\tilde{\ }AB}})}^{-1}}X_{\tilde{\ }AB}^{\prime }\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;{{X}_{\tilde{\ }AB}}\,\!&amp;lt;/math&amp;gt; is the design matrix, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, excluding the last column that represents the interaction effect &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;. Thus, the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{AB}}= &amp;amp; {{y}^{\prime }}[H-(1/5)J]y-{{y}^{\prime }}[{{H}_{\tilde{\ }AB}}-(1/5)J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 368-339.4286 \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 28.5714  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
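The hat-matrix calculation can be verified numerically. A minimal sketch using the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrices given above (numpy is assumed to be available for the matrix algebra):

```python
import numpy as np

# y and X for the unbalanced design (columns: mean, A, B, AB)
y = np.array([6.0, 4.0, 6.0, 12.0, 42.0])
X = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1],
              [1, -1,  1, -1]], dtype=float)

def hat(M):
    # Hat matrix: H = M (M'M)^-1 M'
    return M @ np.linalg.inv(M.T @ M) @ M.T

n = len(y)
J = np.ones((n, n))                          # matrix of ones
full = y @ (hat(X) - J / n) @ y              # y'[H - (1/5)J]y
reduced = y @ (hat(X[:, :3]) - J / n) @ y    # model without the AB column
ss_ab = full - reduced
print(round(full, 4), round(reduced, 4), round(ss_ab, 4))
# 368.0, 339.4286 and 28.5714, matching the values in the text
```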
&lt;br /&gt;
&lt;br /&gt;
This is the value calculated by DOE++ (see the first figure below for the experiment design and the second figure below for the analysis).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.15.png|thumb|center|471px|Unbalanced experimental design for the data in the last table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.16.png|thumb|center|471px|Analysis for the unbalanced data in the last table.]]&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=ANOVA_for_Designed_Experiments&amp;diff=65259</id>
		<title>ANOVA for Designed Experiments</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=ANOVA_for_Designed_Experiments&amp;diff=65259"/>
		<updated>2017-08-26T02:56:40Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Box-Cox Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:Doebook|5}}&lt;br /&gt;
In [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], methods were presented to model the relationship between a response and the associated factors (referred to as predictor variables in the context of regression) based on an observed data set. Such studies, where observed values of the response are used to establish an association between the response and the factors, are called &#039;&#039;observational studies&#039;&#039;. However, in the case of observational studies, it is difficult to establish a cause-and-effect relationship between the observed factors and the response. This is because a number of alternative justifications can be used to explain the observed change in the response values. For example, a regression model fitted to data on the population of cities and road accidents might show a positive regression relation. However, this relation does not imply that an increase in a city&#039;s population causes an increase in road accidents. It could be that a number of other factors such as road conditions, traffic control and the degree to which the residents of the city follow the traffic rules affect the number of road accidents in the city and the increase in the number of accidents seen in the study is caused by these factors. Since the observational study does not take the effect of these factors into account, the assumption that an increase in a city&#039;s population will lead to an increase in road accidents is not a valid one. For example, the population of a city may increase but road accidents in the city may decrease because of better traffic control. To establish a cause-and-effect relationship, the study should be conducted in such a way that the effect of all other factors is excluded from the investigation.&lt;br /&gt;
&lt;br /&gt;
The studies that enable the establishment of a cause-and-effect relationship are called &#039;&#039;experiments&#039;&#039;. In experiments the response is investigated by studying only the effect of the factor(s) of interest and excluding all other effects that may provide alternative justifications to the observed change in response. This is done in two ways. First, the levels of the factors to be investigated are carefully selected and then strictly controlled during the execution of the experiment. The aspect of selecting what factor levels should be investigated in the experiment is called the &#039;&#039;design&#039;&#039; of the experiment. The second distinguishing feature of experiments is that observations in an experiment are recorded in a random order. By doing this, it is hoped that the effect of all other factors not being investigated in the experiment will get cancelled out so that the change in the response is the result of only the investigated factors. Using these two techniques, experiments tend to ensure that alternative justifications to observed changes in the response are voided, thereby enabling the establishment of a cause-and-effect relationship between the response and the investigated factors.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Randomization&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The aspect of recording observations in an experiment in a random order is referred to as &#039;&#039;randomization&#039;&#039;. Specifically, randomization is the process of assigning the various levels of the investigated factors to the experimental units in a random fashion. An experiment is said to be &#039;&#039;completely randomized&#039;&#039; if each experimental unit has an equal probability of being subjected to any level of a factor. The importance of randomization can be illustrated with an example. Consider an experiment where the effect of the speed of a lathe machine on the surface finish of a product is being investigated. In order to save time, the experimenter records surface finish values by running the lathe machine continuously and recording observations in the order of increasing speeds. The analysis of the experiment data shows that an increase in lathe speed causes a decrease in the quality of surface finish. However, the results of the experiment are disputed by the lathe operator, who claims that he has been able to obtain better surface finish quality by operating the lathe machine at higher speeds. It is later found that the faulty results were caused by overheating of the tool used in the machine. Since the lathe was run continuously in the order of increasing speeds, the observations were recorded in the order of increasing tool temperatures. This problem could have been avoided if the experimenter had randomized the experiment and taken readings at the various lathe speeds in a random fashion. This would have required the experimenter to stop and restart the machine at every observation, thereby keeping the temperature of the tool within a reasonable range. Randomization would have ensured that the effect of the heating of the machine tool was not included in the experiment.&lt;br /&gt;
&lt;br /&gt;
==Analysis of Single Factor Experiments==&lt;br /&gt;
&lt;br /&gt;
As explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], the analysis of observational studies involves the use of regression models. The analysis of experimental studies involves the use of analysis of variance (ANOVA) models. For a comparison of the two models see [[ANOVA_for_Designed_Experiments#Fitting_ANOVA_Models|Fitting ANOVA Models]]. In single factor experiments, ANOVA models are used to compare the mean response values at different levels of the factor. Each level of the factor is investigated to see if the response is significantly different from the response at other levels of the factor.  The analysis of single factor experiments is often referred to as &#039;&#039;one-way ANOVA&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To illustrate the use of ANOVA models in the analysis of experiments, consider a single factor experiment where the analyst wants to see if the surface finish of certain parts is affected by the speed of a lathe machine. Data is collected for three speeds (or three treatments). Each treatment is replicated four times. Therefore, this experiment design is balanced. Surface finish values recorded using randomization are shown in the following table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.1.png|thumb|center|400px|Surface finish values for three speeds of a lathe machine.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be stated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}={{\mu }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model assumes that the response at each factor level, &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, is the sum of the mean response at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, and a random error term, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. The subscript &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; denotes the factor level while the subscript &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; denotes the replicate. If there are &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of the factor and &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates at each level then &amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j=1,2,...,m\,\!&amp;lt;/math&amp;gt;. The random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are assumed to be normally and independently distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. Therefore, the response at each level can be thought of as a normally distributed population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and constant variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. The equation given above is referred to as the &#039;&#039;means model&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The means model can also be written using &amp;lt;math&amp;gt;{{\mu }_{i}}=\mu +{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the effect due to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Such an ANOVA model is called the &#039;&#039;effects model&#039;&#039;. In the effects model, the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, represent the deviations from the overall mean, &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;. Therefore, the following constraint exists on the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;s: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Fitting ANOVA Models===&lt;br /&gt;
&lt;br /&gt;
To fit ANOVA models and carry out hypothesis testing in single factor experiments, it is convenient to express the effects model in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt; (the form used for multiple linear regression models in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This can be done as shown next. Using the effects model, the ANOVA model for the single factor experiment in the first table can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment effect. There are three treatments in the first table (500, 600 and 700). Therefore, there are three treatment effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{3}}\,\!&amp;lt;/math&amp;gt;. The following constraint exists for these effects:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   } {{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the first treatment, the ANOVA model for the single factor experiment in the above table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{1j}}=\mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+0\cdot {{\tau }_{3}}+{{\epsilon }_{1j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;, the model for the first treatment is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}-0\cdot ({{\tau }_{1}}+{{\tau }_{2}})+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{or   }{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Models for the second and third treatments can be obtained in a similar way. The models for the three treatments are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{First Treatment}: &amp;amp; {{Y}_{1j}}=1\cdot \mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{Second Treatment}: &amp;amp; {{Y}_{2j}}=1\cdot \mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{2j}} \\ &lt;br /&gt;
\text{Third Treatment}: &amp;amp; {{Y}_{3j}}=1\cdot \mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{3j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The coefficients of the treatment effects &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; can be expressed using two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the indicator variables &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, the ANOVA model for the data in the first table now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{x}_{1}}\cdot {{\tau }_{1}}+{{x}_{2}}\cdot {{\tau }_{2}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation can be rewritten by including subscripts &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; (for the level of the factor) and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; (for the replicate number) as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{x}_{i1}}\cdot {{\tau }_{1}}+{{x}_{i2}}\cdot {{\tau }_{2}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation given above represents the &amp;quot;regression version&amp;quot; of the ANOVA model.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Treat Numerical Factors as Qualitative or Quantitative?===&lt;br /&gt;
&lt;br /&gt;
It can be seen from the equation given above that in an ANOVA model each factor is treated as a qualitative factor. In the present example the factor, lathe speed, is a quantitative factor with three levels. But the ANOVA model treats this factor as a qualitative factor with three levels. Therefore, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required to represent this factor. &lt;br /&gt;
&lt;br /&gt;
Note that in a regression model a variable can be treated as either a quantitative or a qualitative variable. The factor, lathe speed, would be used as a quantitative factor and represented with a single predictor variable in a regression model. For example, if a first order model were to be fitted to the data in the first table, then the regression model would take the form &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. If a second order regression model were to be fitted, the regression model would be &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}x_{i1}^{2}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. Notice that unlike these regression models, the regression version of the ANOVA model does not make any assumption about the nature of the relationship between the response and the factor being investigated.&lt;br /&gt;
&lt;br /&gt;
The choice of treating a particular factor as a quantitative or qualitative variable depends on the objective of the experimenter. In the case of the data of the first table, the objective of the experimenter is to compare the levels of the factor to see if a change in the levels leads to a significant change in the response. The objective is not to make predictions on the response for a given level of the factor. Therefore, the factor is treated as a qualitative factor in this case. If the objective of the experimenter were prediction or optimization, the experimenter would focus on aspects such as the nature of the relationship between the factor, lathe speed, and the response, surface finish, and the factor would be modeled as a quantitative factor so that accurate predictions could be made.&lt;br /&gt;
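The contrast between the two encodings can be sketched as follows. This is a minimal illustration in Python/NumPy; the actual lathe speed values are not given in this excerpt, so the `speeds` array is hypothetical and serves only to show how the quantitative encoding would differ from the effect-coded qualitative one.

```python
import numpy as np

# Hypothetical lathe speeds for the three levels (the actual speeds are not
# stated in this excerpt; these values are placeholders for illustration).
speeds = np.array([50.0, 75.0, 100.0])

# Qualitative (ANOVA) encoding: each level maps to effect-coded indicator
# variables (x1, x2); the third level is coded (-1, -1).
effect_codes = {0: (1, 0), 1: (0, 1), 2: (-1, -1)}

def anova_row(level):
    """Row of the ANOVA design matrix [1, x1, x2] for a given factor level."""
    x1, x2 = effect_codes[level]
    return [1, x1, x2]

def regression_row(level):
    """Row of a first-order regression design matrix [1, speed], treating
    the factor as quantitative with a single predictor variable."""
    return [1, speeds[level]]

X_anova = np.array([anova_row(i) for i in (0, 1, 2)])
X_regression = np.array([regression_row(i) for i in (0, 1, 2)])
```

Note that the ANOVA encoding uses two columns regardless of the numeric spacing of the levels, while the regression encoding commits to a functional form in the single speed column.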
&lt;br /&gt;
====Expression of the ANOVA Model in the Form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be expanded for the three treatments and four replicates of the data in the first table as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{11}}= &amp;amp; 6=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{11}}\text{    Level 1, Replicate 1} \\ &lt;br /&gt;
{{Y}_{21}}= &amp;amp; 13=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{21}}\text{  Level 2, Replicate 1} \\ &lt;br /&gt;
{{Y}_{31}}= &amp;amp; 23=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{31}}\text{  Level 3, Replicate 1} \\ &lt;br /&gt;
{{Y}_{12}}= &amp;amp; 13=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{12}}\text{  Level 1, Replicate 2} \\ &lt;br /&gt;
{{Y}_{22}}= &amp;amp; 16=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{22}}\text{  Level 2, Replicate 2} \\ &lt;br /&gt;
{{Y}_{32}}= &amp;amp; 20=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{32}}\text{  Level 3, Replicate 2} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; ... \\ &lt;br /&gt;
{{Y}_{34}}= &amp;amp; 18=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{34}}\text{  Level 3, Replicate 4}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The corresponding matrix notation is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{11}}  \\&lt;br /&gt;
   {{Y}_{21}}  \\&lt;br /&gt;
   {{Y}_{31}}  \\&lt;br /&gt;
   {{Y}_{12}}  \\&lt;br /&gt;
   {{Y}_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{34}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y= &amp;amp; X\beta +\epsilon  \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   23  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   16  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The matrices &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; are used in the calculation of the sum of squares in the next section. The data in the first table can be entered into DOE++ as shown in the figure below.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.1.png|thumb|center|550px|Single factor experiment design for the data in the first table.]]&lt;br /&gt;
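The construction of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; can be sketched in Python/NumPy. The full set of 12 observations is only partially shown in this excerpt; the values below are reconstructed to be consistent with the quantities reported in the text (treatment means of 8.5 and 13.25, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; = 232.1667, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; = 306.6667), and the ordering of the later replicates within each level is an assumption.

```python
import numpy as np

# Response values per treatment level, reconstructed from the summary
# statistics quoted in the text; within-level ordering of replicates 3 and 4
# is an assumption (it does not affect any sum-of-squares calculation).
data = {1: [6, 13, 7, 8],     # level 1
        2: [13, 16, 14, 10],  # level 2
        3: [23, 20, 16, 18]}  # level 3
effect_codes = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}

rows, obs = [], []
# Rows cycle through the three levels within each replicate, matching the
# ordering Y11, Y21, Y31, Y12, ... used in the expanded model above.
for replicate in range(4):
    for level in (1, 2, 3):
        x1, x2 = effect_codes[level]
        rows.append([1, x1, x2])
        obs.append(data[level][replicate])

X = np.array(rows, dtype=float)  # 12 x 3 design matrix
y = np.array(obs, dtype=float)   # 12 x 1 response vector
```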
&lt;br /&gt;
===Hypothesis Test in Single Factor Experiments===&lt;br /&gt;
&lt;br /&gt;
The hypothesis test in single factor experiments examines the ANOVA model to see if the response at any level of the investigated factor is significantly different from that at the other levels. If this is not the case and the response at all levels is not significantly different, then it can be concluded that the investigated factor does not affect the response. The test on the ANOVA model is carried out by checking to see if any of the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, are non-zero. The test is similar to the test of significance of regression mentioned in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] in the context of regression models. The hypotheses statements for this test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0 \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\tau }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test for &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is carried out using the following statistic:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{TR}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the ANOVA model and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. Note that in the case of ANOVA models we use the notation &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment mean square) for the model mean square and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment sum of squares) for the model sum of squares (instead of &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression mean square, and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression sum of squares, used in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This is done to indicate that the model under consideration is the ANOVA model and not the regression model. The calculations to obtain &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; are identical to the calculations to obtain &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt; explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The sum of squares to obtain the statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt; can be calculated as explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. Using the data in the first table, the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.1667 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.1667 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.1667  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 232.1667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the number of levels of the factor, &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; represents the number of replicates at each level, &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; represents the vector of the response values, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; represents the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; represents the matrix of ones. (For details on each of these terms, refer to [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]].)&lt;br /&gt;
Since two effect terms, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are used in the regression version of the ANOVA model, the degrees of freedom associated with the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, is two.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be obtained as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.9167 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.9167 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.9167  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 306.6667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;I\,\!&amp;lt;/math&amp;gt; is the identity matrix. Since there are 12 data points in all, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; is 11. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{T}})=11\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
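The two sums of squares above can be checked numerically with the hat-matrix formulas. This sketch uses response values reconstructed from the summary statistics reported in the text (the within-level ordering of the later replicates is an assumption; it does not affect the sums of squares).

```python
import numpy as np

# Response vector ordered Y11, Y21, Y31, Y12, ... as in the expanded model;
# values reconstructed to match the reported treatment means and totals.
y = np.array([6, 13, 23, 13, 16, 20, 7, 14, 16, 8, 10, 18], dtype=float)
# Effect-coded design matrix: the three level rows repeat for each replicate.
X = np.array([[1, 1, 0], [1, 0, 1], [1, -1, -1]] * 4, dtype=float)

n = len(y)                             # n_a * m = 3 * 4 = 12
H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
J = np.ones((n, n))                    # matrix of ones
I = np.eye(n)                          # identity matrix

ss_tr = y @ (H - J / n) @ y            # treatment (model) sum of squares
ss_t = y @ (I - J / n) @ y             # total sum of squares
ss_e = ss_t - ss_tr                    # error sum of squares
```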
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 306.6667-232.1667 \\ &lt;br /&gt;
= &amp;amp; 74.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 11-2 \\ &lt;br /&gt;
= &amp;amp; 9  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistic can now be calculated using the equation given in [[ANOVA_for_Designed_Experiments#Hypothesis_Test_in_Single_Factor_Experiments|Hypothesis Test in Single Factor Experiments]] as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{f}_{0}}= &amp;amp; \frac{M{{S}_{TR}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{TR}}/dof(S{{S}_{TR}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{232.1667/2}{74.5/9} \\ &lt;br /&gt;
= &amp;amp; 14.0235  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value for the statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 9 degrees of freedom in the denominator is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{f}_{0}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.9983 \\ &lt;br /&gt;
= &amp;amp; 0.0017  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
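The statistic and its &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value can be verified with SciPy's &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution. This is a quick independent check using the sums of squares quoted above, not a reproduction of the DOE++ calculation; it assumes SciPy is available.

```python
from scipy.stats import f

# Sums of squares and degrees of freedom from the derivation above
ss_tr, ss_t = 232.1667, 306.6667
dof_tr, dof_t = 2, 11
ss_e = ss_t - ss_tr          # 74.5
dof_e = dof_t - dof_tr       # 9

ms_tr = ss_tr / dof_tr       # treatment mean square
ms_e = ss_e / dof_e          # error mean square
f0 = ms_tr / ms_e            # test statistic

# p value = 1 - P(F <= f0) for F with (2, 9) degrees of freedom
p_value = 1 - f.cdf(f0, dof_tr, dof_e)
```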
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that a change in the lathe speed has a significant effect on the surface finish. DOE++ displays these results in the ANOVA table, as shown in the figure below. The values of S and R-sq are the standard error and the coefficient of determination for the model, respectively. These values are explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] and indicate how well the model fits the data. The values in the figure below indicate that the fit of the ANOVA model is fair.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.2.png|thumb|center|650px|ANOVA table for the data in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the &#039;&#039;i&#039;&#039;&amp;lt;sup&amp;gt;th&amp;lt;/sup&amp;gt; Treatment Mean===&lt;br /&gt;
&lt;br /&gt;
The response at each treatment of a single factor experiment can be assumed to be a normal population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt; provided that the error terms can be assumed to be normally distributed. A point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is the average response at each treatment, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;. Since this is a sample average, the associated variance is &amp;lt;math&amp;gt;{{\sigma }^{2}}/{{m}_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;{{m}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of replicates at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment. Therefore, the confidence interval on &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution. Recall from [[Statistical_Background_on_DOE| Statistical Background on DOE]] (inference on population mean when variance is unknown) that: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}/{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{M{{S}_{E}}/{{m}_{i}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
has a &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with degrees of freedom &amp;lt;math&amp;gt;=dof(S{{S}_{E}})\,\!&amp;lt;/math&amp;gt;. Therefore, a 100 (&amp;lt;math&amp;gt;1-\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment mean, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, for the first treatment of the lathe speed we have:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}= &amp;amp; {{{\bar{y}}}_{1\cdot }} \\ &lt;br /&gt;
= &amp;amp; \frac{6+13+7+8}{4} \\ &lt;br /&gt;
= &amp;amp; 8.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In DOE++, this value is displayed as the Estimated Mean for the first level, as shown in the Data Summary table in the figure below. The value displayed as the standard deviation for this level is simply the sample standard deviation calculated using the observations corresponding to this level.  The 90% confidence interval for this treatment is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{M{{S}_{E}}}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833\sqrt{\frac{(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833(1.44) \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 2.64  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}\,\!&amp;lt;/math&amp;gt; are 5.9 and 11.1, respectively. &lt;br /&gt;
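The confidence interval calculation above can be reproduced with SciPy's &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution (assuming SciPy is available), using the error mean square and degrees of freedom obtained earlier.

```python
from scipy.stats import t

ms_e = 74.5 / 9   # error mean square from the ANOVA calculations
dof_e = 9         # degrees of freedom of SS_E
m = 4             # replicates at the first treatment
y_bar_1 = 8.5     # estimated mean of the first treatment

# 90% two-sided interval: alpha = 0.1, so each tail gets 0.05
half_width = t.ppf(1 - 0.05, dof_e) * (ms_e / m) ** 0.5
lower, upper = y_bar_1 - half_width, y_bar_1 + half_width
```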
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.3.png|thumb|center|650px|Data Summary table for the single factor experiment in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the Difference in Two Treatment Means===&lt;br /&gt;
&lt;br /&gt;
The confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is used to compare two levels of the factor at a given significance level. If the confidence interval does not include the value of zero, it is concluded that the two levels of the factor are significantly different. The point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt;. The variance for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})= &amp;amp; var({{{\bar{y}}}_{i\cdot }})+var({{{\bar{y}}}_{j\cdot }}) \\ &lt;br /&gt;
= &amp;amp; {{\sigma }^{2}}/{{m}_{i}}+{{\sigma }^{2}}/{{m}_{j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For balanced designs all &amp;lt;math&amp;gt;{{m}_{i}}=m\,\!&amp;lt;/math&amp;gt;. Therefore:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})=2{{\sigma }^{2}}/m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The standard deviation for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; can be obtained by taking the square root of &amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})\,\!&amp;lt;/math&amp;gt; and is referred to as the pooled standard error:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for the difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2{{{\hat{\sigma }}}^{2}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2M{{S}_{E}}/m}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then a 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, an estimate of the difference in the first and second treatment means of the lathe speed, &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}-{{{\hat{\mu }}}_{2}}= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }} \\ &lt;br /&gt;
= &amp;amp; 8.5-13.25 \\ &lt;br /&gt;
= &amp;amp; -4.75  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The pooled standard error for this difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2M{{S}_{E}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 2.0344  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To test &amp;lt;math&amp;gt;{{H}_{0}}:{{\mu }_{1}}-{{\mu }_{2}}=0\,\!&amp;lt;/math&amp;gt;, the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}-({{\mu }_{1}}-{{\mu }_{2}})}{\sqrt{2M{{S}_{E}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75-(0)}{\sqrt{\tfrac{2(74.5/9)}{4}}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75}{2.0344} \\ &lt;br /&gt;
= &amp;amp; -2.3348  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In DOE++, the value of the statistic is displayed in the Mean Comparisons table under the column T Value as shown in the figure below. The 90% confidence interval on the difference &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833\sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833(2.0344) \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 3.729  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hence the 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-8.479\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-1.021\,\!&amp;lt;/math&amp;gt;, respectively. These values are displayed under the Low CI and High CI columns in the following figure. Since the confidence interval for this pair of means does not include zero, it can be concluded that these means are significantly different at 90% confidence. This conclusion can also be arrived at using the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value, noting that the hypothesis is two-sided. The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to the statistic &amp;lt;math&amp;gt;{{t}_{0}}=-2.3348\,\!&amp;lt;/math&amp;gt;, based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 9 degrees of freedom, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 2\times (1-P(T\le |{{t}_{0}}|)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-P(T\le 2.3348)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-0.9778) \\ &lt;br /&gt;
= &amp;amp; 0.0444  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, the means are significantly different at 90% confidence. Bounds on the difference between other treatment pairs can be obtained in a similar manner and it is concluded that all treatments are significantly different.&lt;br /&gt;
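The comparison of the first two treatment means can be reproduced numerically with SciPy (assuming SciPy is available): the statistic, its two-sided &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value and the 90% confidence bounds all follow from &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; and the two treatment averages.

```python
from scipy.stats import t

ms_e = 74.5 / 9              # error mean square
dof_e, m = 9, 4              # error dof and replicates per level
diff = 8.5 - 13.25           # estimate of mu_1 - mu_2

se = (2 * ms_e / m) ** 0.5   # pooled standard error
t0 = diff / se               # t statistic for H0: mu_1 - mu_2 = 0

# Two-sided p value
p_value = 2 * (1 - t.cdf(abs(t0), dof_e))

# 90% confidence interval on the difference
half_width = t.ppf(1 - 0.05, dof_e) * se
ci = (diff - half_width, diff + half_width)
```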
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
&lt;br /&gt;
Plots of residuals, &amp;lt;math&amp;gt;{{e}_{ij}}\,\!&amp;lt;/math&amp;gt;, similar to the ones discussed in the previous chapters on regression, are used to ensure that the assumptions associated with the ANOVA model are not violated.  The ANOVA model assumes that the random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are normally and independently distributed with the same variance for each treatment. The normality assumption can be checked by obtaining a normal probability plot of the residuals. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.4.png|thumb|center|644px|Mean Comparisons table for the data in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equality of variance is checked by plotting residuals against the treatments and the treatment averages, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;  (also referred to as fitted values), and inspecting the spread in the residuals. If a pattern is seen in these plots, then this indicates the need to use a suitable transformation on the response that will ensure variance equality. Box-Cox transformations are discussed in the next section. To check for independence of the random error terms, residuals are plotted against time or run order to ensure that a pattern does not exist in these plots. Residual plots for the given example are shown in the following two figures. The plots show that the assumptions associated with the ANOVA model are not violated.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.5.png|thumb|center|550px|Normal probability plot of residuals for the single factor experiment in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.6.png|thumb|center|550px|Plot of residuals against fitted values for the single factor experiment in the first table.]]&lt;br /&gt;
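For the ANOVA model, the fitted value for every observation is simply its treatment average, so the residuals are easy to compute directly. This sketch uses the reconstructed response values from earlier (within-level ordering of the later replicates is an assumption; it does not change the residual spread per treatment).

```python
import numpy as np

# Observations arranged one row per treatment level, one column per replicate;
# values reconstructed from the summary statistics reported in the text.
data = np.array([[6, 13, 7, 8],
                 [13, 16, 14, 10],
                 [23, 20, 16, 18]], dtype=float)

# Fitted values are the treatment averages y_bar_i.
fitted = data.mean(axis=1, keepdims=True)
residuals = data - fitted

# Residuals within each treatment sum to zero by construction; the spread of
# each row is what the equality-of-variance plots examine.
per_treatment_sums = residuals.sum(axis=1)
```

These residuals are what would be placed on a normal probability plot and plotted against the fitted values to check the normality and equal-variance assumptions.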
&lt;br /&gt;
&lt;br /&gt;
===Box-Cox Method===&lt;br /&gt;
&lt;br /&gt;
Transformations on the response may be used when residual plots for an experiment show a pattern. This indicates that the equality of variance does not hold for the residuals of the given model. The Box-Cox method can be used to automatically identify a suitable power transformation for the data based on the following relationship:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}^{*}}={{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is determined using the given data such that &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is minimized. The values of &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; are not used as is because of issues related to calculation or comparison of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values for different values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. For example, for &amp;lt;math&amp;gt;\lambda =0\,\!&amp;lt;/math&amp;gt; all response values will become 1. Therefore, the following relationship is used to obtain &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y^{\lambda }=\left\{ &lt;br /&gt;
\begin{array}{cc}&lt;br /&gt;
\frac{y^{\lambda }-1}{\lambda \dot{y}^{\lambda -1}} &amp;amp; \lambda \neq 0 \\ &lt;br /&gt;
\dot{y}\ln y &amp;amp; \lambda =0&lt;br /&gt;
\end{array}&lt;br /&gt;
\right.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\dot{y}=\exp \left[ \frac{1}{n}\sum\limits_{i=1}^{n}\ln y_{i}\right]\,\!&amp;lt;/math&amp;gt; is the geometric mean of the response values.&lt;br /&gt;
Once all &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; values are obtained for a value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, the corresponding &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for these values is obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;. The process is repeated for a number of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values to obtain a plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then the value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is selected as the required transformation for the given data. DOE++ plots &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values because the range of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values is large; otherwise, all of the values could not be displayed on the same plot. The range of search for the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value in the software is from &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt;, because larger values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are usually not meaningful. DOE++ also displays a recommended transformation based on the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value obtained, as shown in the table below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.2.png|thumb|center|400px|Recommended Box-Cox power transformations.]]&lt;br /&gt;
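The lambda search described above can be sketched in a few lines. This is a minimal illustration under stated assumptions (it is not DOE++'s implementation): X is the design matrix of the fitted model, and the grid of lambda values is assumed to run from -5 to 5 as in the text.&lt;br /&gt;

```python
import numpy as np

def boxcox_scaled(y, lam):
    """Scaled Box-Cox transform; the geometric-mean factor makes
    SSE values comparable across different values of lambda."""
    y = np.asarray(y, dtype=float)
    gm = np.exp(np.mean(np.log(y)))              # geometric mean, y-dot
    if lam == 0:
        return gm * np.log(y)
    return (y ** lam - 1.0) / (lam * gm ** (lam - 1.0))

def sse(y_t, X):
    """SSE = y'[I - H]y with hat matrix H = X (X'X)^(-1) X'."""
    H = X @ np.linalg.pinv(X.T @ X) @ X.T
    return float(y_t @ (np.eye(len(y_t)) - H) @ y_t)

def best_lambda(y, X, grid=np.linspace(-5, 5, 201)):
    """Return the lambda on the grid that minimizes SSE."""
    sses = [sse(boxcox_scaled(y, lam), X) for lam in grid]
    return grid[int(np.argmin(sses))]
```

Note that for lambda = 1 the scaled transform reduces to a simple shift of the responses, which is why a best lambda near 1 indicates that no transformation is needed.&lt;br /&gt;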
&lt;br /&gt;
&lt;br /&gt;
Confidence intervals on the selected &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values are also available. Let &amp;lt;math&amp;gt;S{{S}_{E}}(\lambda )\,\!&amp;lt;/math&amp;gt; be the value of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; corresponding to the selected value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then, to calculate the 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence intervals on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, we need to calculate &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}^{*}}=S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The required limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are the two values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the value &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; (on the plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;). If the limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; do not include the value of one, then the transformation is applicable for the given data.&lt;br /&gt;
Note that the power transformations are not defined for response values that are negative or zero. DOE++ deals with negative and zero response values using the following equations (that involve addition of a suitable quantity to all of the response values if a zero or negative response value is encountered). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; y(i)= &amp;amp; y(i)+\left| {{y}_{\min }} \right|\times 1.1\text{        Negative Response} \\ &lt;br /&gt;
 &amp;amp; y(i)= &amp;amp; y(i)+1\text{                          Zero Response}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here &amp;lt;math&amp;gt;{{y}_{\min }}\,\!&amp;lt;/math&amp;gt; represents the minimum response value and &amp;lt;math&amp;gt;\left| {{y}_{\min }} \right|\,\!&amp;lt;/math&amp;gt; represents the absolute value of the minimum response. &lt;br /&gt;
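The shifting rule above can be written directly. A minimal sketch follows; the 1.1 multiplier and the offset of 1 come from the equations just given:&lt;br /&gt;

```python
import numpy as np

def shift_for_boxcox(y):
    """Shift responses so that all values are positive, as required by
    the power transformations (rule described in the text above)."""
    y = np.asarray(y, dtype=float)
    y_min = y.min()
    if y_min < 0:
        return y + abs(y_min) * 1.1   # negative response encountered
    if y_min == 0:
        return y + 1.0                # zero response encountered
    return y                          # already all positive: no shift
```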
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
To illustrate the Box-Cox method, consider the experiment given in the first table. Transformed response values for various values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; can be calculated using the equation for &amp;lt;math&amp;gt;{Y}^{\lambda}\,\!&amp;lt;/math&amp;gt; given in [[ANOVA_for_Designed_Experiments#Box-Cox_Method|Box-Cox Method]]. Knowing the hat matrix, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values corresponding to each of these &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values can easily be obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values calculated for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values between &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt; for the given data are shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \lambda  &amp;amp; {} &amp;amp; S{{S}_{E}} &amp;amp; \ln S{{S}_{E}}  \\&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   -5 &amp;amp; {} &amp;amp; 5947.8 &amp;amp; 8.6908  \\&lt;br /&gt;
   -4 &amp;amp; {} &amp;amp; 1946.4 &amp;amp; 7.5737  \\&lt;br /&gt;
   -3 &amp;amp; {} &amp;amp; 696.5 &amp;amp; 6.5461  \\&lt;br /&gt;
   -2 &amp;amp; {} &amp;amp; 282.2 &amp;amp; 5.6425  \\&lt;br /&gt;
   -1 &amp;amp; {} &amp;amp; 135.8 &amp;amp; 4.9114  \\&lt;br /&gt;
   0 &amp;amp; {} &amp;amp; 83.9 &amp;amp; 4.4299  \\&lt;br /&gt;
   1 &amp;amp; {} &amp;amp; 74.5 &amp;amp; 4.3108  \\&lt;br /&gt;
   2 &amp;amp; {} &amp;amp; 101.0 &amp;amp; 4.6154  \\&lt;br /&gt;
   3 &amp;amp; {} &amp;amp; 190.4 &amp;amp; 5.2491  \\&lt;br /&gt;
   4 &amp;amp; {} &amp;amp; 429.5 &amp;amp; 6.0627  \\&lt;br /&gt;
   5 &amp;amp; {} &amp;amp; 1057.6 &amp;amp; 6.9638  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A plot of &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for various &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values, as obtained from DOE++, is shown in the following figure. The value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; that gives the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is identified as 0.7841. The &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; value corresponding to this value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is 73.74. A 90% confidence interval on this &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value is calculated as follows. &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; can be obtained as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}^{*}}= &amp;amp; S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{t_{0.05,9}^{2}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{{{1.833}^{2}}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 101.27  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\ln S{{S}^{*}}=4.6178\,\!&amp;lt;/math&amp;gt;. The &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values corresponding to this value on the following figure are &amp;lt;math&amp;gt;-0.4686\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0052\,\!&amp;lt;/math&amp;gt;. Therefore, the 90% confidence limits on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-0.4686\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0052\,\!&amp;lt;/math&amp;gt;. Since the confidence limits include the value of 1, a transformation is not required for the data in the first table.&lt;br /&gt;
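The arithmetic of this example can be checked in a few lines; the inputs (SSE at the best lambda of 73.74, 9 error degrees of freedom, and the tabulated t value of 1.833) are the values quoted in the text above:&lt;br /&gt;

```python
import math

sse_lambda = 73.74      # SSE at the best lambda (from the example)
dof = 9                 # error degrees of freedom
t_crit = 1.833          # tabulated t value for alpha/2 = 0.05 and 9 dof

# SS* = SSE(lambda) * (1 + t^2 / dof), as in the confidence bound equation.
ss_star = sse_lambda * (1 + t_crit ** 2 / dof)
ln_ss_star = math.log(ss_star)
# ss_star is about 101.27 and ln(ss_star) about 4.6178, matching the example.
```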
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:doe6.7.png|thumb|center|400px|Box-Cox power transformation plot for the data in the first table.]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Experiments with Several Factors - Factorial Experiments==&lt;br /&gt;
&lt;br /&gt;
Experiments with two or more factors are encountered frequently. The best way to carry out such experiments is by using factorial experiments. Factorial experiments are experiments in which all combinations of factor levels are investigated in each replicate of the experiment. Factorial experiments are the only means to completely and systematically study interactions between factors in addition to identifying significant factors. One-factor-at-a-time experiments (where each factor is investigated separately by keeping all the remaining factors constant) do not reveal the interaction effects between the factors. Further, in one-factor-at-a-time experiments, full randomization is not possible.&lt;br /&gt;
&lt;br /&gt;
To illustrate factorial experiments, consider an experiment where the response is investigated for two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. Assume that the response is studied at two levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; representing the lower level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; representing the higher level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;. Similarly, let &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; represent the two levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; that are being investigated in this experiment. Since there are two factors with two levels, a total of &amp;lt;math&amp;gt;2\times 2=4\,\!&amp;lt;/math&amp;gt; combinations exist (&amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt;). Thus, four runs are required for each replicate if a factorial experiment is to be carried out in this case. Assume that the response values for each of these four possible combinations are obtained as shown in the third table.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.3.png|thumb|center|400px|Two-factor factorial experiment.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.8.png|thumb|center|400px|Interaction plot for the data in the third table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Investigating Factor Effects===&lt;br /&gt;
&lt;br /&gt;
The effect of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response can be obtained by taking the difference between the average response when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is high and the average response when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is low. The change in the response due to a change in the level of a factor is called the main effect of the factor. The main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; as per the response values in the third table is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{45+55}{2}-\frac{25+35}{2} \\ &lt;br /&gt;
= &amp;amp; 50-30 \\ &lt;br /&gt;
= &amp;amp; 20  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from the lower level to the higher level, the response increases by 20 units. A plot of the response for the two levels of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at different levels of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is shown in the figure above. The plot shows that a change in the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; leads to an increase in the response by 20 units regardless of the level of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. Therefore, no interaction exists in this case, as indicated by the parallel lines on the plot. The main effect of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be obtained as: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
B= &amp;amp; Average\text{ }response\text{ }at\text{ }{{B}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{B}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{35+55}{2}-\frac{25+45}{2} \\ &lt;br /&gt;
= &amp;amp; 45-35 \\ &lt;br /&gt;
= &amp;amp; 10  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
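The two main-effect calculations above can be sketched in a few lines; a minimal illustration using the four response values quoted for the third table (25, 35, 45 and 55):&lt;br /&gt;

```python
# Responses keyed by (level of A, level of B); values are from the text.
resp = {("low", "low"): 25, ("low", "high"): 35,
        ("high", "low"): 45, ("high", "high"): 55}

def main_effect(resp, factor):
    """Average response at the high level of a factor minus the average
    response at its low level (factor = 0 for A, 1 for B)."""
    high = [v for k, v in resp.items() if k[factor] == "high"]
    low = [v for k, v in resp.items() if k[factor] == "low"]
    return sum(high) / len(high) - sum(low) / len(low)

effect_A = main_effect(resp, 0)   # 50 - 30 = 20
effect_B = main_effect(resp, 1)   # 45 - 35 = 10
```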
&lt;br /&gt;
&lt;br /&gt;
===Investigating Interactions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now assume that the response values for each of the four treatment combinations were obtained as shown in the fourth table. The main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; in this case is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{40+10}{2}-\frac{20+30}{2} \\ &lt;br /&gt;
= &amp;amp; 25-25 \\ &lt;br /&gt;
= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.4.png|thumb|center|400px|Two factor factorial experiment.]]&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
It appears that &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; does not have an effect on the response. However, a plot of the response at the two levels of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; for different levels of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; shows that the response does change with the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, but the effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response depends on the level of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (see the figure below). Therefore, an interaction between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; exists in this case (as indicated by the non-parallel lines in the figure). The interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be calculated as follows: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.9.png|thumb|center|400px|Interaction plot for the data in the fourth table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
AB= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}\text{-}{{B}_{\text{high}}}\text{ }and\text{ }{{A}_{\text{low}}}\text{-}{{B}_{\text{low}}}- \\ &lt;br /&gt;
 &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}}\text{-}{{B}_{\text{high}}}\text{ }and\text{ }{{A}_{\text{high}}}\text{-}{{B}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{10+20}{2}-\frac{40+30}{2} \\ &lt;br /&gt;
= &amp;amp; -20  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that in this case, if a one-factor-at-a-time experiment were used to investigate the effect of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response, it would lead to incorrect conclusions. For example, if the response were studied by holding &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; constant at its lower level, then the main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; would be obtained as &amp;lt;math&amp;gt;40-20=20\,\!&amp;lt;/math&amp;gt;, indicating that the response increases by 20 units when the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from low to high. On the other hand, if the response were studied by holding &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; constant at its higher level, then the main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; would be obtained as &amp;lt;math&amp;gt;10-30=-20\,\!&amp;lt;/math&amp;gt;, indicating that the response decreases by 20 units when the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from low to high.&lt;br /&gt;
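The interaction calculation for the fourth table, and the contradictory one-factor-at-a-time comparisons just described, can be sketched the same way; the response values are those quoted in the text:&lt;br /&gt;

```python
# Responses keyed by (level of A, level of B) for the fourth table.
resp = {("low", "low"): 20, ("low", "high"): 30,
        ("high", "low"): 40, ("high", "high"): 10}

# Interaction: average response where A and B are at matching levels
# minus the average response where they are at opposite levels.
same = (resp[("high", "high")] + resp[("low", "low")]) / 2
cross = (resp[("low", "high")] + resp[("high", "low")]) / 2
effect_AB = same - cross   # (10 + 20)/2 - (30 + 40)/2 = -20

# One-factor-at-a-time comparisons give contradictory answers here:
effect_A_at_B_low = resp[("high", "low")] - resp[("low", "low")]     # +20
effect_A_at_B_high = resp[("high", "high")] - resp[("low", "high")]  # -20
```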
&lt;br /&gt;
==Analysis of General Factorial Experiments==&lt;br /&gt;
&lt;br /&gt;
In DOE++, factorial experiments are referred to as &#039;&#039;factorial designs&#039;&#039;. The experiments explained in this section are referred to as &#039;&#039;general factorial designs&#039;&#039;. This is done to distinguish these experiments from the other factorial designs supported by DOE++ (see the figure below). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.10.png|thumb|center|518px|Factorial experiments available in DOE++.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The other designs (such as the two level full factorial designs that are explained in [[Two_Level_Factorial_Experiments| Two Level Factorial Experiments]]) are special cases of these experiments in which factors are limited to a specified number of levels. The ANOVA model for the analysis of factorial experiments is formulated as shown next. Assume a factorial experiment in which the effect of two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, on the response is being investigated. Let there be &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{n}_{b}}\,\!&amp;lt;/math&amp;gt; levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The ANOVA model for this experiment can be stated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\tau }_{i}}+{{\delta }_{j}}+{{(\tau \delta )}_{ij}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean effect&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;j=1,2,...,{{n}_{b}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt; represents the random error terms (which are assumed to be normally distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*and the subscript &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates (&amp;lt;math&amp;gt;k=1,2,...,m\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represent deviations from the overall mean, the following constraints exist: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{j=1}{\overset{{{n}_{b}}}{\mathop \sum }}\,{{\delta }_{j}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{j=1}{\overset{{{n}_{b}}}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Hypothesis Tests in General Factorial Experiments===&lt;br /&gt;
These tests are used to check whether each of the factors investigated in the experiment is significant or not. For the previous example, with two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and their interaction, &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;, the statements for the hypothesis tests can be formulated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \begin{matrix}&lt;br /&gt;
   1. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0\text{    (Main effect of }A\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\tau }_{i}}\ne 0\text{    for at least one }i \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   2. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\delta }_{1}}={{\delta }_{2}}=...={{\delta }_{{{n}_{b}}}}=0\text{    (Main effect of }B\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\delta }_{j}}\ne 0\text{    for at least one }j \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   3. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{12}}=...={{(\tau \delta )}_{{{n}_{a}}{{n}_{b}}}}=0\text{    (Interaction }AB\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{(\tau \delta )}_{ij}}\ne 0\text{    for at least one }ij  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistics for the three tests are as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::1)&amp;lt;math&amp;gt;{(F_{0})}_{A} = \frac{MS_{A}}{MS_{E}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{A}\,\!&amp;lt;/math&amp;gt; is the mean square due to factor &amp;lt;math&amp;gt;{A}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{MS_E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::2)&amp;lt;math&amp;gt;{(F_{0})_{B}} = \frac{MS_B}{MS_E}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{B}\,\!&amp;lt;/math&amp;gt; is the mean square due to factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;MS_{E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::3)&amp;lt;math&amp;gt;{(F_{0})_{AB}} = \frac{MS_{AB}}{MS_{E}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{AB}\,\!&amp;lt;/math&amp;gt; is the mean square due to interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;MS_{E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The tests are identical to the partial &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; test explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. The sums of squares for these tests (used to obtain the mean squares) are calculated by splitting the model sum of squares into the extra sum of squares due to each factor. The extra sum of squares calculated for each of the factors may be either partial or sequential. For the present example, if the extra sum of squares used is sequential, then the model sum of squares can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{TR}}=S{{S}_{A}}+S{{S}_{B}}+S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{B}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
The mean squares are obtained by dividing the sum of squares by the associated degrees of freedom. Once the mean squares are known the test statistics can be calculated. For example, the test statistic to test the significance of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (or the hypothesis &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;) can then be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{F}_{0}})}_{A}}= &amp;amp; \frac{M{{S}_{A}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{A}}/dof(S{{S}_{A}})}{S{{S}_{E}}/dof(S{{S}_{E}})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Similarly, the test statistics to test the significance of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be obtained, respectively, as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{F}_{0}})}_{B}}= &amp;amp; \frac{M{{S}_{B}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{B}}/dof(S{{S}_{B}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
{{({{F}_{0}})}_{AB}}= &amp;amp; \frac{M{{S}_{AB}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{AB}}/dof(S{{S}_{AB}})}{S{{S}_{E}}/dof(S{{S}_{E}})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is recommended to conduct the test for interactions before conducting the test for the main effects. This is because, if an interaction is present, then the main effect of the factor depends on the level of the other factors and looking at the main effect is of little value. However, if the interaction is absent then the main effects become important.&lt;br /&gt;
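For a balanced design, the sums of squares and test statistics above can be computed directly from the treatment and cell averages. The following is a minimal sketch (not DOE++'s implementation), where the array shape and data are assumptions for illustration:&lt;br /&gt;

```python
import numpy as np

def two_way_anova_f(y):
    """F statistics for a balanced two-factor experiment.
    y has shape (na, nb, m): levels of A, levels of B, replicates."""
    na, nb, m = y.shape
    grand = y.mean()
    a_avg = y.mean(axis=(1, 2))                 # level averages of A
    b_avg = y.mean(axis=(0, 2))                 # level averages of B
    cell = y.mean(axis=2)                       # treatment-combination averages
    ss_a = nb * m * np.sum((a_avg - grand) ** 2)
    ss_b = na * m * np.sum((b_avg - grand) ** 2)
    ss_ab = m * np.sum((cell - a_avg[:, None] - b_avg[None, :] + grand) ** 2)
    ss_e = np.sum((y - cell[:, :, None]) ** 2)
    ms_e = ss_e / (na * nb * (m - 1))           # error mean square
    f_a = ss_a / (na - 1) / ms_e
    f_b = ss_b / (nb - 1) / ms_e
    f_ab = ss_ab / ((na - 1) * (nb - 1)) / ms_e
    return f_a, f_b, f_ab
```

Each returned statistic would be compared against the critical value of the F distribution with the corresponding numerator degrees of freedom and the error degrees of freedom.&lt;br /&gt;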
&lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
Consider an experiment to investigate the effect of speed and type of fuel additive used on the mileage of a sports utility vehicle. Three speeds and two types of fuel additives are investigated. Each of the treatment combinations is replicated three times. The mileage values observed are displayed in the fifth table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.5.png|thumb|center|400px|Mileage data for different speeds and fuel additive types.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The experimental design for the data in the fifth table is shown in the figure below. In the figure, the factor Speed is represented as factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and the factor Fuel Additive is represented as factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The experimenter would like to investigate if speed, fuel additive or the interaction between speed and fuel additive affects the mileage of the sports utility vehicle. In other words, the following hypotheses need to be tested:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \begin{matrix}&lt;br /&gt;
   1. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}={{\tau }_{3}}=0\text{   (No main effect of factor }A\text{, speed)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\tau }_{i}}\ne 0\text{    for at least one }i \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   2. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\delta }_{1}}={{\delta }_{2}}=0\text{    (No main effect of factor }B\text{, fuel additive)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\delta }_{j}}\ne 0\text{    for at least one }j \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   3. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{12}}=...={{(\tau \delta )}_{32}}=0\text{    (No interaction }AB\text{)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{(\tau \delta )}_{ij}}\ne 0\text{    for at least one }ij  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistics for the three tests are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::1.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{A}}=\frac{M{{S}_{A}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{A}}\,\!&amp;lt;/math&amp;gt; is the mean square for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::2.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{B}}=\frac{M{{S}_{B}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{B}}\,\!&amp;lt;/math&amp;gt; is the mean square for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::3.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{AB}}=\frac{M{{S}_{AB}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; is the mean square for interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.11.png|thumb|center|639px|Experimental design for the data in the fifth table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\tau }_{i}}+{{\delta }_{j}}+{{(\tau \delta )}_{ij}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed) with &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; =1, 2, 3; &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th treatment of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (fuel additive) with &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; =1, 2; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect. In order to calculate the test statistics, it is convenient to express the ANOVA model of the equation given above in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;. This can be done as explained next.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represent deviations from the overall mean, the following constraints exist.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or    }{{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}=0\,\!&amp;lt;/math&amp;gt;.) DOE++ displays only the independent effects because only these effects are important to the analysis. The independent effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1] and A[2] respectively because these are the effects associated with factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed).&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{j=1}{\overset{2}{\mathop \sum }}\,{{\delta }_{j}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or    }{{\delta }_{1}}+{{\delta }_{2}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only one of the &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; effects is independent. Assuming that &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is independent, &amp;lt;math&amp;gt;{{\delta }_{2}}=-{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effect as &amp;lt;math&amp;gt;{{H}_{0}}:{{\delta }_{1}}=0\,\!&amp;lt;/math&amp;gt;.) The independent effect &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is displayed as B:B in DOE++.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{and   }\underset{j=1}{\overset{2}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{31}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{12}}+{{(\tau \delta )}_{22}}+{{(\tau \delta )}_{32}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{and   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{12}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{22}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{31}}+{{(\tau \delta )}_{32}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last five equations given above represent four constraints, as only four of these five equations are independent. Therefore, only two out of the six &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are independent, the other four effects can be expressed in terms of these effects. (The null hypothesis to test the significance of interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{21}}=0\,\!&amp;lt;/math&amp;gt;.) The effects &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are displayed as A[1]B and A[2]B respectively in DOE++.&lt;br /&gt;
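The constraint algebra above can be made concrete in a few lines of Python: given the two independent interaction effects, the other four follow from the sum-to-zero conditions over the rows and columns (a minimal sketch; the numeric inputs are only illustrative).

```python
def expand_interaction(td11, td21):
    """Recover all six (tau*delta)_ij effects from the two independent
    ones using the sum-to-zero constraints over i and over j."""
    td31 = -(td11 + td21)                 # column j=1 sums to zero
    return {(1, 1): td11, (2, 1): td21, (3, 1): td31,
            (1, 2): -td11, (2, 2): -td21, (3, 2): -td31}  # rows sum to zero

td = expand_interaction(0.0056, 0.1389)   # illustrative effect values
```

Every row and column of the resulting table sums to zero, so the two free parameters fully determine the interaction.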
&lt;br /&gt;
The regression version of the ANOVA model can be obtained using indicator variables, similar to the case of the single factor experiment in [[ANOVA_for_Designed_Experiments#Fitting_ANOVA_Models|Fitting ANOVA Models]]. Since factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; has three levels, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required which need to be coded as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; has two levels and can be represented using one indicator variable, &amp;lt;math&amp;gt;{{x}_{3}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\delta }_{1}}: &amp;amp; {{x}_{3}}=1 \\ &lt;br /&gt;
\text{Treatment Effect }{{\delta }_{2}}: &amp;amp; {{x}_{3}}=-1  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; interaction will be represented by all possible terms resulting from the product of the indicator variables representing factors &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. There are two such terms here - &amp;lt;math&amp;gt;{{x}_{1}}{{x}_{3}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}{{x}_{3}}\,\!&amp;lt;/math&amp;gt;. The regression version of the ANOVA model can finally be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{\tau }_{1}}\cdot {{x}_{1}}+{{\tau }_{2}}\cdot {{x}_{2}}+{{\delta }_{1}}\cdot {{x}_{3}}+{{(\tau \delta )}_{11}}\cdot {{x}_{1}}{{x}_{3}}+{{(\tau \delta )}_{21}}\cdot {{x}_{2}}{{x}_{3}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In matrix notation this model can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{111}}  \\&lt;br /&gt;
   {{Y}_{211}}  \\&lt;br /&gt;
   {{Y}_{311}}  \\&lt;br /&gt;
   {{Y}_{121}}  \\&lt;br /&gt;
   {{Y}_{221}}  \\&lt;br /&gt;
   {{Y}_{321}}  \\&lt;br /&gt;
   {{Y}_{112}}  \\&lt;br /&gt;
   {{Y}_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{323}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; -1 &amp;amp; 0 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
   {{\delta }_{1}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{11}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{21}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{111}}  \\&lt;br /&gt;
   {{\epsilon }_{211}}  \\&lt;br /&gt;
   {{\epsilon }_{311}}  \\&lt;br /&gt;
   {{\epsilon }_{121}}  \\&lt;br /&gt;
   {{\epsilon }_{221}}  \\&lt;br /&gt;
   {{\epsilon }_{321}}  \\&lt;br /&gt;
   {{\epsilon }_{112}}  \\&lt;br /&gt;
   {{\epsilon }_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{323}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
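As a cross-check, the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix shown above can be generated from the effect coding; this is a minimal NumPy sketch with columns ordered as in the &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; vector (intercept, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{x}_{3}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{x}_{1}}{{x}_{3}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{x}_{2}}{{x}_{3}}\,\!&amp;lt;/math&amp;gt;) and rows ordered with the speed index varying fastest, as in the response vector.

```python
import numpy as np

A_CODE = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}  # x1, x2 for the speed levels
B_CODE = {1: 1, 2: -1}                        # x3 for the fuel additive levels

rows = []
for k in range(1, 4):            # replicate
    for j in (1, 2):             # level of factor B (fuel additive)
        for i in (1, 2, 3):      # level of factor A (speed)
            x1, x2 = A_CODE[i]
            x3 = B_CODE[j]
            rows.append([1, x1, x2, x3, x1 * x3, x2 * x3])
X = np.array(rows)               # 18 x 6 design matrix
```

The first rows of the generated matrix match the rows displayed above for &amp;lt;math&amp;gt;{{Y}_{111}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{Y}_{311}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{Y}_{121}}\,\!&amp;lt;/math&amp;gt;.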
&lt;br /&gt;
&lt;br /&gt;
The vector &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; can be substituted with the response values from the fifth table to get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{111}}  \\&lt;br /&gt;
   {{Y}_{211}}  \\&lt;br /&gt;
   {{Y}_{311}}  \\&lt;br /&gt;
   {{Y}_{121}}  \\&lt;br /&gt;
   {{Y}_{221}}  \\&lt;br /&gt;
   {{Y}_{321}}  \\&lt;br /&gt;
   {{Y}_{112}}  \\&lt;br /&gt;
   {{Y}_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{323}}  \\&lt;br /&gt;
\end{matrix} \right]=\left[ \begin{matrix}&lt;br /&gt;
   17.3  \\&lt;br /&gt;
   18.9  \\&lt;br /&gt;
   17.1  \\&lt;br /&gt;
   18.7  \\&lt;br /&gt;
   19.1  \\&lt;br /&gt;
   18.8  \\&lt;br /&gt;
   17.8  \\&lt;br /&gt;
   18.2  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18.3  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, the sum of squares for the ANOVA model and the extra sum of squares for each of the factors can be calculated. These are used to calculate the mean squares that are used to obtain the test statistics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of Sum of Squares for the Model====&lt;br /&gt;
&lt;br /&gt;
The model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, for the regression version of the ANOVA model can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 9.7311  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; is the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones. Since five effect terms (&amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;) are used in the model, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; is five (&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=5\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 10.7178  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are 18 observed response values, the number of degrees of freedom associated with the total sum of squares is 17 (&amp;lt;math&amp;gt;dof(S{{S}_{T}})=17\,\!&amp;lt;/math&amp;gt;). The error sum of squares can now be obtained:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 10.7178-9.7311 \\ &lt;br /&gt;
= &amp;amp; 0.9867  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are three replicates of the full factorial experiment, all of the error sum of squares is pure error. (This can also be seen from the preceding figure, where each treatment combination of the full factorial design is repeated three times.) The number of degrees of freedom associated with the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 17-5 \\ &lt;br /&gt;
= &amp;amp; 12  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
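Since the full 18-value response vector is not reproduced here, the sum-of-squares identities can still be illustrated with placeholder responses: the sketch below builds the design matrix from the effect coding, computes &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; via the hat matrix, and confirms the partition and the 12 error degrees of freedom (the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; values are synthetic, not the experiment's data).

```python
import numpy as np

def model_ss(X, y):
    """SS_TR, SS_T and SS_E via the hat matrix H and the matrix of ones J."""
    n = len(y)
    H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
    J = np.ones((n, n))
    ss_tr = y @ (H - J / n) @ y               # model sum of squares
    ss_t = y @ (np.eye(n) - J / n) @ y        # total sum of squares
    ss_e = y @ (np.eye(n) - H) @ y            # error sum of squares
    return ss_tr, ss_t, ss_e

# Design matrix from the effect coding (columns: 1, x1, x2, x3, x1*x3, x2*x3).
A = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
B = {1: 1, 2: -1}
X = np.array([[1, *A[i], B[j], A[i][0] * B[j], A[i][1] * B[j]]
              for k in range(3) for j in (1, 2) for i in (1, 2, 3)])

y = np.random.default_rng(0).normal(18.0, 0.5, 18)  # placeholder responses
ss_tr, ss_t, ss_e = model_ss(X, y)
```

For any response vector, the three quantities satisfy &amp;lt;math&amp;gt;S{{S}_{T}}=S{{S}_{TR}}+S{{S}_{E}}\,\!&amp;lt;/math&amp;gt;, and the error degrees of freedom equal the number of observations minus the rank of &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;.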
&lt;br /&gt;
&lt;br /&gt;
====Calculation of Extra Sum of Squares for the Factors====&lt;br /&gt;
&lt;br /&gt;
The sequential sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be calculated as: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}})-S{{S}_{TR}}(\mu ) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}={{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}{{(X_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}^{\prime }{{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}})}^{-1}}X_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}^{\prime }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the matrix containing only the first three columns of the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix. Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-0 \\ &lt;br /&gt;
= &amp;amp; 4.5811-0 \\ &lt;br /&gt;
= &amp;amp; 4.5811  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent effects (&amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;) for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, the degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; are two (&amp;lt;math&amp;gt;dof(S{{S}_{A}})=2\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Similarly, the sum of squares for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{B}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}})-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}}) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}}}-(\frac{1}{18})J]y-{{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 9.4900-4.5811 \\ &lt;br /&gt;
= &amp;amp; 4.9089  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there is one independent effect, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{B}}\,\!&amp;lt;/math&amp;gt; is one (&amp;lt;math&amp;gt;dof(S{{S}_{B}})=1\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{AB}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}},{{(\tau \delta )}_{11}},{{(\tau \delta )}_{21}})-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}) \\ &lt;br /&gt;
= &amp;amp; S{{S}_{TR}}-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}) \\ &lt;br /&gt;
= &amp;amp; 9.7311-9.4900 \\ &lt;br /&gt;
= &amp;amp; 0.2411  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent interaction effects, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; is two (&amp;lt;math&amp;gt;dof(S{{S}_{AB}})=2\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
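The sequential (extra) sums of squares can be computed by fitting nested sub-models on subsets of the columns of &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;. The sketch below uses synthetic responses (the full data vector is not listed in the text) and checks that the sequential sums telescope to the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{A}}+S{{S}_{B}}+S{{S}_{AB}}=S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;.

```python
import numpy as np

def fit_ss(X, y, cols):
    """Model sum of squares for the sub-model built from the given columns."""
    Xs = X[:, cols]
    n = len(y)
    H = Xs @ np.linalg.inv(Xs.T @ Xs) @ Xs.T
    return y @ (H - np.ones((n, n)) / n) @ y

A = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
B = {1: 1, 2: -1}
X = np.array([[1, *A[i], B[j], A[i][0] * B[j], A[i][1] * B[j]]
              for k in range(3) for j in (1, 2) for i in (1, 2, 3)])
y = np.random.default_rng(0).normal(18.0, 0.5, 18)  # placeholder responses

ss_a = fit_ss(X, y, [0, 1, 2]) - fit_ss(X, y, [0])           # add tau1, tau2
ss_b = fit_ss(X, y, [0, 1, 2, 3]) - fit_ss(X, y, [0, 1, 2])  # add delta1
ss_ab = fit_ss(X, y, [0, 1, 2, 3, 4, 5]) - fit_ss(X, y, [0, 1, 2, 3])
```

Note that the intercept-only model contributes zero model sum of squares, which is why the text writes the first term of &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; minus zero.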
&lt;br /&gt;
====Calculation of the Test Statistics====&lt;br /&gt;
&lt;br /&gt;
Knowing the sum of squares, the test statistic for each of the factors can be calculated. Analyzing the interaction first, the test statistic for interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{AB}}= &amp;amp; \frac{M{{S}_{AB}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{0.2411/2}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 1.47  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic, based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 12 degrees of freedom in the denominator, is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{AB}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.7307 \\ &lt;br /&gt;
= &amp;amp; 0.2693  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
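These &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt;-test &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; values can be reproduced numerically. A call such as scipy.stats.f.sf(f0, 2, 12) is the usual route; for 2 numerator degrees of freedom, however, the survival function has the closed form &amp;lt;math&amp;gt;{{(1+2x/d)}^{-d/2}}\,\!&amp;lt;/math&amp;gt;, so the sketch below needs only arithmetic. It uses the sums of squares computed above for the interaction and for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.

```python
def f_sf_2(x, d):
    """P(F > x) for an F distribution with 2 numerator and d denominator
    degrees of freedom (exact closed form for this special case)."""
    return (1 + 2 * x / d) ** (-d / 2)

ms_e = 0.9867 / 12            # error mean square

ms_ab = 0.2411 / 2            # interaction mean square
f0_ab = ms_ab / ms_e          # test statistic, about 1.47
p_ab = f_sf_2(f0_ab, 12)      # about 0.2693

ms_a = 4.5811 / 2             # factor A mean square
f0_a = ms_a / ms_e            # about 27.86
p_a = f_sf_2(f0_a, 12)        # about 0.00003
```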
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;gt; 0.1, we fail to reject &amp;lt;math&amp;gt;{{H}_{0}}:{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt; and conclude that the interaction between speed and fuel additive does not significantly affect the mileage of the sports utility vehicle. DOE++ displays this result in the ANOVA table, as shown in the following figure. In the absence of the interaction, the analysis of main effects becomes important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistic for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{A}}= &amp;amp; \frac{M{{S}_{A}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{A}}/dof(S{{S}_{A}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{4.5811/2}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 27.86  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 12 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{A}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.99997 \\ &lt;br /&gt;
= &amp;amp; 0.00003  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (or speed) has a significant effect on the mileage.&lt;br /&gt;
&lt;br /&gt;
The test statistic for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{B}}= &amp;amp; \frac{M{{S}_{B}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{4.9089/1}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 59.7  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 1 degree of freedom in the numerator and 12 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{B}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.999995 \\ &lt;br /&gt;
= &amp;amp; 0.000005  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}:{{\delta }_{j}}=0\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or fuel additive type) has a significant effect on the mileage.&lt;br /&gt;
Therefore, it can be concluded that speed and fuel additive type affect the mileage of the vehicle significantly. The results are displayed in the ANOVA table of the following figure. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.12.png|thumb|center|645px|Analysis results for the experiment in the fifth table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of Effect Coefficients====&lt;br /&gt;
&lt;br /&gt;
Results for the effect coefficients of the regression version of the ANOVA model are displayed in the Regression Information table in the following figure. Calculations of the results in this table are discussed next. The effect coefficients can be calculated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\hat{\beta }= &amp;amp; {{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y \\ &lt;br /&gt;
= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   18.2889  \\&lt;br /&gt;
   -0.2056  \\&lt;br /&gt;
   0.6944  \\&lt;br /&gt;
   -0.5222  \\&lt;br /&gt;
   0.0056  \\&lt;br /&gt;
   0.1389  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\hat{\mu }=18.2889\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}=-0.2056\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\hat{\tau }}_{2}}=0.6944\,\!&amp;lt;/math&amp;gt; etc. As mentioned previously, these coefficients are displayed as Intercept, A[1] and A[2], respectively, with the labels reflecting the name of the factor used in the experimental design. The standard error for each of these estimates is obtained using the diagonal elements of the variance-covariance matrix &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
C= &amp;amp; {{{\hat{\sigma }}}^{2}}{{({{X}^{\prime }}X)}^{-1}} \\ &lt;br /&gt;
= &amp;amp; M{{S}_{E}}\cdot {{({{X}^{\prime }}X)}^{-1}} \\ &lt;br /&gt;
= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   0.0046 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0.0091 &amp;amp; -0.0046 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; -0.0046 &amp;amp; 0.0091 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0.0046 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0.0091 &amp;amp; -0.0046  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; -0.0046 &amp;amp; 0.0091  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
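The structure of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; can be verified directly: for this balanced effect-coded design, &amp;lt;math&amp;gt;M{{S}_{E}}\cdot {{({{X}^{\prime }}X)}^{-1}}\,\!&amp;lt;/math&amp;gt; reproduces the diagonal and off-diagonal values shown above (a NumPy sketch; &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; is rebuilt from the coding given earlier).

```python
import numpy as np

A = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
B = {1: 1, 2: -1}
X = np.array([[1, *A[i], B[j], A[i][0] * B[j], A[i][1] * B[j]]
              for k in range(3) for j in (1, 2) for i in (1, 2, 3)])

ms_e = 0.9867 / 12                       # error mean square from the ANOVA
C = ms_e * np.linalg.inv(X.T @ X)        # variance-covariance matrix
se_tau1 = np.sqrt(C[1, 1])               # standard error of tau1-hat
```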
&lt;br /&gt;
&lt;br /&gt;
For example, the standard error for &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
se({{{\hat{\tau }}}_{1}})= &amp;amp; \sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; \sqrt{0.0091} \\ &lt;br /&gt;
= &amp;amp; 0.0956  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\hat{\tau }}}_{1}}}{se({{{\hat{\tau }}}_{1}})} \\ &lt;br /&gt;
= &amp;amp; \frac{-0.2056}{0.0956} \\ &lt;br /&gt;
= &amp;amp; -2.1506  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic is obtained from the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 12 degrees of freedom.&lt;br /&gt;
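This &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value can be computed from the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 12 degrees of freedom; scipy.stats.t.sf is the usual tool, and the standard-library sketch below integrates the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; density numerically instead. The resulting two-sided value is a little above 0.05.

```python
import math

def t_sf(x, df, steps=20000, upper=60.0):
    """One-sided tail P(T > x) for Student's t with df degrees of freedom,
    via Simpson integration of the density (steps must be even)."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    pdf = lambda u: c * (1 + u * u / df) ** (-(df + 1) / 2)
    h = (upper - x) / steps
    s = pdf(x) + pdf(upper)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * pdf(x + k * h)
    return s * h / 3

t0 = -2.1506                          # t statistic for tau1-hat
p_two_sided = 2 * t_sf(abs(t0), 12)   # roughly 0.05
```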
 &lt;br /&gt;
&lt;br /&gt;
Confidence intervals on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; can also be calculated. The 90% limits on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\hat{\tau }}}_{1}}\pm {{t}_{\alpha /2,n-(k+1)}}\sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; {{{\hat{\tau }}}_{1}}\pm {{t}_{0.05,12}}\sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; -0.2056\pm 0.1704  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Thus, the 90% limits on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-0.3760\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-0.0352\,\!&amp;lt;/math&amp;gt;. Results for other coefficients are obtained in a similar manner.&lt;br /&gt;
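The interval arithmetic can be checked with a short script. The critical value &amp;lt;math&amp;gt;{{t}_{0.05,12}}\approx 1.7823\,\!&amp;lt;/math&amp;gt; is taken from standard tables (scipy.stats.t.ppf(0.95, 12) returns the same), and &amp;lt;math&amp;gt;{{C}_{22}}\,\!&amp;lt;/math&amp;gt; is recomputed from the design matrix rather than read off as the rounded 0.0091.

```python
import numpy as np

A = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
B = {1: 1, 2: -1}
X = np.array([[1, *A[i], B[j], A[i][0] * B[j], A[i][1] * B[j]]
              for k in range(3) for j in (1, 2) for i in (1, 2, 3)])

ms_e = 0.9867 / 12
C = ms_e * np.linalg.inv(X.T @ X)        # variance-covariance matrix
t_crit = 1.7823                          # t_{0.05,12} from tables
tau1_hat = -0.2056

half_width = t_crit * np.sqrt(C[1, 1])   # about 0.1704
lower, upper = tau1_hat - half_width, tau1_hat + half_width
```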
&lt;br /&gt;
===Least Squares Means===&lt;br /&gt;
The estimated mean response corresponding to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of any factor is obtained using the adjusted estimated mean which is also called the least squares mean. For example, the mean response corresponding to the first level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\mu +{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;. An estimate of this is &amp;lt;math&amp;gt;\hat{\mu }+{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; or (&amp;lt;math&amp;gt;18.2889+(-0.2056)=18.0833\,\!&amp;lt;/math&amp;gt;). Similarly, the estimated response at the third level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\hat{\mu }+{{\hat{\tau }}_{3}}\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\hat{\mu }+(-{{\hat{\tau }}_{1}}-{{\hat{\tau }}_{2}})\,\!&amp;lt;/math&amp;gt; or (&amp;lt;math&amp;gt;18.2889+(0.2056-0.6944)=17.8001\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
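The least squares means arithmetic is easy to check; the sketch below recovers &amp;lt;math&amp;gt;{{\hat{\tau }}_{3}}\,\!&amp;lt;/math&amp;gt; from the sum-to-zero constraint and reproduces the two values given above.

```python
mu_hat = 18.2889                    # estimated overall mean (Intercept)
tau = {1: -0.2056, 2: 0.6944}       # independent effects A[1], A[2]
tau[3] = -(tau[1] + tau[2])         # sum-to-zero constraint gives tau3-hat

lsm_A1 = mu_hat + tau[1]            # least squares mean at level 1 of A
lsm_A3 = mu_hat + tau[3]            # least squares mean at level 3 of A
```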
&lt;br /&gt;
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
As in the case of single factor experiments, plots of residuals can also be used to check for model adequacy in factorial experiments. Box-Cox transformations are also available in DOE++ for factorial experiments.&lt;br /&gt;
&lt;br /&gt;
==Factorial Experiments with a Single Replicate==&lt;br /&gt;
&lt;br /&gt;
If a factorial experiment is run with only a single replicate, then it is not possible to test hypotheses about the main effects and interactions, as the error sum of squares cannot be obtained. This is because the number of observations in a single replicate equals the number of terms in the ANOVA model. Hence, the model fits the data perfectly and no degrees of freedom are available to obtain the error sum of squares. For example, if the two factor experiment to study the effect of speed and fuel additive type on mileage was run only as a single replicate, there would be only six response values. The regression version of the ANOVA model has six terms and therefore will fit the six response values perfectly. The error sum of squares, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt;, for this case will be equal to zero. In some single replicate factorial experiments it is possible to assume that the interaction effects are negligible. In this case, the interaction mean square can be used as the error mean square, &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt;, to test hypotheses about the main effects. However, such assumptions are not applicable in all cases and should be used carefully.&lt;br /&gt;
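The saturation argument can be seen numerically: with one replicate the model matrix is square and full rank, so the hat matrix is the identity, the fitted values reproduce the data exactly for any response vector, and &amp;lt;math&amp;gt;S{{S}_{E}}=0\,\!&amp;lt;/math&amp;gt; (a NumPy sketch using the same effect coding as before).

```python
import numpy as np

A = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
B = {1: 1, 2: -1}
# One replicate only: six runs, six model terms.
X = np.array([[1, *A[i], B[j], A[i][0] * B[j], A[i][1] * B[j]]
              for j in (1, 2) for i in (1, 2, 3)])

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
# H equals the identity, so the residuals y - Hy vanish for any y.
```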
&lt;br /&gt;
&lt;br /&gt;
==Blocking==&lt;br /&gt;
&lt;br /&gt;
Many times a factorial experiment requires so many runs that not all of them can be completed under homogeneous conditions. This may lead to inclusion of the effects of &#039;&#039;nuisance factors&#039;&#039; into the investigation. Nuisance factors are factors that have an effect on the response but are not of primary interest to the investigator. For example, two replicates of a two factor factorial experiment require eight runs. If four runs require the duration of one day to be completed, then the total experiment will require two days to be completed. The difference in the conditions on the two days may introduce effects on the response that are not the result of the two factors being investigated. Therefore, the day is a nuisance factor for this experiment.&lt;br /&gt;
Nuisance factors can be accounted for using &#039;&#039;blocking&#039;&#039;. In blocking, experimental runs are separated based on levels of the nuisance factor. For the case of the two factor factorial experiment (where the day is a nuisance factor), separation can be made into two groups or &#039;&#039;blocks&#039;&#039;: runs that are carried out on the first day belong to block 1, and runs that are carried out on the second day belong to block 2. Thus, within each block conditions are the same with respect to the nuisance factor. As a result, each block investigates the effects of the factors of interest, while the difference in the blocks measures the effect of the nuisance factor. &lt;br /&gt;
For the example of the two factor factorial experiment, a possible assignment of runs to the blocks could be as follows: one replicate of the experiment is assigned to block 1 and the second replicate is assigned to block 2 (so that each block contains all possible treatment combinations). Within each block, runs are subjected to randomization (i.e., randomization is now restricted to the runs within a block). Such a design, where each block contains one complete replicate and the treatments within a block are subjected to randomization, is called a &#039;&#039;randomized complete block design&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
In summary, blocking should always be used to account for the effects of nuisance factors if it is not possible to hold the nuisance factor at a constant level through all of the experimental runs. Randomization should be used within each block to counter the effects of any unknown variability that may still be present.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
Consider the experiment of the fifth table where the mileage of a sports utility vehicle was investigated for the effects of speed and fuel additive type. Now assume that the three replicates for this experiment were carried out on three different vehicles. To ensure that the variation from one vehicle to another does not have an effect on the analysis, each vehicle is considered as one block. See the experiment design in the following figure.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.13.png|thumb|center|643px|Randomized complete block design for the experiment in the fifth table using three blocks.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purpose of the analysis, the block is treated as a main effect, except that it is assumed that interactions between the block and the other main effects do not exist. Therefore, this experiment has one block main effect (having three levels: block 1, block 2 and block 3), two main effects (speed, having three levels; and fuel additive type, having two levels) and one interaction effect (the speed-fuel additive interaction). Let &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; represent the block effects. The hypothesis test on the block main effect checks if there is significant variation from one vehicle to another. The statements for the hypothesis test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\zeta }_{1}}={{\zeta }_{2}}={{\zeta }_{3}}=0\text{   (no main effect of block)} \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\zeta }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The test statistic for this test is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{Block}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{Block}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the block main effect and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. The hypothesis statements and test statistics to test the significance of factors &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed), &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (fuel additive) and the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; (speed-fuel additive interaction) can be obtained as explained in the [[ANOVA_for_Designed_Experiments#Example_2| example]]. The ANOVA model for this example can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\zeta }_{i}}+{{\tau }_{j}}+{{\delta }_{k}}+{{(\tau \delta )}_{jk}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean effect&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of the block (&amp;lt;math&amp;gt;i=1,2,3\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;j=1,2,3\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;k=1,2\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*and &amp;lt;math&amp;gt;{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt; represents the random error terms (which are assumed to be normally distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to calculate the test statistics, it is convenient to express the ANOVA model of the equation given above in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;. This can be done as explained next.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; are defined as deviations from the overall mean, the following constraints exist.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{i=1}{\overset{3}{\mathop \sum }}\,{{\zeta }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\zeta }_{1}}+{{\zeta }_{2}}+{{\zeta }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\zeta }_{3}}=-({{\zeta }_{1}}+{{\zeta }_{2}})\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of the blocks can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{\zeta }_{1}}={{\zeta }_{2}}=0\,\!&amp;lt;/math&amp;gt;.) In DOE++, the independent block effects, &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as Block[1] and Block[2], respectively.&lt;br /&gt;
&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{j=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{j}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;. The independent effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1] and A[2], respectively.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{k=1}{\overset{2}{\mathop \sum }}\,{{\delta }_{k}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\delta }_{1}}+{{\delta }_{2}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only one of the &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; effects is independent. Assuming that &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is independent, &amp;lt;math&amp;gt;{{\delta }_{2}}=-{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;. The independent effect, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, is displayed as B:B.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{j=1}{\overset{3}{\mathop \sum }}\,{{(\tau \delta )}_{jk}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{and   }\underset{k=1}{\overset{2}{\mathop \sum }}\,{{(\tau \delta )}_{jk}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{31}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{12}}+{{(\tau \delta )}_{22}}+{{(\tau \delta )}_{32}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{and   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{12}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{21}}+{{(\tau \delta )}_{22}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{31}}+{{(\tau \delta )}_{32}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last five equations given above represent four constraints as only four of the five equations are independent. Therefore, only two out of the six &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are independent, we can express the other four effects in terms of these effects. The independent effects, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1]B and A[2]B, respectively.&lt;br /&gt;
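The dependence just described can be checked numerically. The following sketch assigns arbitrary, purely hypothetical values to the two independent interaction effects, derives the remaining four from the sum-to-zero constraints, and verifies that every constraint equation is satisfied:

```python
# Hypothetical values for the two independent interaction effects
td11, td21 = 0.7, -0.3

# The remaining four effects follow from the sum-to-zero constraints
td31 = -(td11 + td21)
td12 = -td11
td22 = -td21
td32 = td11 + td21

td = {(1, 1): td11, (2, 1): td21, (3, 1): td31,
      (1, 2): td12, (2, 2): td22, (3, 2): td32}

# Every column (sum over j) and every row (sum over k) must be zero
for k in (1, 2):
    assert abs(sum(td[(j, k)] for j in (1, 2, 3))) < 1e-12
for j in (1, 2, 3):
    assert abs(sum(td[(j, k)] for k in (1, 2))) < 1e-12
print("all constraints satisfied")
```

Any choice of the two independent effects produces a full set of six effects that meets all five constraint equations.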
&lt;br /&gt;
The regression version of the ANOVA model can be obtained using indicator variables. Since the block has three levels, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required, which need to be coded as shown next: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Block 1}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0\text{ } \\ &lt;br /&gt;
 &amp;amp; \text{Block 2}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{         } \\ &lt;br /&gt;
 &amp;amp; \text{Block 3}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{   }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; has three levels and two indicator variables, &amp;lt;math&amp;gt;{{x}_{3}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{4}}\,\!&amp;lt;/math&amp;gt;, are required:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{3}}=1,\text{   }{{x}_{4}}=0 \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{3}}=0,\text{   }{{x}_{4}}=1\text{           } \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{3}}=-1,\text{   }{{x}_{4}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; has two levels and can be represented using one indicator variable, &amp;lt;math&amp;gt;{{x}_{5}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Treatment Effect }{{\delta }_{1}}: &amp;amp; {{x}_{5}}=1 \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\delta }_{2}}: &amp;amp; {{x}_{5}}=-1  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; interaction will be represented by &amp;lt;math&amp;gt;{{x}_{3}}{{x}_{5}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{4}}{{x}_{5}}\,\!&amp;lt;/math&amp;gt;. The regression version of the ANOVA model can finally be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{\zeta }_{1}}\cdot {{x}_{1}}+{{\zeta }_{2}}\cdot {{x}_{2}}+{{\tau }_{1}}\cdot {{x}_{3}}+{{\tau }_{2}}\cdot {{x}_{4}}+{{\delta }_{1}}\cdot {{x}_{5}}+{{(\tau \delta )}_{11}}\cdot {{x}_{3}}{{x}_{5}}+{{(\tau \delta )}_{21}}\cdot {{x}_{4}}{{x}_{5}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
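The indicator coding above can be collected into a small helper. This is an illustrative sketch only (the function name `effect_code` is hypothetical, not from the text); the row built below corresponds to block 1, speed level 2, fuel additive level 1:

```python
def effect_code(level, n_levels):
    """Sum-to-zero (effect) coding: level i of an n-level factor maps to
    n-1 indicator values; the last level is coded as all -1s."""
    if level == n_levels:
        return [-1] * (n_levels - 1)
    x = [0] * (n_levels - 1)
    x[level - 1] = 1
    return x

blk = effect_code(1, 3)          # block 1   -> [1, 0]
a   = effect_code(2, 3)          # speed 2   -> [0, 1]
b   = effect_code(1, 2)          # additive 1 -> [1]
ab  = [a[0] * b[0], a[1] * b[0]] # interaction columns x3*x5 and x4*x5

row = [1] + blk + a + b + ab     # one row of the X matrix
print(row)  # [1, 1, 0, 0, 1, 1, 0, 1]
```

This matches the second row of the X matrix shown below in the text.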
&lt;br /&gt;
&lt;br /&gt;
In matrix notation this model can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:or:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   17.3  \\&lt;br /&gt;
   18.9  \\&lt;br /&gt;
   17.1  \\&lt;br /&gt;
   18.7  \\&lt;br /&gt;
   19.1  \\&lt;br /&gt;
   18.8  \\&lt;br /&gt;
   17.8  \\&lt;br /&gt;
   18.2  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18.3  \\&lt;br /&gt;
\end{matrix} \right]=\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; -1 &amp;amp; 0 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\zeta }_{1}}  \\&lt;br /&gt;
   {{\zeta }_{2}}  \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
   {{\delta }_{1}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{11}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{21}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{111}}  \\&lt;br /&gt;
   {{\epsilon }_{121}}  \\&lt;br /&gt;
   {{\epsilon }_{131}}  \\&lt;br /&gt;
   {{\epsilon }_{112}}  \\&lt;br /&gt;
   {{\epsilon }_{122}}  \\&lt;br /&gt;
   {{\epsilon }_{132}}  \\&lt;br /&gt;
   {{\epsilon }_{211}}  \\&lt;br /&gt;
   {{\epsilon }_{221}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{332}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, the sum of squares for the ANOVA model and the extra sum of squares for each of the factors can be calculated. These are used to calculate the mean squares that are used to obtain the test statistics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Sum of Squares for the Model====&lt;br /&gt;
&lt;br /&gt;
The model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, for the ANOVA model of this example can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; {{y}^{\prime }}[H-(\frac{1}{18})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 9.9256  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since seven effect terms (&amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;) are used in the model, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; is seven (&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=7\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The total sum of squares can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; {{y}^{\prime }}[I-(\frac{1}{18})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 10.7178  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are 18 observed response values, the number of degrees of freedom associated with the total sum of squares is 17 (&amp;lt;math&amp;gt;dof(S{{S}_{T}})=17\,\!&amp;lt;/math&amp;gt;). The error sum of squares can now be obtained:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 10.7178-9.9256 \\ &lt;br /&gt;
= &amp;amp; 0.7922  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 17-7 \\ &lt;br /&gt;
= &amp;amp; 10  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are no true replicates of the treatments (as can be seen from the design in the previous figure, where each treatment is run only once within each block), all of the error sum of squares is due to lack of fit. The lack of fit arises because the model used is not a full model, since it is assumed that there are no interactions between the blocks and the other effects.&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Extra Sum of Squares for the Factors====&lt;br /&gt;
&lt;br /&gt;
The sequential sum of squares for the blocks can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{Block}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}})-S{{S}_{TR}}(\mu ) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y-0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones, &amp;lt;math&amp;gt;{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the hat matrix, which is calculated using &amp;lt;math&amp;gt;{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}={{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}{{(X_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}^{\prime }{{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}})}^{-1}}X_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}^{\prime }\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the matrix containing only the first three columns of the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix. Thus&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{Block}}= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y-0 \\ &lt;br /&gt;
= &amp;amp; 0.1944-0 \\ &lt;br /&gt;
= &amp;amp; 0.1944  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent block effects, &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{Blocks}}\,\!&amp;lt;/math&amp;gt; is two (&amp;lt;math&amp;gt;dof(S{{S}_{Blocks}})=2\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Similarly, the sequential sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}},{{\tau }_{1}},{{\tau }_{2}})-S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}}) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}},{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-{{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 4.7756-0.1944 \\ &lt;br /&gt;
= &amp;amp; 4.5812  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The sequential sums of squares for the other effects are obtained as &amp;lt;math&amp;gt;S{{S}_{B}}=4.9089\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{AB}}=0.2411\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
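As a quick sanity check on the values quoted in the text, the sequential sums of squares for the blocks, the two factors and the interaction should add up to the model sum of squares:

```python
# Sequential sums of squares from the text
parts = {"Block": 0.1944, "A": 4.5812, "B": 4.9089, "AB": 0.2411}
ss_tr = 9.9256  # model sum of squares from the text

total = round(sum(parts.values()), 4)
print(total)  # 9.9256
```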
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Test Statistics====&lt;br /&gt;
&lt;br /&gt;
Knowing the sum of squares, the test statistics for each of the factors can be calculated. For example, the test statistic for the main effect of the blocks is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{Block}}= &amp;amp; \frac{M{{S}_{Block}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{Block}}/dof(S{{S}_{Blocks}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{0.1944/2}{0.7922/10} \\ &lt;br /&gt;
= &amp;amp; 1.227  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 10 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{Block}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.6663 \\ &lt;br /&gt;
= &amp;amp; 0.3337  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;gt; 0.1, we fail to reject &amp;lt;math&amp;gt;{{H}_{0}}:{{\zeta }_{i}}=0\,\!&amp;lt;/math&amp;gt; and conclude that there is no significant variation in the mileage from one vehicle to the other. Statistics to test the significance of other factors can be calculated in a similar manner. The complete analysis results obtained from DOE++ for this experiment are presented in the following figure.  &lt;br /&gt;
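The block test statistic and its p value can be reproduced numerically. The sketch below assumes SciPy is available and plugs in the (rounded) sums of squares quoted above, so the p value matches the text to about three decimal places:

```python
from scipy.stats import f

ss_block, dof_block = 0.1944, 2   # block sum of squares and its dof
ss_e, dof_e = 0.7922, 10          # error sum of squares and its dof

f0 = (ss_block / dof_block) / (ss_e / dof_e)
p_value = f.sf(f0, dof_block, dof_e)   # 1 - P(F <= f0)

print(round(f0, 3))       # 1.227
print(round(p_value, 3))  # 0.334
```

Since the p value exceeds the 0.1 significance level, the block (vehicle) effect is not significant.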
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.14.png|thumb|center|644px|Analysis results for the experiment in the [[ANOVA_for_Designed_Experiments#Example_3| example]].]]&lt;br /&gt;
&lt;br /&gt;
==Use of Regression to Calculate Sum of Squares==&lt;br /&gt;
&lt;br /&gt;
This section explains the reason behind the use of regression in DOE++ in all calculations related to the sum of squares. A number of textbooks present the method of direct summation to calculate the sum of squares. However, this method is applicable only to balanced designs and may give incorrect results for unbalanced designs. For example, the sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; in a balanced factorial experiment with two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, is given as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{A}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,{{n}_{b}}n{{({{{\bar{y}}}_{i..}}-{{{\bar{y}}}_{...}})}^{2}} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{b}}n}-\frac{y_{...}^{2}}{{{n}_{a}}{{n}_{b}}n}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{n}_{b}}\,\!&amp;lt;/math&amp;gt; represents the levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; represents the number of samples for each combination of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The term &amp;lt;math&amp;gt;{{\bar{y}}_{i..}}\,\!&amp;lt;/math&amp;gt; is the mean value for the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{y}_{i..}}\,\!&amp;lt;/math&amp;gt; is the sum of all observations at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{y}_{...}}\,\!&amp;lt;/math&amp;gt; is the sum of all observations.&lt;br /&gt;
&lt;br /&gt;
The analogous term to calculate &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; in the case of an unbalanced design is given as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{A}}=\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\frac{y_{...}^{2}}{{{n}_{..}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{n}_{i.}}\,\!&amp;lt;/math&amp;gt; is the number of observations at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{n}_{..}}\,\!&amp;lt;/math&amp;gt; is the total number of observations. Similarly, to calculate the sum of squares for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;, the formulas are given as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{B}}= &amp;amp; \underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}-\frac{y_{...}^{2}}{{{n}_{..}}} \\ &lt;br /&gt;
 &amp;amp; S{{S}_{AB}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{ij.}^{2}}{{{n}_{ij}}}-\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}+\frac{y_{...}^{2}}{{{n}_{..}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Applying these relations to the unbalanced data of the last table, the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{AB}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{ij.}^{2}}{{{n}_{ij}}}-\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}+\frac{y_{...}^{2}}{{{n}_{..}}} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; \left( {{6}^{2}}+{{4}^{2}}+\frac{{{(42+6)}^{2}}}{2}+{{12}^{2}} \right)-\left( \frac{{{10}^{2}}}{2}+\frac{{{60}^{2}}}{3} \right) \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; -\left( \frac{{{54}^{2}}}{3}+\frac{{{16}^{2}}}{2} \right)+\frac{{{70}^{2}}}{5} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; -22  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
which is obviously incorrect, since a sum of squares cannot be negative. For a detailed discussion of this issue, refer to [[DOE References|Searle (1997, 1971)]].&lt;br /&gt;
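The direct-summation arithmetic above can be reproduced for the unbalanced data (the cell observations are those used in the calculation); the negative result signals that the formula is not valid here:

```python
# Cell observations for the unbalanced 2x2 data (A level, B level) -> values
cells = {(1, 1): [6], (1, 2): [4], (2, 1): [6, 42], (2, 2): [12]}

all_obs = [x for v in cells.values() for x in v]
grand = sum(all_obs) ** 2 / len(all_obs)   # y...^2 / n..

def margin_term(axis):
    """Sum over levels of (marginal total)^2 / (marginal count) for one factor."""
    out = 0.0
    for level in (1, 2):
        obs = [x for key, v in cells.items() if key[axis] == level for x in v]
        out += sum(obs) ** 2 / len(obs)
    return out

cell_term = sum(sum(v) ** 2 / len(v) for v in cells.values())
ss_ab = cell_term - margin_term(0) - margin_term(1) + grand
print(ss_ab)  # -22.0
```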
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.6.png|thumb|center|400px|Example of an unbalanced design.]]&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
The correct sum of squares can be calculated as shown next. The &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrices for the design of the last table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   4  \\&lt;br /&gt;
   6  \\&lt;br /&gt;
   12  \\&lt;br /&gt;
   42  \\&lt;br /&gt;
\end{matrix} \right]\text{   and   }X=\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{AB}}={{y}^{\prime }}[H-(1/5)J]y-{{y}^{\prime }}[{{H}_{\tilde{\ }AB}}-(1/5)J]y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; is the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones. The matrix &amp;lt;math&amp;gt;{{H}_{\tilde{\ }AB}}\,\!&amp;lt;/math&amp;gt; can be calculated using &amp;lt;math&amp;gt;{{H}_{\tilde{\ }AB}}={{X}_{\tilde{\ }AB}}{{(X_{\tilde{\ }AB}^{\prime }{{X}_{\tilde{\ }AB}})}^{-1}}X_{\tilde{\ }AB}^{\prime }\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;{{X}_{\tilde{\ }AB}}\,\!&amp;lt;/math&amp;gt; is the design matrix, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, excluding the last column that represents the interaction effect &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;. Thus, the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{AB}}= &amp;amp; {{y}^{\prime }}[H-(1/5)J]y-{{y}^{\prime }}[{{H}_{\tilde{\ }AB}}-(1/5)J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 368-339.4286 \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 28.5714  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is the value calculated by DOE++ (see the first figure below for the experiment design and the second figure below for the analysis).&lt;br /&gt;
&lt;br /&gt;
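The extra sum of squares computation above can be sketched in Python with NumPy. This is an illustrative sketch only: the first entry of the response vector is elided in the text, so the value marked HYPOTHETICAL below is a placeholder and the result will not reproduce the 28.5714 obtained with the actual data.&lt;br /&gt;

```python
import numpy as np

# Extra sum of squares for the interaction AB in the unbalanced design above.
# Columns of X: intercept, A, B, AB (effect-coded), as shown in the text.
X = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1],
              [1, -1,  1, -1]], dtype=float)
y = np.array([10, 4, 6, 12, 42], dtype=float)  # first value (10) is HYPOTHETICAL

def hat(M):
    """Hat (projection) matrix M (M'M)^-1 M'."""
    return M @ np.linalg.inv(M.T @ M) @ M.T

n = len(y)
J = np.ones((n, n))       # matrix of ones
X_noAB = X[:, :3]         # design matrix excluding the AB column

# SS_AB = y'[H - (1/n)J]y - y'[H_~AB - (1/n)J]y
ss_ab = (y @ (hat(X) - J / n) @ y
         - y @ (hat(X_noAB) - J / n) @ y)
print(ss_ab)
```

The difference of the two quadratic forms equals the drop in the error sum of squares when the AB column is added, which is why it serves as the interaction sum of squares for an unbalanced design.&lt;br /&gt;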
&lt;br /&gt;
[[Image:doe6.15.png|thumb|center|471px|Unbalanced experimental design for the data in the last table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.16.png|thumb|center|471px|Analysis for the unbalanced data in the last table.]]&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=ANOVA_for_Designed_Experiments&amp;diff=65258</id>
		<title>ANOVA for Designed Experiments</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=ANOVA_for_Designed_Experiments&amp;diff=65258"/>
		<updated>2017-08-26T02:52:10Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Box-Cox Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:Doebook|5}}&lt;br /&gt;
In [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], methods were presented to model the relationship between a response and the associated factors (referred to as predictor variables in the context of regression) based on an observed data set. Such studies, where observed values of the response are used to establish an association between the response and the factors, are called &#039;&#039;observational studies&#039;&#039;. However, in the case of observational studies, it is difficult to establish a cause-and-effect relationship between the observed factors and the response. This is because a number of alternative justifications can be used to explain the observed change in the response values. For example, a regression model fitted to data on the population of cities and road accidents might show a positive regression relation. However, this relation does not imply that an increase in a city&#039;s population causes an increase in road accidents. It could be that a number of other factors such as road conditions, traffic control and the degree to which the residents of the city follow the traffic rules affect the number of road accidents in the city and the increase in the number of accidents seen in the study is caused by these factors. Since the observational study does not take the effect of these factors into account, the assumption that an increase in a city&#039;s population will lead to an increase in road accidents is not a valid one. For example, the population of a city may increase but road accidents in the city may decrease because of better traffic control. To establish a cause-and-effect relationship, the study should be conducted in such a way that the effect of all other factors is excluded from the investigation.&lt;br /&gt;
&lt;br /&gt;
The studies that enable the establishment of a cause-and-effect relationship are called &#039;&#039;experiments&#039;&#039;. In experiments the response is investigated by studying only the effect of the factor(s) of interest and excluding all other effects that may provide alternative justifications to the observed change in response. This is done in two ways. First, the levels of the factors to be investigated are carefully selected and then strictly controlled during the execution of the experiment. The aspect of selecting what factor levels should be investigated in the experiment is called the &#039;&#039;design&#039;&#039; of the experiment. The second distinguishing feature of experiments is that observations in an experiment are recorded in a random order. By doing this, it is hoped that the effect of all other factors not being investigated in the experiment will cancel out so that the change in the response is the result of only the investigated factors. Using these two techniques, experiments tend to ensure that alternative justifications to observed changes in the response are ruled out, thereby enabling the establishment of a cause-and-effect relationship between the response and the investigated factors.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Randomization&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The aspect of recording observations in an experiment in a random order is referred to as &#039;&#039;randomization&#039;&#039;. Specifically, randomization is the process of assigning the various levels of the investigated factors to the experimental units in a random fashion. An experiment is said to be &#039;&#039;completely randomized&#039;&#039; if every experimental unit has an equal probability of being subjected to any level of a factor. The importance of randomization can be illustrated using an example. Consider an experiment where the effect of the speed of a lathe machine on the surface finish of a product is being investigated. In order to save time, the experimenter records surface finish values by running the lathe machine continuously and recording observations in the order of increasing speeds. The analysis of the experiment data shows that an increase in lathe speeds causes a decrease in the quality of surface finish. However, the results of the experiment are disputed by the lathe operator who claims that he has been able to obtain better surface finish quality in the products by operating the lathe machine at higher speeds. It is later found that the faulty results were caused by overheating of the tool used in the machine. Since the lathe was run continuously in order of increasing speed, the observations were recorded in order of increasing tool temperature. This problem could have been avoided if the experimenter had randomized the experiment and taken readings at the various lathe speeds in a random fashion. This would require the experimenter to stop and restart the machine at every observation, thereby keeping the temperature of the tool within a reasonable range. Randomization would have ensured that the effect of heating of the machine tool was not included in the experiment.&lt;br /&gt;
&lt;br /&gt;
==Analysis of Single Factor Experiments==&lt;br /&gt;
&lt;br /&gt;
As explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], the analysis of observational studies involves the use of regression models. The analysis of experimental studies involves the use of analysis of variance (ANOVA) models. For a comparison of the two models see [[ANOVA_for_Designed_Experiments#Fitting_ANOVA_Models|Fitting ANOVA Models]]. In single factor experiments, ANOVA models are used to compare the mean response values at different levels of the factor. Each level of the factor is investigated to see if the response is significantly different from the response at other levels of the factor.  The analysis of single factor experiments is often referred to as &#039;&#039;one-way ANOVA&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To illustrate the use of ANOVA models in the analysis of experiments, consider a single factor experiment where the analyst wants to see if the surface finish of certain parts is affected by the speed of a lathe machine. Data is collected for three speeds (or three treatments). Each treatment is replicated four times. Therefore, this experiment design is balanced. Surface finish values recorded using randomization are shown in the following table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.1.png|thumb|center|400px|Surface finish values for three speeds of a lathe machine.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be stated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}={{\mu }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model assumes that the response at each factor level, &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, is the sum of the mean response at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, and a random error term, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. The subscript &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; denotes the factor level while the subscript &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; denotes the replicate. If there are &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of the factor and &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates at each level then &amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j=1,2,...,m\,\!&amp;lt;/math&amp;gt;. The random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are assumed to be normally and independently distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. Therefore, the response at each level can be thought of as a normally distributed population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and constant variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. The equation given above is referred to as the &#039;&#039;means model&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The means model can also be written using &amp;lt;math&amp;gt;{{\mu }_{i}}=\mu +{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the effect due to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Such an ANOVA model is called the &#039;&#039;effects model&#039;&#039;. In the effects model, the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, represent the deviations from the overall mean, &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;. Therefore, the following constraint exists on the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;s: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Fitting ANOVA Models===&lt;br /&gt;
&lt;br /&gt;
To fit ANOVA models and carry out hypothesis testing in single factor experiments, it is convenient to express the effects model in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt; (that was used for multiple linear regression models in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This can be done as shown next. Using the effects model, the ANOVA model for the single factor experiment in the first table can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment effect. There are three treatments in the first table (500, 600 and 700). Therefore, there are three treatment effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{3}}\,\!&amp;lt;/math&amp;gt;. The following constraint exists for these effects:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   } {{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the first treatment, the ANOVA model for the single factor experiment in the above table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{1j}}=\mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+0\cdot {{\tau }_{3}}+{{\epsilon }_{1j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;, the model for the first treatment is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}-0\cdot ({{\tau }_{1}}+{{\tau }_{2}})+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{or   }{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Models for the second and third treatments can be obtained in a similar way. The models for the three treatments are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{First Treatment}: &amp;amp; {{Y}_{1j}}=1\cdot \mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{Second Treatment}: &amp;amp; {{Y}_{2j}}=1\cdot \mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{2j}} \\ &lt;br /&gt;
\text{Third Treatment}: &amp;amp; {{Y}_{3j}}=1\cdot \mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{3j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The coefficients of the treatment effects &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; can be expressed using two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the indicator variables &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, the ANOVA model for the data in the first table now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{x}_{1}}\cdot {{\tau }_{1}}+{{x}_{2}}\cdot {{\tau }_{2}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation can be rewritten by including subscripts &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; (for the level of the factor) and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; (for the replicate number) as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{x}_{i1}}\cdot {{\tau }_{1}}+{{x}_{i2}}\cdot {{\tau }_{2}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation given above represents the &amp;quot;regression version&amp;quot; of the ANOVA model.&lt;br /&gt;
&lt;br /&gt;
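The sum-to-zero coding just described can be expressed as a small Python helper. This is a sketch of the coding scheme only; the function name &lt;code&gt;effect_code&lt;/code&gt; is ours and not part of any package.&lt;br /&gt;

```python
# Sketch of the sum-to-zero (effect) coding used above: a three-level
# factor is represented by two indicator variables x1 and x2.

def effect_code(level, levels):
    """Return the indicator-variable row for one factor level.

    The first (len(levels) - 1) levels each get their own indicator;
    the last level is coded as -1 on every indicator, which enforces
    the constraint that the treatment effects sum to zero.
    """
    k = len(levels) - 1            # number of indicator variables
    if level == levels[-1]:
        return [-1] * k
    row = [0] * k
    row[levels.index(level)] = 1
    return row

levels = [500, 600, 700]           # the three lathe speeds
print(effect_code(500, levels))    # -> [1, 0]
print(effect_code(600, levels))    # -> [0, 1]
print(effect_code(700, levels))    # -> [-1, -1]
```

Each returned row matches the coefficients of &lt;code&gt;tau1&lt;/code&gt; and &lt;code&gt;tau2&lt;/code&gt; in the three treatment equations above.&lt;br /&gt;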
&lt;br /&gt;
===Treat Numerical Factors as Qualitative or Quantitative?===&lt;br /&gt;
&lt;br /&gt;
It can be seen from the equation given above that in an ANOVA model each factor is treated as a qualitative factor. In the present example the factor, lathe speed, is a quantitative factor with three levels. But the ANOVA model treats this factor as a qualitative factor with three levels. Therefore, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required to represent this factor. &lt;br /&gt;
&lt;br /&gt;
Note that in a regression model a variable can be treated as either a quantitative or a qualitative variable. The factor, lathe speed, would be used as a quantitative factor and represented with a single predictor variable in a regression model. For example, if a first order model were to be fitted to the data in the first table, then the regression model would take the form &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. If a second order regression model were to be fitted, the regression model would be &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}x_{i1}^{2}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. Notice that unlike these regression models, the regression version of the ANOVA model does not make any assumption about the nature of the relationship between the response and the factor being investigated.&lt;br /&gt;
&lt;br /&gt;
The choice of treating a particular factor as a quantitative or qualitative variable depends on the objective of the experimenter. In the case of the data of the first table, the objective of the experimenter is to compare the levels of the factor to see if a change in the levels leads to a significant change in the response. The objective is not to make predictions on the response for a given level of the factor. Therefore, the factor is treated as a qualitative factor in this case. If the objective of the experimenter were prediction or optimization, the experimenter would focus on aspects such as the nature of the relationship between the factor (lathe speed) and the response (surface finish), and the factor would be modeled as a quantitative factor to make accurate predictions.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model in the Form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be expanded for the three treatments and four replicates of the data in the first table as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{11}}= &amp;amp; 6=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{11}}\text{    Level 1, Replicate 1} \\ &lt;br /&gt;
{{Y}_{21}}= &amp;amp; 13=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{21}}\text{  Level 2, Replicate 1} \\ &lt;br /&gt;
{{Y}_{31}}= &amp;amp; 23=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{31}}\text{  Level 3, Replicate 1} \\ &lt;br /&gt;
{{Y}_{12}}= &amp;amp; 13=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{12}}\text{  Level 1, Replicate 2} \\ &lt;br /&gt;
{{Y}_{22}}= &amp;amp; 16=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{22}}\text{  Level 2, Replicate 2} \\ &lt;br /&gt;
{{Y}_{32}}= &amp;amp; 20=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{32}}\text{  Level 3, Replicate 2} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; ... \\ &lt;br /&gt;
{{Y}_{34}}= &amp;amp; 18=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{34}}\text{  Level 3, Replicate 4}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The corresponding matrix notation is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{11}}  \\&lt;br /&gt;
   {{Y}_{21}}  \\&lt;br /&gt;
   {{Y}_{31}}  \\&lt;br /&gt;
   {{Y}_{12}}  \\&lt;br /&gt;
   {{Y}_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{34}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y= &amp;amp; X\beta +\epsilon  \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   23  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   16  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The matrices &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; are used in the calculation of the sum of squares in the next section. The data in the first table can be entered into DOE++ as shown in the figure below.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.1.png|thumb|center|550px|Single factor experiment design for the data in the first table.]]&lt;br /&gt;
&lt;br /&gt;
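The matrices &lt;code&gt;y&lt;/code&gt;, &lt;code&gt;X&lt;/code&gt; and &lt;code&gt;beta&lt;/code&gt; above can be assembled and the model fit by least squares, as in the following Python sketch using NumPy. Only some of the twelve response values appear in the text (6, 13, 23, 13, 16, 20 and 18); the entries marked HYPOTHETICAL are placeholders for the elided replicates, so the estimates below illustrate the procedure rather than reproduce the actual analysis.&lt;br /&gt;

```python
import numpy as np

# Effect-coded design matrix for 3 treatments x 4 replicates.  Row order
# matches the text: levels 1, 2, 3 for replicate 1, then replicate 2, etc.
na, m = 3, 4
rows = {1: [1, 1, 0], 2: [1, 0, 1], 3: [1, -1, -1]}
X = np.array([rows[i] for _ in range(m) for i in (1, 2, 3)], dtype=float)

y = np.array([6, 13, 23,            # replicate 1 (from the text)
              13, 16, 20,           # replicate 2 (from the text)
              9, 12, 22,            # replicate 3 (HYPOTHETICAL)
              11, 14, 18],          # replicate 4 (only Y34 = 18 is from the text)
             dtype=float)

# Least-squares estimates of beta = [mu, tau1, tau2]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
mu, tau1, tau2 = beta
tau3 = -(tau1 + tau2)               # recovered from the sum-to-zero constraint
print(mu, tau1, tau2, tau3)
```

Because the design is balanced and the effect-coded columns are orthogonal to the intercept, the estimate of &lt;code&gt;mu&lt;/code&gt; equals the grand mean and each &lt;code&gt;tau&lt;/code&gt; equals that treatment mean minus the grand mean.&lt;br /&gt;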
===Hypothesis Test in Single Factor Experiments===&lt;br /&gt;
&lt;br /&gt;
The hypothesis test in single factor experiments examines the ANOVA model to see if the response at any level of the investigated factor is significantly different from that at the other levels. If this is not the case and the response at all levels is not significantly different, then it can be concluded that the investigated factor does not affect the response. The test on the ANOVA model is carried out by checking to see if any of the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, are non-zero. The test is similar to the test of significance of regression mentioned in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] in the context of regression models. The hypotheses statements for this test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0 \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\tau }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test for &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is carried out using the following statistic:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{TR}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the ANOVA model and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. Note that in the case of ANOVA models we use the notation &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment mean square) for the model mean square and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment sum of squares) for the model sum of squares (instead of &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression mean square, and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression sum of squares, used in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This is done to indicate that the model under consideration is the ANOVA model and not the regression model. The calculations to obtain &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; are identical to the calculations to obtain &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt; explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The sum of squares to obtain the statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt; can be calculated as explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. Using the data in the first table, the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.1667 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.1667 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.1667  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 232.1667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the number of levels of the factor, &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; represents the replicates at each level, &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; represents the vector of the response values, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; represents the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; represents the matrix of ones. (For details on each of these terms, refer to [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]].)&lt;br /&gt;
Since two effect terms, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are used in the regression version of the ANOVA model, the degrees of freedom associated with the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, is two.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be obtained as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.9167 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.9167 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.9167  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 306.6667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;I\,\!&amp;lt;/math&amp;gt; is the identity matrix. Since there are 12 data points in all, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; is 11. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{T}})=11\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 306.6667-232.1667 \\ &lt;br /&gt;
= &amp;amp; 74.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 11-2 \\ &lt;br /&gt;
= &amp;amp; 9  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistic can now be calculated using the equation given in [[ANOVA_for_Designed_Experiments#Hypothesis_Test_in_Single_Factor_Experiments|Hypothesis Test in Single Factor Experiments]] as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{f}_{0}}= &amp;amp; \frac{M{{S}_{TR}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{TR}}/dof(S{{S}_{TR}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{232.1667/2}{74.5/9} \\ &lt;br /&gt;
= &amp;amp; 14.0235  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value for the statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 9 degrees of freedom in the denominator is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{f}_{0}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.9983 \\ &lt;br /&gt;
= &amp;amp; 0.0017  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that a change in the lathe speed has a significant effect on the surface finish. DOE++ displays these results in the ANOVA table, as shown in the figure below. The values of S and R-sq are the standard error and the coefficient of determination for the model, respectively. These values are explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] and indicate how well the model fits the data. The values in the figure below indicate that the fit of the ANOVA model is fair.&lt;br /&gt;
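The F statistic arithmetic above can be verified with a short pure-Python sketch (the sums of squares and degrees of freedom are the values given in the text):

```python
# ANOVA F statistic for the lathe-speed example (values from the text)
ss_t = 306.6667            # total sum of squares
ss_tr = 232.1667           # treatment sum of squares
ss_e = ss_t - ss_tr        # error sum of squares, 74.5
dof_tr, dof_e = 2, 9       # degrees of freedom
ms_tr = ss_tr / dof_tr     # mean square for treatments
ms_e = ss_e / dof_e        # mean square error
f0 = ms_tr / ms_e          # test statistic, about 14.02
```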
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.2.png|thumb|center|650px|ANOVA table for the data in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the &#039;&#039;i&#039;&#039;&amp;lt;sup&amp;gt;th&amp;lt;/sup&amp;gt; Treatment Mean===&lt;br /&gt;
&lt;br /&gt;
The response at each treatment of a single factor experiment can be assumed to be a normal population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt; provided that the error terms can be assumed to be normally distributed. A point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is the average response at each treatment, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;. Since this is a sample average, the associated variance is &amp;lt;math&amp;gt;{{\sigma }^{2}}/{{m}_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;{{m}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of replicates at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment. Therefore, the confidence interval on &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution. Recall from [[Statistical_Background_on_DOE| Statistical Background on DOE]] (inference on population mean when variance is unknown) that: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}/{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{M{{S}_{E}}/{{m}_{i}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
has a &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with degrees of freedom &amp;lt;math&amp;gt;=dof(S{{S}_{E}})\,\!&amp;lt;/math&amp;gt;. Therefore, a 100 (&amp;lt;math&amp;gt;1-\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment mean, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, for the first treatment of the lathe speed we have:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}= &amp;amp; {{{\bar{y}}}_{1\cdot }} \\ &lt;br /&gt;
= &amp;amp; \frac{6+13+7+8}{4} \\ &lt;br /&gt;
= &amp;amp; 8.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In DOE++, this value is displayed as the Estimated Mean for the first level, as shown in the Data Summary table in the figure below. The value displayed as the standard deviation for this level is simply the sample standard deviation calculated using the observations corresponding to this level.  The 90% confidence interval for this treatment is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{M{{S}_{E}}}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833\sqrt{\frac{(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833(1.44) \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 2.64  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}\,\!&amp;lt;/math&amp;gt; are 5.9 and 11.1, respectively. &lt;br /&gt;
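The interval calculation can be reproduced directly from the observations and the ANOVA quantities (a minimal sketch; the t critical value is taken from a t table, as in the text):

```python
import math

# 90% confidence interval on the first treatment mean
obs = [6, 13, 7, 8]                 # observations at the first lathe speed
m = len(obs)
ybar = sum(obs) / m                 # estimated mean, 8.5
ms_e = 74.5 / 9                     # mean square error from the ANOVA table
t_crit = 1.833                      # t_{0.05,9}
half_width = t_crit * math.sqrt(ms_e / m)
lo, hi = ybar - half_width, ybar + half_width   # about (5.9, 11.1)
```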
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.3.png|thumb|center|650px|Data Summary table for the single factor experiment in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the Difference in Two Treatment Means===&lt;br /&gt;
&lt;br /&gt;
The confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is used to compare two levels of the factor at a given significance. If the confidence interval does not include the value of zero, it is concluded that the two levels of the factor are significantly different. The point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt;. The variance for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})= &amp;amp; var({{{\bar{y}}}_{i\cdot }})+var({{{\bar{y}}}_{j\cdot }}) \\ &lt;br /&gt;
= &amp;amp; {{\sigma }^{2}}/{{m}_{i}}+{{\sigma }^{2}}/{{m}_{j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For balanced designs all &amp;lt;math&amp;gt;{{m}_{i}}=m\,\!&amp;lt;/math&amp;gt;. Therefore:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})=2{{\sigma }^{2}}/m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The standard deviation for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; can be obtained by taking the square root of &amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})\,\!&amp;lt;/math&amp;gt; and is referred to as the pooled standard error:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for the difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2{{{\hat{\sigma }}}^{2}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2M{{S}_{E}}/m}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then a 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, an estimate of the difference in the first and second treatment means of the lathe speed, &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}-{{{\hat{\mu }}}_{2}}= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }} \\ &lt;br /&gt;
= &amp;amp; 8.5-13.25 \\ &lt;br /&gt;
= &amp;amp; -4.75  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The pooled standard error for this difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2M{{S}_{E}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 2.0344  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To test &amp;lt;math&amp;gt;{{H}_{0}}:{{\mu }_{1}}-{{\mu }_{2}}=0\,\!&amp;lt;/math&amp;gt;, the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}-({{\mu }_{1}}-{{\mu }_{2}})}{\sqrt{2M{{S}_{E}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75-(0)}{\sqrt{\tfrac{2(74.5/9)}{4}}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75}{2.0344} \\ &lt;br /&gt;
= &amp;amp; -2.3348  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In DOE++, the value of the statistic is displayed in the Mean Comparisons table under the column T Value as shown in the figure below. The 90% confidence interval on the difference &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833\sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833(2.0344) \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 3.729  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hence the 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-8.479\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-1.021\,\!&amp;lt;/math&amp;gt;, respectively. These values are displayed under the Low CI and High CI columns in the following figure. Since the confidence interval for this pair of means does not include zero, it can be concluded that these means are significantly different at 90% confidence. This conclusion can also be arrived at using the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value, noting that the hypothesis is two-sided. The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to the statistic &amp;lt;math&amp;gt;{{t}_{0}}=-2.3348\,\!&amp;lt;/math&amp;gt;, based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 9 degrees of freedom, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 2\times (1-P(T\le |{{t}_{0}}|)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-P(T\le 2.3348)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-0.9778) \\ &lt;br /&gt;
= &amp;amp; 0.0444  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, the means are significantly different at 90% confidence. Bounds on the difference between other treatment pairs can be obtained in a similar manner and it is concluded that all treatments are significantly different.&lt;br /&gt;
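The pooled standard error, t statistic, and confidence limits for this comparison can be checked with a few lines of arithmetic (values from the text):

```python
import math

# t statistic and 90% CI for mu_1 - mu_2 (lathe-speed example)
ybar1, ybar2 = 8.5, 13.25
m = 4                               # replicates per treatment (balanced design)
ms_e = 74.5 / 9                     # mean square error
t_crit = 1.833                      # t_{0.05,9}
diff = ybar1 - ybar2                # -4.75
pooled_se = math.sqrt(2 * ms_e / m) # about 2.034
t0 = diff / pooled_se               # about -2.335
lo = diff - t_crit * pooled_se      # about -8.479
hi = diff + t_crit * pooled_se      # about -1.021; interval excludes zero
```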
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
&lt;br /&gt;
Plots of residuals, &amp;lt;math&amp;gt;{{e}_{ij}}\,\!&amp;lt;/math&amp;gt;, similar to the ones discussed in the previous chapters on regression, are used to ensure that the assumptions associated with the ANOVA model are not violated.  The ANOVA model assumes that the random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are normally and independently distributed with the same variance for each treatment. The normality assumption can be checked by obtaining a normal probability plot of the residuals. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.4.png|thumb|center|644px|Mean Comparisons table for the data in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equality of variance is checked by plotting residuals against the treatments and the treatment averages, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt; (also referred to as fitted values), and inspecting the spread in the residuals. If a pattern is seen in these plots, this indicates the need for a suitable transformation on the response that will ensure variance equality. Box-Cox transformations are discussed in the next section. To check for independence of the random error terms, residuals are plotted against time or run-order to ensure that a pattern does not exist in these plots. Residual plots for the given example are shown in the following two figures. The plots show that the assumptions associated with the ANOVA model are not violated.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.5.png|thumb|center|550px|Normal probability plot of residuals for the single factor experiment in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.6.png|thumb|center|550px|Plot of residuals against fitted values for the single factor experiment in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Box-Cox Method===&lt;br /&gt;
&lt;br /&gt;
Transformations on the response may be used when residual plots for an experiment show a pattern. This indicates that the equality of variance does not hold for the residuals of the given model. The Box-Cox method can be used to automatically identify a suitable power transformation for the data based on the following relationship:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}^{*}}={{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is determined using the given data such that &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is minimized. The values of &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; are not used as is because of issues related to calculation or comparison of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values for different values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. For example, for &amp;lt;math&amp;gt;\lambda =0\,\!&amp;lt;/math&amp;gt; all response values will become 1. Therefore, the following relationship is used to obtain &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}^{\lambda }}=\begin{cases}&lt;br /&gt;
   \frac{{{Y}^{\lambda }}-1}{\lambda {{{\dot{y}}}^{\lambda -1}}} &amp;amp; \lambda \ne 0  \\&lt;br /&gt;
   \dot{y}\ln y &amp;amp; \lambda =0  &lt;br /&gt;
\end{cases}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\dot{y}=\exp \left[ \frac{1}{n}\sum\limits_{i=1}^{n}\ln {{y}_{i}}\right]\,\!&amp;lt;/math&amp;gt; is the geometric mean of the response values.&lt;br /&gt;
Once all &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; values are obtained for a value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, the corresponding &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for these values is obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;. The process is repeated for a number of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values to obtain a plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then the value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is selected as the required transformation for the given data. DOE++ plots &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values because the range of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values is large; otherwise, all values could not be displayed on the same plot. The range of search for the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value in the software is from &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt;, because larger values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are usually not meaningful. DOE++ also displays a recommended transformation based on the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value obtained as per the table below.&lt;br /&gt;
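The scaled power transform can be sketched in a few lines. This is a minimal illustration, not a DOE++ routine; `boxcox_scaled` is a hypothetical helper name, and the geometric mean of the responses is used for the scaling constant, which is the standard Box-Cox convention:

```python
import math

def boxcox_scaled(y, lam):
    """Scaled Box-Cox power transform of a list of positive responses y."""
    # Geometric mean of the responses (the scaling constant).
    ydot = math.exp(sum(math.log(v) for v in y) / len(y))
    if lam == 0:
        return [ydot * math.log(v) for v in y]
    return [(v ** lam - 1) / (lam * ydot ** (lam - 1)) for v in y]

# For lam = 1 the transform only shifts every response by a constant,
# so the resulting SS_E is unchanged, as expected.
```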
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.2.png|thumb|center|400px|Recommended Box-Cox power transformations.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Confidence intervals on the selected &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values are also available. Let &amp;lt;math&amp;gt;S{{S}_{E}}(\lambda )\,\!&amp;lt;/math&amp;gt; be the value of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; corresponding to the selected value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then, to calculate the 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence intervals on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, we need to calculate &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}^{*}}=S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The required limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are the two values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the value &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; (on the plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;). If the limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; do not include the value of one, then the transformation is applicable for the given data.&lt;br /&gt;
Note that the power transformations are not defined for response values that are negative or zero. DOE++ deals with negative and zero response values using the following equations (that involve addition of a suitable quantity to all of the response values if a zero or negative response value is encountered). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y(i)= &amp;amp; y(i)+\left| {{y}_{\min }} \right|\times 1.1\text{        Negative Response} \\ &lt;br /&gt;
y(i)= &amp;amp; y(i)+1\text{                          Zero Response}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here &amp;lt;math&amp;gt;{{y}_{\min }}\,\!&amp;lt;/math&amp;gt; represents the minimum response value and &amp;lt;math&amp;gt;\left| {{y}_{\min }} \right|\,\!&amp;lt;/math&amp;gt; represents the absolute value of the minimum response. &lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
To illustrate the Box-Cox method, consider the experiment given in the first table. Transformed response values for various values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; can be calculated using the equation for &amp;lt;math&amp;gt;{Y}^{\lambda}\,\!&amp;lt;/math&amp;gt; given in [[ANOVA_for_Designed_Experiments#Box-Cox_Method|Box-Cox Method]]. Knowing the hat matrix, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values corresponding to each of these &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values can easily be obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values calculated for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values between &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt; for the given data are shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \lambda  &amp;amp; {} &amp;amp; S{{S}_{E}} &amp;amp; \ln S{{S}_{E}}  \\&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   -5 &amp;amp; {} &amp;amp; 5947.8 &amp;amp; 8.6908  \\&lt;br /&gt;
   -4 &amp;amp; {} &amp;amp; 1946.4 &amp;amp; 7.5737  \\&lt;br /&gt;
   -3 &amp;amp; {} &amp;amp; 696.5 &amp;amp; 6.5461  \\&lt;br /&gt;
   -2 &amp;amp; {} &amp;amp; 282.2 &amp;amp; 5.6425  \\&lt;br /&gt;
   -1 &amp;amp; {} &amp;amp; 135.8 &amp;amp; 4.9114  \\&lt;br /&gt;
   0 &amp;amp; {} &amp;amp; 83.9 &amp;amp; 4.4299  \\&lt;br /&gt;
   1 &amp;amp; {} &amp;amp; 74.5 &amp;amp; 4.3108  \\&lt;br /&gt;
   2 &amp;amp; {} &amp;amp; 101.0 &amp;amp; 4.6154  \\&lt;br /&gt;
   3 &amp;amp; {} &amp;amp; 190.4 &amp;amp; 5.2491  \\&lt;br /&gt;
   4 &amp;amp; {} &amp;amp; 429.5 &amp;amp; 6.0627  \\&lt;br /&gt;
   5 &amp;amp; {} &amp;amp; 1057.6 &amp;amp; 6.9638  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A plot of &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for various &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values, as obtained from DOE++, is shown in the following figure. The value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; that gives the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is identified as 0.7841. The &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; value corresponding to this value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is 73.74. A 90% confidence interval on this &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value is calculated as follows. &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; can be obtained as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}^{*}}= &amp;amp; S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{t_{0.05,9}^{2}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{{{1.833}^{2}}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 101.27  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\ln S{{S}^{*}}=4.6178\,\!&amp;lt;/math&amp;gt;. The two &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values corresponding to this value in the following figure, &amp;lt;math&amp;gt;-0.4689\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0055\,\!&amp;lt;/math&amp;gt;, are the 90% confidence limits on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Since the confidence limits include the value of 1, a transformation is not required for the data in the first table.&lt;br /&gt;
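The threshold value used to read the confidence limits off the plot follows directly from the equation for SS* (values from the text):

```python
import math

# SS* threshold for the 90% confidence interval on lambda
ss_lambda = 73.74          # minimum SS_E, attained at lambda = 0.7841
t_crit = 1.833             # t_{0.05,9}
dof_e = 9
ss_star = ss_lambda * (1 + t_crit ** 2 / dof_e)   # about 101.27
ln_ss_star = math.log(ss_star)                    # about 4.618
```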
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:doe6.7.png|thumb|center|400px|Box-Cox power transformation plot for the data in the first table.]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Experiments with Several Factors - Factorial Experiments==&lt;br /&gt;
&lt;br /&gt;
Experiments with two or more factors are encountered frequently. The best way to carry out such experiments is by using  factorial experiments.  Factorial experiments are experiments in which all combinations of factors are investigated in each replicate of the experiment. Factorial experiments are the only means to completely and systematically study interactions between factors in addition to identifying significant factors.  One-factor-at-a-time experiments (where each factor is investigated separately by keeping all the remaining factors constant) do not reveal the interaction effects between the factors. Further, in one-factor-at-a-time experiments full randomization is not possible.&lt;br /&gt;
&lt;br /&gt;
To illustrate factorial experiments, consider an experiment where the response is investigated for two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. Assume that the response is studied at two levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; representing the lower level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; representing the higher level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;. Similarly, let &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; represent the two levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; that are being investigated in this experiment. Since there are two factors with two levels, a total of &amp;lt;math&amp;gt;2\times 2=4\,\!&amp;lt;/math&amp;gt; combinations exist (&amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt;). Thus, four runs are required for each replicate if a factorial experiment is to be carried out in this case. Assume that the response values for each of these four possible combinations are obtained as shown in the third table.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.3.png|thumb|center|400px|Two-factor factorial experiment.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.8.png|thumb|center|400px|Interaction plot for the data in the third table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Investigating Factor Effects===&lt;br /&gt;
&lt;br /&gt;
The effect of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response can be obtained by taking the difference between the average response when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is high and the average response when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is low. The change in the response due to a change in the level of a factor is called the main effect of the factor. The main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; as per the response values in the third table is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{45+55}{2}-\frac{25+35}{2} \\ &lt;br /&gt;
= &amp;amp; 50-30 \\ &lt;br /&gt;
= &amp;amp; 20  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from the lower level to the higher level, the response increases by 20 units. A plot of the response for the two levels of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at different levels of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is shown in the figure above. The plot shows that change in the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; leads to an increase in the response by 20 units regardless of the level of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. Therefore, no interaction exists in this case as indicated by the parallel lines on the plot. The main effect of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be obtained as: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
B= &amp;amp; Average\text{ }response\text{ }at\text{ }{{B}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{B}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{35+55}{2}-\frac{25+45}{2} \\ &lt;br /&gt;
= &amp;amp; 45-35 \\ &lt;br /&gt;
= &amp;amp; 10  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
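The main-effect arithmetic above can be sketched in code. The four cell responses (25, 35, 45, 55) are reconstructed from the effect calculations in the text; this is an illustrative check, not DOE++ output:

```python
# Cell responses keyed by (level of A, level of B), from the third table
y = {('low', 'low'): 25, ('low', 'high'): 35,
     ('high', 'low'): 45, ('high', 'high'): 55}

# Main effect of A: average at A_high minus average at A_low
A = (y[('high', 'low')] + y[('high', 'high')]) / 2 \
    - (y[('low', 'low')] + y[('low', 'high')]) / 2      # 20

# Main effect of B: average at B_high minus average at B_low
B = (y[('low', 'high')] + y[('high', 'high')]) / 2 \
    - (y[('low', 'low')] + y[('high', 'low')]) / 2      # 10
```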
&lt;br /&gt;
&lt;br /&gt;
===Investigating Interactions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now assume that the response values for each of the four treatment combinations were obtained as shown in the fourth table. The main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; in this case is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{40+10}{2}-\frac{20+30}{2} \\ &lt;br /&gt;
= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.4.png|thumb|center|400px|Two factor factorial experiment.]]&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
It appears that &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; does not have an effect on the response. However, a plot of the response of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at different levels of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; shows that the response does change with the levels of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; but the effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response is dependent on the level of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (see the figure below). Therefore, an interaction between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; exists in this case (as indicated by the non-parallel lines of the figure). The interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be calculated as follows: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.9.png|thumb|center|400px|Interaction plot for the data in the fourth table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
AB= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}\text{-}{{B}_{\text{high}}}\text{ }and\text{ }{{A}_{\text{low}}}\text{-}{{B}_{\text{low}}}- \\ &lt;br /&gt;
 &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}}\text{-}{{B}_{\text{high}}}\text{ }and\text{ }{{A}_{\text{high}}}\text{-}{{B}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{10+20}{2}-\frac{40+30}{2} \\ &lt;br /&gt;
= &amp;amp; -20  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that in this case, if a one-factor-at-a-time experiment were used to investigate the effect of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response, it would lead to incorrect conclusions. For example, if the response at factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was studied by holding &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; constant at its lower level, then the main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; would be obtained as &amp;lt;math&amp;gt;40-20=20\,\!&amp;lt;/math&amp;gt;, indicating that the response increases by 20 units when the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from low to high. On the other hand, if the response at factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was studied by holding &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; constant at its higher level, then the main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; would be obtained as &amp;lt;math&amp;gt;10-30=-20\,\!&amp;lt;/math&amp;gt;, indicating that the response decreases by 20 units when the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from low to high.&lt;br /&gt;
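The same pitfall can be shown numerically. The cell responses (20, 30, 40, 10) are reconstructed from the calculations in this section; a zero main effect of A masks two large, opposite conditional effects:

```python
# Cell responses keyed by (level of A, level of B), from the fourth table
y = {('low', 'low'): 20, ('low', 'high'): 30,
     ('high', 'low'): 40, ('high', 'high'): 10}

# Main effect of A averages out to zero...
A = (y[('high', 'low')] + y[('high', 'high')]) / 2 \
    - (y[('low', 'low')] + y[('low', 'high')]) / 2      # 0

# ...but the AB interaction is large,
AB = (y[('low', 'low')] + y[('high', 'high')]) / 2 \
     - (y[('low', 'high')] + y[('high', 'low')]) / 2    # -20

# and the effect of A reverses sign between the levels of B.
effect_of_A_at_B_low = y[('high', 'low')] - y[('low', 'low')]     # +20
effect_of_A_at_B_high = y[('high', 'high')] - y[('low', 'high')]  # -20
```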
&lt;br /&gt;
==Analysis of General Factorial Experiments==&lt;br /&gt;
&lt;br /&gt;
In DOE++, factorial experiments are referred to as &#039;&#039;factorial designs&#039;&#039;. The experiments explained in this section are referred to as &#039;&#039;general factorial designs&#039;&#039;. This is done to distinguish these experiments from the other factorial designs supported by DOE++ (see the figure below). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.10.png|thumb|center|518px|Factorial experiments available in DOE++.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The other designs (such as the two level full factorial designs that are explained in [[Two_Level_Factorial_Experiments| Two Level Factorial Experiments]]) are special cases of these experiments in which factors are limited to a specified number of levels. The ANOVA model for the analysis of factorial experiments is formulated as shown next. Assume a factorial experiment in which the effect of two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, on the response is being investigated. Let there be &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{n}_{b}}\,\!&amp;lt;/math&amp;gt; levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The ANOVA model for this experiment can be stated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\tau }_{i}}+{{\delta }_{j}}+{{(\tau \delta )}_{ij}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean effect&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;j=1,2,...,{{n}_{b}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt; represents the random error terms (which are assumed to be normally distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*and the subscript &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; denotes the &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates (&amp;lt;math&amp;gt;k=1,2,...,m\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represent deviations from the overall mean, the following constraints exist: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{j=1}{\overset{{{n}_{b}}}{\mathop \sum }}\,{{\delta }_{j}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{j=1}{\overset{{{n}_{b}}}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Hypothesis Tests in General Factorial Experiments===&lt;br /&gt;
These tests are used to check whether each of the factors investigated in the experiment is significant or not. For the previous example, with two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and their interaction, &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;, the statements for the hypothesis tests can be formulated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \begin{matrix}&lt;br /&gt;
   1. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0\text{    (Main effect of }A\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\tau }_{i}}\ne 0\text{    for at least one }i \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   2. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\delta }_{1}}={{\delta }_{2}}=...={{\delta }_{{{n}_{b}}}}=0\text{    (Main effect of }B\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\delta }_{j}}\ne 0\text{    for at least one }j \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   3. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{12}}=...={{(\tau \delta )}_{{{n}_{a}}{{n}_{b}}}}=0\text{    (Interaction }AB\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{(\tau \delta )}_{ij}}\ne 0\text{    for at least one }ij  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistics for the three tests are as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::1)&amp;lt;math&amp;gt;{(F_{0})}_{A} = \frac{MS_{A}}{MS_{E}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{A}\,\!&amp;lt;/math&amp;gt; is the mean square due to factor &amp;lt;math&amp;gt;{A}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{MS_E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::2)&amp;lt;math&amp;gt;{(F_{0})_{B}} = \frac{MS_B}{MS_E}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{B}\,\!&amp;lt;/math&amp;gt; is the mean square due to factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;MS_{E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::3)&amp;lt;math&amp;gt;{(F_{0})_{AB}} = \frac{MS_{AB}}{MS_{E}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{AB}\,\!&amp;lt;/math&amp;gt; is the mean square due to interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;MS_{E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The tests are identical to the partial &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; test explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. The sums of squares for these tests (used to obtain the mean squares) are calculated by splitting the model sum of squares into the extra sum of squares due to each factor. The extra sum of squares calculated for each factor may be either partial or sequential.  For the present example, if the extra sum of squares used is sequential, then the model sum of squares can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{TR}}=S{{S}_{A}}+S{{S}_{B}}+S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{B}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
The mean squares are obtained by dividing the sum of squares by the associated degrees of freedom. Once the mean squares are known the test statistics can be calculated. For example, the test statistic to test the significance of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (or the hypothesis &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;) can then be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{F}_{0}})}_{A}}= &amp;amp; \frac{M{{S}_{A}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{A}}/dof(S{{S}_{A}})}{S{{S}_{E}}/dof(S{{S}_{E}})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Similarly the test statistic to test significance of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be respectively obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{F}_{0}})}_{B}}= &amp;amp; \frac{M{{S}_{B}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{B}}/dof(S{{S}_{B}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
{{({{F}_{0}})}_{AB}}= &amp;amp; \frac{M{{S}_{AB}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{AB}}/dof(S{{S}_{AB}})}{S{{S}_{E}}/dof(S{{S}_{E}})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is recommended to conduct the test for interactions before conducting the tests for the main effects, because if an interaction is present, the main effect of a factor depends on the levels of the other factors, and examining the main effect alone is of little value. If the interaction is absent, however, the main effects become important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
Consider an experiment to investigate the effect of speed and type of fuel additive used on the mileage of a sports utility vehicle. Three speeds and two types of fuel additives are investigated. Each of the treatment combinations is replicated three times. The mileage values observed are displayed in the fifth table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.5.png|thumb|center|400px|Mileage data for different speeds and fuel additive types.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The experimental design for the data in the fifth table is shown in the figure below. In the figure, the factor Speed is represented as factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and the factor Fuel Additive is represented as factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The experimenter would like to investigate if speed, fuel additive or the interaction between speed and fuel additive affects the mileage of the sports utility vehicle. In other words, the following hypotheses need to be tested:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \begin{matrix}&lt;br /&gt;
   1. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}={{\tau }_{3}}=0\text{   (No main effect of factor }A\text{, speed)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\tau }_{i}}\ne 0\text{    for at least one }i \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   2. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\delta }_{1}}={{\delta }_{2}}=0\text{    (No main effect of factor }B\text{, fuel additive)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\delta }_{j}}\ne 0\text{    for at least one }j \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   3. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{12}}=...={{(\tau \delta )}_{32}}=0\text{    (No interaction }AB\text{)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{(\tau \delta )}_{ij}}\ne 0\text{    for at least one }ij  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistics for the three tests are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::1.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{A}}=\frac{M{{S}_{A}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{A}}\,\!&amp;lt;/math&amp;gt; is the mean square for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::2.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{B}}=\frac{M{{S}_{B}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{B}}\,\!&amp;lt;/math&amp;gt; is the mean square for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::3.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{AB}}=\frac{M{{S}_{AB}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; is the mean square for interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.11.png|thumb|center|639px|Experimental design for the data in the fifth table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\tau }_{i}}+{{\delta }_{j}}+{{(\tau \delta )}_{ij}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed) with &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; =1, 2, 3; &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th treatment of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (fuel additive) with &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; =1, 2; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect. In order to calculate the test statistics, it is convenient to express the ANOVA model of the equation given above in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;. This can be done as explained next.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represent deviations from the overall mean, the following constraints exist.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or    }{{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}=0\,\!&amp;lt;/math&amp;gt;.) DOE++ displays only the independent effects because only these effects are important to the analysis. The independent effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1] and A[2] respectively because these are the effects associated with factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed).&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{j=1}{\overset{2}{\mathop \sum }}\,{{\delta }_{j}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or    }{{\delta }_{1}}+{{\delta }_{2}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only one of the &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; effects is independent. Assuming that &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is independent, &amp;lt;math&amp;gt;{{\delta }_{2}}=-{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effect as &amp;lt;math&amp;gt;{{H}_{0}}:{{\delta }_{1}}=0\,\!&amp;lt;/math&amp;gt;.) The independent effect &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is displayed as B:B in DOE++.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{and   }\underset{j=1}{\overset{2}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{31}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{12}}+{{(\tau \delta )}_{22}}+{{(\tau \delta )}_{32}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{and   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{12}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{22}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{31}}+{{(\tau \delta )}_{32}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last five equations given above represent four constraints, as only four of these five equations are independent. Therefore, only two out of the six &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are independent, the other four effects can be expressed in terms of these effects. (The null hypothesis to test the significance of interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{21}}=0\,\!&amp;lt;/math&amp;gt;.) The effects &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are displayed as A[1]B and A[2]B respectively in DOE++.&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be obtained using indicator variables, similar to the case of the single factor experiment in [[ANOVA_for_Designed_Experiments#Fitting_ANOVA_Models|Fitting ANOVA Models]]. Since factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; has three levels, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required which need to be coded as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; has two levels and can be represented using one indicator variable, &amp;lt;math&amp;gt;{{x}_{3}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\delta }_{1}}: &amp;amp; {{x}_{3}}=1 \\ &lt;br /&gt;
\text{Treatment Effect }{{\delta }_{2}}: &amp;amp; {{x}_{3}}=-1  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; interaction will be represented by all possible terms resulting from the product of the indicator variables representing factors &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. There are two such terms here: &amp;lt;math&amp;gt;{{x}_{1}}{{x}_{3}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}{{x}_{3}}\,\!&amp;lt;/math&amp;gt;. The regression version of the ANOVA model can finally be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{\tau }_{1}}\cdot {{x}_{1}}+{{\tau }_{2}}\cdot {{x}_{2}}+{{\delta }_{1}}\cdot {{x}_{3}}+{{(\tau \delta )}_{11}}\cdot {{x}_{1}}{{x}_{3}}+{{(\tau \delta )}_{21}}\cdot {{x}_{2}}{{x}_{3}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
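The coding of the indicator variables can be sketched as a short helper that produces one row of the design matrix for a given treatment combination. This is a sketch: the function name is illustrative, and the column order is the intercept followed by the five independent effects.&lt;br /&gt;

```python
def effect_code_row(i, j):
    """Effect-coded design row for level i of A (1..3) and level j of B (1..2).

    Column order: intercept, tau1, tau2, delta1, (tau delta)11, (tau delta)21.
    The last level of each factor is coded -1 on all of its indicators.
    """
    x1 = {1: 1, 2: 0, 3: -1}[i]   # indicator variable x1 for tau1
    x2 = {1: 0, 2: 1, 3: -1}[i]   # indicator variable x2 for tau2
    x3 = {1: 1, 2: -1}[j]         # indicator variable x3 for delta1
    return [1, x1, x2, x3, x1 * x3, x2 * x3]
```

For example, effect_code_row(1, 1) returns [1, 1, 0, 1, 1, 0], the first row of the X matrix shown next.&lt;br /&gt;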
In matrix notation this model can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{111}}  \\&lt;br /&gt;
   {{Y}_{211}}  \\&lt;br /&gt;
   {{Y}_{311}}  \\&lt;br /&gt;
   {{Y}_{121}}  \\&lt;br /&gt;
   {{Y}_{221}}  \\&lt;br /&gt;
   {{Y}_{321}}  \\&lt;br /&gt;
   {{Y}_{112}}  \\&lt;br /&gt;
   {{Y}_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{323}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; -1 &amp;amp; 0 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
   {{\delta }_{1}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{11}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{21}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{111}}  \\&lt;br /&gt;
   {{\epsilon }_{211}}  \\&lt;br /&gt;
   {{\epsilon }_{311}}  \\&lt;br /&gt;
   {{\epsilon }_{121}}  \\&lt;br /&gt;
   {{\epsilon }_{221}}  \\&lt;br /&gt;
   {{\epsilon }_{321}}  \\&lt;br /&gt;
   {{\epsilon }_{112}}  \\&lt;br /&gt;
   {{\epsilon }_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{323}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The vector &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; can be substituted with the response values from the fifth table to get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{111}}  \\&lt;br /&gt;
   {{Y}_{211}}  \\&lt;br /&gt;
   {{Y}_{311}}  \\&lt;br /&gt;
   {{Y}_{121}}  \\&lt;br /&gt;
   {{Y}_{221}}  \\&lt;br /&gt;
   {{Y}_{321}}  \\&lt;br /&gt;
   {{Y}_{112}}  \\&lt;br /&gt;
   {{Y}_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{323}}  \\&lt;br /&gt;
\end{matrix} \right]=\left[ \begin{matrix}&lt;br /&gt;
   17.3  \\&lt;br /&gt;
   18.9  \\&lt;br /&gt;
   17.1  \\&lt;br /&gt;
   18.7  \\&lt;br /&gt;
   19.1  \\&lt;br /&gt;
   18.8  \\&lt;br /&gt;
   17.8  \\&lt;br /&gt;
   18.2  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18.3  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, the sum of squares for the ANOVA model and the extra sum of squares for each of the factors can be calculated. These are used to calculate the mean squares that are used to obtain the test statistics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of Sum of Squares for the Model====&lt;br /&gt;
&lt;br /&gt;
The model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, for the regression version of the ANOVA model can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 9.7311  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; is the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones. Since five effect terms (&amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;) are used in the model, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; is five (&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=5\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 10.7178  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are 18 observed response values, the number of degrees of freedom associated with the total sum of squares is 17 (&amp;lt;math&amp;gt;dof(S{{S}_{T}})=17\,\!&amp;lt;/math&amp;gt;). The error sum of squares can now be obtained:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 10.7178-9.7311 \\ &lt;br /&gt;
= &amp;amp; 0.9867  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are three replicates of the full factorial experiment, all of the error sum of squares is pure error. (This can also be seen from the preceding figure, where each treatment combination of the full factorial design is repeated three times.) The number of degrees of freedom associated with the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 17-5 \\ &lt;br /&gt;
= &amp;amp; 12  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
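The projection formulas above can be sketched generically with NumPy. This is a sketch, not tied to the mileage data; it assumes the design matrix has full column rank so that X&#039;X is invertible.&lt;br /&gt;

```python
import numpy as np

def anova_sums_of_squares(X, y):
    """Model, total and error sums of squares via projection matrices."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = len(y)
    H = X @ np.linalg.inv(X.T @ X) @ X.T    # hat matrix
    J = np.ones((n, n))                     # matrix of ones
    ss_tr = y @ (H - J / n) @ y             # model sum of squares, SS_TR
    ss_t = y @ (np.eye(n) - J / n) @ y      # total sum of squares, SS_T
    return ss_tr, ss_t, ss_t - ss_tr        # SS_E = SS_T - SS_TR
```

Applied to the full 18 x 6 matrix X and the response vector given above, this would reproduce SS_TR = 9.7311, SS_T = 10.7178 and SS_E = 0.9867.&lt;br /&gt;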
&lt;br /&gt;
====Calculation of Extra Sum of Squares for the Factors====&lt;br /&gt;
&lt;br /&gt;
The sequential sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be calculated as: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}})-S{{S}_{TR}}(\mu ) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}={{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}{{(X_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}^{\prime }{{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}})}^{-1}}X_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}^{\prime }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the matrix containing only the first three columns of the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix. Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-0 \\ &lt;br /&gt;
= &amp;amp; 4.5811-0 \\ &lt;br /&gt;
= &amp;amp; 4.5811  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent effects (&amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;) for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, the degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; are two (&amp;lt;math&amp;gt;dof(S{{S}_{A}})=2\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Similarly, the sum of squares for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{B}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}})-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}}) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}}}-(\frac{1}{18})J]y-{{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 9.4900-4.5811 \\ &lt;br /&gt;
= &amp;amp; 4.9089  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there is one independent effect, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{B}}\,\!&amp;lt;/math&amp;gt; is one (&amp;lt;math&amp;gt;dof(S{{S}_{B}})=1\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{AB}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}},{{(\tau \delta )}_{11}},{{(\tau \delta )}_{21}})-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}) \\ &lt;br /&gt;
= &amp;amp; S{{S}_{TR}}-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}) \\ &lt;br /&gt;
= &amp;amp; 9.7311-9.4900 \\ &lt;br /&gt;
= &amp;amp; 0.2411  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent interaction effects, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; is two (&amp;lt;math&amp;gt;dof(S{{S}_{AB}})=2\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
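The extra (sequential) sums of squares above come from fitting nested models and differencing their model sums of squares. As a numerical sketch of this technique (using NumPy and a made-up balanced 3x2 dataset, not the mileage data from the text), the nested-model computation looks like:

```python
import numpy as np

def model_ss(X, y):
    """Model (treatment) sum of squares: y'[H - (1/n)J]y,
    where H = X (X'X)^{-1} X' is the hat matrix."""
    n = len(y)
    H = X @ np.linalg.pinv(X.T @ X) @ X.T
    J = np.ones((n, n))
    return float(y @ (H - J / n) @ y)

# Hypothetical balanced 3x2 design with 3 replicates (18 runs);
# response values are illustrative only.
rng = np.random.default_rng(0)
levels = [(i, j) for i in range(3) for j in range(2)] * 3
y = np.array([10 + i - 2 * j + rng.normal(0, 0.1) for i, j in levels])

# Effect coding: factor A has two independent effects, B has one.
a1 = np.array([{0: 1, 1: 0, 2: -1}[i] for i, j in levels])
a2 = np.array([{0: 0, 1: 1, 2: -1}[i] for i, j in levels])
b1 = np.array([{0: 1, 1: -1}[j] for i, j in levels])
one = np.ones(len(y))

X_mu   = np.column_stack([one])                              # mu only
X_A    = np.column_stack([one, a1, a2])                      # mu, tau1, tau2
X_B    = np.column_stack([one, a1, a2, b1])                  # ... + delta1
X_full = np.column_stack([one, a1, a2, b1, a1 * b1, a2 * b1])

SS_A  = model_ss(X_A, y)    - model_ss(X_mu, y)  # extra SS for A
SS_B  = model_ss(X_B, y)    - model_ss(X_A, y)   # extra SS for B
SS_AB = model_ss(X_full, y) - model_ss(X_B, y)   # extra SS for AB
```

The sequential sums of squares always add up to the full model sum of squares, which provides a useful sanity check on the calculation.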
&lt;br /&gt;
====Calculation of the Test Statistics====&lt;br /&gt;
&lt;br /&gt;
Knowing the sum of squares, the test statistic for each of the factors can be calculated. Analyzing the interaction first, the test statistic for interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{AB}}= &amp;amp; \frac{M{{S}_{AB}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{0.2411/2}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 1.47  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic, based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 12 degrees of freedom in the denominator, is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{AB}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.7307 \\ &lt;br /&gt;
= &amp;amp; 0.2693  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;gt; 0.1, we fail to reject &amp;lt;math&amp;gt;{{H}_{0}}:{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt; and conclude that the interaction between speed and fuel additive does not significantly affect the mileage of the sports utility vehicle. DOE++ displays this result in the ANOVA table, as shown in the following figure. In the absence of the interaction, the analysis of main effects becomes important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistic for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{A}}= &amp;amp; \frac{M{{S}_{A}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{A}}/dof(S{{S}_{A}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{4.5811/2}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 27.86  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 12 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{A}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.99997 \\ &lt;br /&gt;
= &amp;amp; 0.00003  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (or speed) has a significant effect on the mileage.&lt;br /&gt;
&lt;br /&gt;
The test statistic for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{B}}= &amp;amp; \frac{M{{S}_{B}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{4.9089/1}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 59.7  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 1 degree of freedom in the numerator and 12 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{B}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.999995 \\ &lt;br /&gt;
= &amp;amp; 0.000005  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}:{{\delta }_{j}}=0\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or fuel additive type) has a significant effect on the mileage.&lt;br /&gt;
Therefore, it can be concluded that speed and fuel additive type affect the mileage of the vehicle significantly. The results are displayed in the ANOVA table of the following figure. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.12.png|thumb|center|645px|Analysis results for the experiment in the fifth table.]]&lt;br /&gt;
&lt;br /&gt;
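The three test statistics and their p values can be checked numerically. The sums of squares, degrees of freedom, and error mean square below are taken from the derivation above; scipy is used here only as a checking tool, it is not part of the original DOE++ analysis.

```python
from scipy.stats import f

MS_E = 0.9867 / 12  # error mean square, error dof = 12

# (sum of squares, numerator dof) for each effect, from the text
effects = {"A": (4.5811, 2), "B": (4.9089, 1), "AB": (0.2411, 2)}

results = {}
for name, (ss, dof) in effects.items():
    f0 = (ss / dof) / MS_E     # F test statistic
    p = f.sf(f0, dof, 12)      # p value = 1 - P(F <= f0)
    results[name] = (f0, p)
```

At a 0.1 significance level this reproduces the conclusions above: the interaction is not significant (p about 0.27), while both main effects are (p values far below 0.1).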
&lt;br /&gt;
====Calculation of Effect Coefficients====&lt;br /&gt;
&lt;br /&gt;
Results for the effect coefficients of the regression version of the ANOVA model are displayed in the Regression Information table in the following figure. The calculations behind these results are discussed next. The effect coefficients can be calculated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\hat{\beta }= &amp;amp; {{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y \\ &lt;br /&gt;
= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   18.2889  \\&lt;br /&gt;
   -0.2056  \\&lt;br /&gt;
   0.6944  \\&lt;br /&gt;
   -0.5222  \\&lt;br /&gt;
   0.0056  \\&lt;br /&gt;
   0.1389  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\hat{\mu }=18.2889\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}=-0.2056\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\hat{\tau }}_{2}}=0.6944\,\!&amp;lt;/math&amp;gt;, etc. As mentioned previously, these coefficients are displayed as Intercept, A[1] and A[2], respectively, where the labels depend on the names used for the factors in the experiment design. The standard error for each of these estimates is obtained using the diagonal elements of the variance-covariance matrix &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
C= &amp;amp; {{{\hat{\sigma }}}^{2}}{{({{X}^{\prime }}X)}^{-1}} \\ &lt;br /&gt;
= &amp;amp; M{{S}_{E}}\cdot {{({{X}^{\prime }}X)}^{-1}} \\ &lt;br /&gt;
= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   0.0046 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0.0091 &amp;amp; -0.0046 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; -0.0046 &amp;amp; 0.0091 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0.0046 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0.0091 &amp;amp; -0.0046  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; -0.0046 &amp;amp; 0.0091  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, the standard error for &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
se({{{\hat{\tau }}}_{1}})= &amp;amp; \sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; \sqrt{0.0091} \\ &lt;br /&gt;
= &amp;amp; 0.0956  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\hat{\tau }}}_{1}}}{se({{{\hat{\tau }}}_{1}})} \\ &lt;br /&gt;
= &amp;amp; \frac{-0.2056}{0.0956} \\ &lt;br /&gt;
= &amp;amp; -2.1506  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic can be obtained from the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 12 degrees of freedom.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
Confidence intervals on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; can also be calculated. The 90% limits on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\hat{\tau }}}_{1}}\pm {{t}_{\alpha /2,n-(k+1)}}\sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; {{\hat{\tau }}_{1}}\pm {{t}_{0.05,12}}\sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; -0.2056\pm 0.1704  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Thus, the 90% limits on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-0.3760\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-0.0352\,\!&amp;lt;/math&amp;gt; respectively. Results for other coefficients are obtained in a similar manner.&lt;br /&gt;
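The standard error, t statistic, and 90% confidence limits can be reproduced with scipy.stats. Note that the text rounds &lt;math&gt;C_{22}&lt;/math&gt; to four decimals, so the values below differ from those shown above in the fourth decimal place; scipy is an assumed checking tool, not part of the original analysis.

```python
import math
from scipy.stats import t

tau1_hat = -0.2056
C22 = 0.0091                # diagonal element of the variance-covariance matrix
se = math.sqrt(C22)         # standard error of tau1_hat
t0 = tau1_hat / se          # t statistic

alpha = 0.10                # 90% two-sided interval, error dof = 12
t_crit = t.ppf(1 - alpha / 2, 12)
lower = tau1_hat - t_crit * se
upper = tau1_hat + t_crit * se
```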
&lt;br /&gt;
===Least Squares Means===&lt;br /&gt;
The estimated mean response corresponding to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of any factor is obtained using the adjusted estimated mean which is also called the least squares mean. For example, the mean response corresponding to the first level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\mu +{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;. An estimate of this is &amp;lt;math&amp;gt;\hat{\mu }+{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; or (&amp;lt;math&amp;gt;18.2889+(-0.2056)=18.0833\,\!&amp;lt;/math&amp;gt;). Similarly, the estimated response at the third level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\hat{\mu }+{{\hat{\tau }}_{3}}\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\hat{\mu }+(-{{\hat{\tau }}_{1}}-{{\hat{\tau }}_{2}})\,\!&amp;lt;/math&amp;gt; or (&amp;lt;math&amp;gt;18.2889+(0.2056-0.6944)=17.8001\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
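Because the effects are defined as deviations from the overall mean, the least squares means follow directly from the fitted coefficients and the sum-to-zero constraint. A quick arithmetic check:

```python
mu_hat = 18.2889
tau1_hat = -0.2056
tau2_hat = 0.6944
tau3_hat = -(tau1_hat + tau2_hat)  # sum-to-zero constraint on factor A

lsmean_A1 = mu_hat + tau1_hat      # least squares mean at level 1 of A
lsmean_A3 = mu_hat + tau3_hat      # least squares mean at level 3 of A
```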
&lt;br /&gt;
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
As in the case of single factor experiments, plots of residuals can also be used to check for model adequacy in factorial experiments. Box-Cox transformations are also available in DOE++ for factorial experiments.&lt;br /&gt;
&lt;br /&gt;
==Factorial Experiments with a Single Replicate==&lt;br /&gt;
&lt;br /&gt;
If a factorial experiment is run only for a single replicate then it is not possible to test hypotheses about the main effects and interactions as the error sum of squares cannot be obtained.  This is because the number of observations in a single replicate equals the number of terms in the ANOVA model. Hence the model fits the data perfectly and no degrees of freedom are available to obtain the error sum of squares. For example, if the two factor experiment to study the effect of speed and fuel additive type on mileage was run only as a single replicate there would be only six response values. The regression version of the ANOVA model has six terms and therefore will fit the six response values perfectly. The error sum of squares, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt;, for this case will be equal to zero. In some single replicate factorial experiments it is possible to assume that the interaction effects are negligible. In this case, the interaction mean square can be used as error mean square, &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt;, to test hypotheses about the main effects. However, such assumptions are not applicable in all cases and should be used carefully.&lt;br /&gt;
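When the no-interaction assumption is judged reasonable, the main-effect tests in a single-replicate experiment use the interaction mean square in place of the error mean square. A minimal sketch with made-up sums of squares (the numbers below are illustrative, not from the mileage experiment):

```python
from scipy.stats import f

# Hypothetical single-replicate 3x2 experiment (illustrative numbers only):
SS_A, dof_A = 4.2, 2
SS_B, dof_B = 3.1, 1
SS_AB, dof_AB = 0.3, 2  # assumed to reflect error, not a real interaction

MS_AB = SS_AB / dof_AB  # stands in for the error mean square MS_E

f0_A = (SS_A / dof_A) / MS_AB
p_A = f.sf(f0_A, dof_A, dof_AB)  # p value for factor A

f0_B = (SS_B / dof_B) / MS_AB
p_B = f.sf(f0_B, dof_B, dof_AB)  # p value for factor B
```

Note the cost of this approach: only two denominator degrees of freedom remain here, so the tests have very little power, which is part of why the assumption should be used carefully.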
&lt;br /&gt;
&lt;br /&gt;
==Blocking==&lt;br /&gt;
&lt;br /&gt;
Many times a factorial experiment requires so many runs that not all of them can be completed under homogeneous conditions. This may lead to inclusion of the effects of &#039;&#039;nuisance factors&#039;&#039; into the investigation. Nuisance factors are factors that have an effect on the response but are not of primary interest to the investigator. For example, two replicates of a two factor factorial experiment require eight runs. If four runs require the duration of one day to be completed, then the total experiment will require two days to be completed. The difference in the conditions on the two days may introduce effects on the response that are not the result of the two factors being investigated. Therefore, the day is a nuisance factor for this experiment.&lt;br /&gt;
Nuisance factors can be accounted for using &#039;&#039;blocking&#039;&#039;. In blocking, experimental runs are separated based on levels of the nuisance factor. For the case of the two factor factorial experiment (where the day is a nuisance factor), separation can be made into two groups or &#039;&#039;blocks&#039;&#039;: runs that are carried out on the first day belong to block 1, and runs that are carried out on the second day belong to block 2. Thus, within each block conditions are the same with respect to the nuisance factor. As a result, each block investigates the effects of the factors of interest, while the difference in the blocks measures the effect of the nuisance factor. &lt;br /&gt;
For the example of the two factor factorial experiment, a possible assignment of runs to the blocks could be as follows: one replicate of the experiment is assigned to block 1 and the second replicate is assigned to block 2 (now each block contains all possible treatment combinations). Within each block, runs are subjected to randomization (i.e., randomization is now restricted to the runs within a block). Such a design, where each block contains one complete replicate and the treatments within a block are subjected to randomization, is called a &#039;&#039;randomized complete block design&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
In summary, blocking should always be used to account for the effects of nuisance factors if it is not possible to hold the nuisance factor at a constant level through all of the experimental runs. Randomization should be used within each block to counter the effects of any unknown variability that may still be present.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
Consider the experiment of the fifth table where the mileage of a sports utility vehicle was investigated for the effects of speed and fuel additive type. Now assume that the three replicates for this experiment were carried out on three different vehicles. To ensure that the variation from one vehicle to another does not have an effect on the analysis, each vehicle is considered as one block. See the experiment design in the following figure.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.13.png|thumb|center|643px|Randomized complete block design for the experiment in the fifth table using three blocks.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purpose of the analysis, the block is considered as a main effect except that it is assumed that interactions between the block and the other main effects do not exist. Therefore, there is one block main effect (having three levels - block 1, block 2 and block 3), two main effects (speed, having three levels; and fuel additive type, having two levels) and one interaction effect (speed-fuel additive interaction) for this experiment. Let &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; represent the block effects. The hypothesis test on the block main effect checks if there is a significant variation from one vehicle to the other. The statements for the hypothesis test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\zeta }_{1}}={{\zeta }_{2}}={{\zeta }_{3}}=0\text{   (no main effect of block)} \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\zeta }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The test statistic for this test is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{Block}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{Block}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the block main effect and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. The hypothesis statements and test statistics to test the significance of factors &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed), &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (fuel additive) and the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; (speed-fuel additive interaction) can be obtained as explained in the [[ANOVA_for_Designed_Experiments#Example_2| example]]. The ANOVA model for this example can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\zeta }_{i}}+{{\tau }_{j}}+{{\delta }_{k}}+{{(\tau \delta )}_{jk}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean effect&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of the block (&amp;lt;math&amp;gt;i=1,2,3\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;j=1,2,3\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;k=1,2\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*and &amp;lt;math&amp;gt;{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt; represents the random error terms (which are assumed to be normally distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to calculate the test statistics, it is convenient to express the ANOVA model of the equation given above in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;. This can be done as explained next.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; are defined as deviations from the overall mean, the following constraints exist.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{i=1}{\overset{3}{\mathop \sum }}\,{{\zeta }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\zeta }_{1}}+{{\zeta }_{2}}+{{\zeta }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\zeta }_{3}}=-({{\zeta }_{1}}+{{\zeta }_{2}})\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of the blocks can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{\zeta }_{1}}={{\zeta }_{2}}=0\,\!&amp;lt;/math&amp;gt;.) In DOE++, the independent block effects, &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as Block[1] and Block[2], respectively.&lt;br /&gt;
&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{j=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{j}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;. The independent effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1] and A[2], respectively.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{k=1}{\overset{2}{\mathop \sum }}\,{{\delta }_{k}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\delta }_{1}}+{{\delta }_{2}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only one of the &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; effects is independent. Assuming that &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is independent, &amp;lt;math&amp;gt;{{\delta }_{2}}=-{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;. The independent effect, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, is displayed as B:B.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{j=1}{\overset{3}{\mathop \sum }}\,{{(\tau \delta )}_{jk}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{and   }\underset{k=1}{\overset{2}{\mathop \sum }}\,{{(\tau \delta )}_{jk}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{31}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{12}}+{{(\tau \delta )}_{22}}+{{(\tau \delta )}_{32}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{and   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{12}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{21}}+{{(\tau \delta )}_{22}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{31}}+{{(\tau \delta )}_{32}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last five equations given above represent four constraints as only four of the five equations are independent. Therefore, only two out of the six &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are independent, we can express the other four effects in terms of these effects. The independent effects, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1]B and A[2]B, respectively.&lt;br /&gt;
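The four independent constraints can be verified numerically: choosing any values for the two independent interaction effects determines the remaining four, and every row and column of the resulting 3x2 effect matrix sums to zero. A small check (the values 0.4 and -0.1 are arbitrary):

```python
import numpy as np

# Arbitrary values for the two independent interaction effects
td11, td21 = 0.4, -0.1

# The constraints determine the other four effects:
td31 = -(td11 + td21)  # column 1 sums to zero
td12 = -td11           # row 1 sums to zero
td22 = -td21           # row 2 sums to zero
td32 = -td31           # row 3 sums to zero (equivalently td11 + td21)

# Full 3x2 matrix of interaction effects (tau delta)_{jk}
td = np.array([[td11, td12],
               [td21, td22],
               [td31, td32]])
```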
&lt;br /&gt;
The regression version of the ANOVA model can be obtained using indicator variables. Since the block has three levels, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required, which need to be coded as shown next: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Block 1}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0\text{ } \\ &lt;br /&gt;
 &amp;amp; \text{Block 2}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{         } \\ &lt;br /&gt;
 &amp;amp; \text{Block 3}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{   }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; has three levels and two indicator variables, &amp;lt;math&amp;gt;{{x}_{3}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{4}}\,\!&amp;lt;/math&amp;gt;, are required:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{3}}=1,\text{   }{{x}_{4}}=0 \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{3}}=0,\text{   }{{x}_{4}}=1\text{           } \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{3}}=-1,\text{   }{{x}_{4}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; has two levels and can be represented using one indicator variable, &amp;lt;math&amp;gt;{{x}_{5}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Treatment Effect }{{\delta }_{1}}: &amp;amp; {{x}_{5}}=1 \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\delta }_{2}}: &amp;amp; {{x}_{5}}=-1  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; interaction will be represented by &amp;lt;math&amp;gt;{{x}_{3}}{{x}_{5}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{4}}{{x}_{5}}\,\!&amp;lt;/math&amp;gt;. The regression version of the ANOVA model can finally be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{\zeta }_{1}}\cdot {{x}_{1}}+{{\zeta }_{2}}\cdot {{x}_{2}}+{{\tau }_{1}}\cdot {{x}_{3}}+{{\tau }_{2}}\cdot {{x}_{4}}+{{\delta }_{1}}\cdot {{x}_{5}}+{{(\tau \delta )}_{11}}\cdot {{x}_{3}}{{x}_{5}}+{{(\tau \delta )}_{21}}\cdot {{x}_{4}}{{x}_{5}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In matrix notation this model can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:or:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   17.3  \\&lt;br /&gt;
   18.9  \\&lt;br /&gt;
   17.1  \\&lt;br /&gt;
   18.7  \\&lt;br /&gt;
   19.1  \\&lt;br /&gt;
   18.8  \\&lt;br /&gt;
   17.8  \\&lt;br /&gt;
   18.2  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18.3  \\&lt;br /&gt;
\end{matrix} \right]=\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; -1 &amp;amp; 0 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\zeta }_{1}}  \\&lt;br /&gt;
   {{\zeta }_{2}}  \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
   {{\delta }_{1}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{11}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{21}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{111}}  \\&lt;br /&gt;
   {{\epsilon }_{121}}  \\&lt;br /&gt;
   {{\epsilon }_{131}}  \\&lt;br /&gt;
   {{\epsilon }_{112}}  \\&lt;br /&gt;
   {{\epsilon }_{122}}  \\&lt;br /&gt;
   {{\epsilon }_{132}}  \\&lt;br /&gt;
   {{\epsilon }_{211}}  \\&lt;br /&gt;
   {{\epsilon }_{221}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{332}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, the sum of squares for the ANOVA model and the extra sum of squares for each of the factors can be calculated. These are used to calculate the mean squares that are used to obtain the test statistics.&lt;br /&gt;
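The indicator coding described above can be expressed as a small helper that produces one row of &lt;math&gt;X&lt;/math&gt; for any (block, factor A level, factor B level) combination; the first row it produces matches the first row of the matrix shown above.

```python
def x_row(block, a, b):
    """Effect-coded design-matrix row for the blocked 3x2 model:
    [1, x1, x2, x3, x4, x5, x3*x5, x4*x5]. Levels are 1-based,
    matching the coding tables in the text."""
    code3 = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}  # three-level effect coding
    code2 = {1: 1, 2: -1}                        # two-level effect coding
    x1, x2 = code3[block]
    x3, x4 = code3[a]
    x5 = code2[b]
    return [1, x1, x2, x3, x4, x5, x3 * x5, x4 * x5]
```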
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Sum of Squares for the Model====&lt;br /&gt;
&lt;br /&gt;
The model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, for the ANOVA model of this example can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; {{y}^{\prime }}[H-(\frac{1}{18})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 9.9256  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since seven effect terms (&amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;) are used in the model the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; is seven (&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=7\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The total sum of squares can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; {{y}^{\prime }}[I-(\frac{1}{18})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 10.7178  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are 18 observed response values, the number of degrees of freedom associated with the total sum of squares is 17 (&amp;lt;math&amp;gt;dof(S{{S}_{T}})=17\,\!&amp;lt;/math&amp;gt;). The error sum of squares can now be obtained:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 10.7178-9.9256 \\ &lt;br /&gt;
= &amp;amp; 0.7922  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 17-7 \\ &lt;br /&gt;
= &amp;amp; 10  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are no true replicates of the treatments (as can be seen from the design in the previous figure, where each treatment is run just once), all of the error sum of squares is the sum of squares due to lack of fit. The lack of fit arises because the model used is not a full model, since it is assumed that there are no interactions between blocks and the other effects.&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Extra Sum of Squares for the Factors====&lt;br /&gt;
&lt;br /&gt;
The sequential sum of squares for the blocks can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{Block}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}})-S{{S}_{TR}}(\mu ) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y-0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones, &amp;lt;math&amp;gt;{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the hat matrix, which is calculated using &amp;lt;math&amp;gt;{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}={{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}{{(X_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}^{\prime }{{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}})}^{-1}}X_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}^{\prime }\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the matrix containing only the first three columns of the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix. Thus&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{Block}}= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y-0 \\ &lt;br /&gt;
= &amp;amp; 0.1944-0 \\ &lt;br /&gt;
= &amp;amp; 0.1944  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent block effects, &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{Blocks}}\,\!&amp;lt;/math&amp;gt; is two (&amp;lt;math&amp;gt;dof(S{{S}_{Blocks}})=2\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Similarly, the sequential sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}},{{\tau }_{1}},{{\tau }_{2}})-S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}}) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}},{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-{{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 4.7756-0.1944 \\ &lt;br /&gt;
= &amp;amp; 4.5812  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The sequential sums of squares for the other effects are obtained as &amp;lt;math&amp;gt;S{{S}_{B}}=4.9089\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{AB}}=0.2411\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Test Statistics====&lt;br /&gt;
&lt;br /&gt;
Knowing the sum of squares, the test statistics for each of the factors can be calculated. For example, the test statistic for the main effect of the blocks is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{Block}}= &amp;amp; \frac{M{{S}_{Block}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{Block}}/dof(S{{S}_{Blocks}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{0.1944/2}{0.7922/10} \\ &lt;br /&gt;
= &amp;amp; 1.227  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 10 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{Block}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.6663 \\ &lt;br /&gt;
= &amp;amp; 0.3337  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;gt; 0.1, we fail to reject &amp;lt;math&amp;gt;{{H}_{0}}:{{\zeta }_{i}}=0\,\!&amp;lt;/math&amp;gt; and conclude that there is no significant variation in the mileage from one vehicle to the other. Statistics to test the significance of other factors can be calculated in a similar manner. The complete analysis results obtained from DOE++ for this experiment are presented in the following figure.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.14.png|thumb|center|644px|Analysis results for the experiment in the [[ANOVA_for_Designed_Experiments#Example_3| example]].]]&lt;br /&gt;
&lt;br /&gt;
==Use of Regression to Calculate Sum of Squares==&lt;br /&gt;
&lt;br /&gt;
This section explains why DOE++ uses regression for all calculations related to the sum of squares. A number of textbooks present the method of direct summation to calculate the sum of squares, but this method is only applicable to balanced designs and may give incorrect results for unbalanced designs. For example, the sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; in a balanced factorial experiment with two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, is given as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{A}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,{{n}_{b}}n{{({{{\bar{y}}}_{i..}}-{{{\bar{y}}}_{...}})}^{2}} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{b}}n}-\frac{y_{...}^{2}}{{{n}_{a}}{{n}_{b}}n}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{n}_{b}}\,\!&amp;lt;/math&amp;gt; represents the levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; represents the number of samples for each combination of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The term &amp;lt;math&amp;gt;{{\bar{y}}_{i..}}\,\!&amp;lt;/math&amp;gt; is the mean value for the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{y}_{i..}}\,\!&amp;lt;/math&amp;gt; is the sum of all observations at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{y}_{...}}\,\!&amp;lt;/math&amp;gt; is the sum of all observations.&lt;br /&gt;
&lt;br /&gt;
The analogous term to calculate &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; in the case of an unbalanced design is given as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{A}}=\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\frac{y_{...}^{2}}{{{n}_{..}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{n}_{i.}}\,\!&amp;lt;/math&amp;gt; is the number of observations at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{n}_{..}}\,\!&amp;lt;/math&amp;gt; is the total number of observations. Similarly, to calculate the sum of squares for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;, the formulas are given as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{B}}= &amp;amp; \underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}-\frac{y_{...}^{2}}{{{n}_{..}}} \\ &lt;br /&gt;
 &amp;amp; S{{S}_{AB}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{ij.}^{2}}{{{n}_{ij}}}-\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}+\frac{y_{...}^{2}}{{{n}_{..}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Applying these relations to the unbalanced data of the last table, the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{AB}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{ij.}^{2}}{{{n}_{ij}}}-\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}+\frac{y_{...}^{2}}{{{n}_{..}}} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; \left( {{6}^{2}}+{{4}^{2}}+\frac{{{(42+6)}^{2}}}{2}+{{12}^{2}} \right)-\left( \frac{{{10}^{2}}}{2}+\frac{{{60}^{2}}}{3} \right) \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; -\left( \frac{{{54}^{2}}}{3}+\frac{{{16}^{2}}}{2} \right)+\frac{{{70}^{2}}}{5} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; -22  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
which is obviously incorrect, since a sum of squares cannot be negative. For a detailed discussion of this, refer to [[DOE References|Searle (1997, 1971)]].&lt;br /&gt;
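This failure can be reproduced numerically. The sketch below applies the direct-summation formula to the cell data implied by the worked expression above (assumed cells: A1B1 = {6}, A1B2 = {4}, A2B1 = {42, 6}, A2B2 = {12}, read off the totals in the calculation).&lt;br /&gt;

```python
# Direct-summation interaction sum of squares for the unbalanced data.
# Cell observations inferred from the worked expression above.
cells = {(1, 1): [6], (1, 2): [4], (2, 1): [42, 6], (2, 2): [12]}

row = {i: sum(sum(v) for (a, b), v in cells.items() if a == i) for i in (1, 2)}
row_n = {i: sum(len(v) for (a, b), v in cells.items() if a == i) for i in (1, 2)}
col = {j: sum(sum(v) for (a, b), v in cells.items() if b == j) for j in (1, 2)}
col_n = {j: sum(len(v) for (a, b), v in cells.items() if b == j) for j in (1, 2)}
grand = sum(sum(v) for v in cells.values())
n_total = sum(len(v) for v in cells.values())

ss_ab = (sum(sum(v) ** 2 / len(v) for v in cells.values())
         - sum(row[i] ** 2 / row_n[i] for i in (1, 2))
         - sum(col[j] ** 2 / col_n[j] for j in (1, 2))
         + grand ** 2 / n_total)
print(ss_ab)  # -22.0: negative, which is impossible for a sum of squares
```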
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.6.png|thumb|center|400px|Example of an unbalanced design.]]&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
The correct sum of squares can be calculated as shown next. The &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrices for the design of the last table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   4  \\&lt;br /&gt;
   6  \\&lt;br /&gt;
   12  \\&lt;br /&gt;
   42  \\&lt;br /&gt;
\end{matrix} \right]\text{   and   }X=\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{AB}}={{y}^{\prime }}[H-(1/5)J]y-{{y}^{\prime }}[{{H}_{\tilde{\ }AB}}-(1/5)J]y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; is the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones. The matrix &amp;lt;math&amp;gt;{{H}_{\tilde{\ }AB}}\,\!&amp;lt;/math&amp;gt; can be calculated using &amp;lt;math&amp;gt;{{H}_{\tilde{\ }AB}}={{X}_{\tilde{\ }AB}}{{(X_{\tilde{\ }AB}^{\prime }{{X}_{\tilde{\ }AB}})}^{-1}}X_{\tilde{\ }AB}^{\prime }\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;{{X}_{\tilde{\ }AB}}\,\!&amp;lt;/math&amp;gt; is the design matrix, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, excluding the last column that represents the interaction effect &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;. Thus, the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{AB}}= &amp;amp; {{y}^{\prime }}[H-(1/5)J]y-{{y}^{\prime }}[{{H}_{\tilde{\ }AB}}-(1/5)J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 368-339.4286 \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 28.5714  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is the value calculated by DOE++ (see the first figure below for the experiment design and the second figure below for the analysis).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.15.png|thumb|center|471px|Unbalanced experimental design for the data in the last table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.16.png|thumb|center|471px|Analysis for the unbalanced data in the last table.]]&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=ANOVA_for_Designed_Experiments&amp;diff=65257</id>
		<title>ANOVA for Designed Experiments</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=ANOVA_for_Designed_Experiments&amp;diff=65257"/>
		<updated>2017-08-26T01:14:51Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Box-Cox Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:Doebook|5}}&lt;br /&gt;
In [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], methods were presented to model the relationship between a response and the associated factors (referred to as predictor variables in the context of regression) based on an observed data set. Such studies, where observed values of the response are used to establish an association between the response and the factors, are called &#039;&#039;observational studies&#039;&#039;. However, in the case of observational studies, it is difficult to establish a cause-and-effect relationship between the observed factors and the response. This is because a number of alternative justifications can be used to explain the observed change in the response values. For example, a regression model fitted to data on the population of cities and road accidents might show a positive regression relation. However, this relation does not imply that an increase in a city&#039;s population causes an increase in road accidents. It could be that a number of other factors such as road conditions, traffic control and the degree to which the residents of the city follow the traffic rules affect the number of road accidents in the city and the increase in the number of accidents seen in the study is caused by these factors. Since the observational study does not take the effect of these factors into account, the assumption that an increase in a city&#039;s population will lead to an increase in road accidents is not a valid one. For example, the population of a city may increase but road accidents in the city may decrease because of better traffic control. To establish a cause-and-effect relationship, the study should be conducted in such a way that the effect of all other factors is excluded from the investigation.&lt;br /&gt;
&lt;br /&gt;
The studies that enable the establishment of a cause-and-effect relationship are called &#039;&#039;experiments&#039;&#039;. In experiments the response is investigated by studying only the effect of the factor(s) of interest and excluding all other effects that may provide alternative justifications to the observed change in response. This is done in two ways. First, the levels of the factors to be investigated are carefully selected and then strictly controlled during the execution of the experiment. The aspect of selecting what factor levels should be investigated in the experiment is called the &#039;&#039;design&#039;&#039; of the experiment. The second distinguishing feature of experiments is that observations in an experiment are recorded in a random order. By doing this, it is hoped that the effect of all other factors not being investigated in the experiment will get cancelled out so that the change in the response is the result of only the investigated factors. Using these two techniques, experiments tend to ensure that alternative justifications to observed changes in the response are voided, thereby enabling the establishment of a cause-and-effect relationship between the response and the investigated factors.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Randomization&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The aspect of recording observations in an experiment in a random order is referred to as &#039;&#039;randomization&#039;&#039;. Specifically, randomization is the process of assigning the various levels of the investigated factors to the experimental units in a random fashion. An experiment is said to be &#039;&#039;completely randomized&#039;&#039; if the probability of an experimental unit being subjected to any level of a factor is equal for all the experimental units. The importance of randomization can be illustrated with an example. Consider an experiment where the effect of the speed of a lathe machine on the surface finish of a product is being investigated. In order to save time, the experimenter records surface finish values by running the lathe machine continuously and recording observations in the order of increasing speeds. The analysis of the experiment data shows that an increase in lathe speed causes a decrease in the quality of the surface finish. However, the results of the experiment are disputed by the lathe operator, who claims that he has been able to obtain better surface finish quality by operating the lathe machine at higher speeds. It is later found that the faulty results were caused by overheating of the tool used in the machine. Since the lathe was run continuously in the order of increasing speeds, the observations were recorded in the order of increasing tool temperatures. This problem could have been avoided if the experimenter had randomized the experiment and taken readings at the various lathe speeds in a random fashion. This would have required the experimenter to stop and restart the machine at every observation, thereby keeping the temperature of the tool within a reasonable range. Randomization would have ensured that the effect of the heating of the machine tool was not included in the experiment.&lt;br /&gt;
&lt;br /&gt;
==Analysis of Single Factor Experiments==&lt;br /&gt;
&lt;br /&gt;
As explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]], the analysis of observational studies involves the use of regression models. The analysis of experimental studies involves the use of analysis of variance (ANOVA) models. For a comparison of the two models see [[ANOVA_for_Designed_Experiments#Fitting_ANOVA_Models|Fitting ANOVA Models]]. In single factor experiments, ANOVA models are used to compare the mean response values at different levels of the factor. Each level of the factor is investigated to see if the response is significantly different from the response at other levels of the factor.  The analysis of single factor experiments is often referred to as &#039;&#039;one-way ANOVA&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To illustrate the use of ANOVA models in the analysis of experiments, consider a single factor experiment where the analyst wants to see if the surface finish of certain parts is affected by the speed of a lathe machine. Data is collected for three speeds (or three treatments). Each treatment is replicated four times. Therefore, this experiment design is balanced. Surface finish values recorded using randomization are shown in the following table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.1.png|thumb|center|400px|Surface finish values for three speeds of a lathe machine.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be stated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}={{\mu }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model assumes that the response at each factor level, &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, is the sum of the mean response at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, and a random error term, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. The subscript &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; denotes the factor level while the subscript &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; denotes the replicate. If there are &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of the factor and &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates at each level then &amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j=1,2,...,m\,\!&amp;lt;/math&amp;gt;. The random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are assumed to be normally and independently distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. Therefore, the response at each level can be thought of as a normally distributed population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and constant variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;. The equation given above is referred to as the &#039;&#039;means model&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The ANOVA model can also be written using &amp;lt;math&amp;gt;{{\mu }_{i}}=\mu +{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the effect due to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Such an ANOVA model is called the &#039;&#039;effects model&#039;&#039;. In the effects model, the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, represent deviations from the overall mean, &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;. Therefore, the following constraint exists on the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;s: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Fitting ANOVA Models===&lt;br /&gt;
&lt;br /&gt;
To fit ANOVA models and carry out hypothesis testing in single factor experiments, it is convenient to express the effects model in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt; (the form used for multiple linear regression models in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This can be done as shown next. Using the effects model, the ANOVA model for the single factor experiment in the first table can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean and &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment effect. There are three treatments in the first table (500, 600 and 700). Therefore, there are three treatment effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{3}}\,\!&amp;lt;/math&amp;gt;. The following constraint exists for these effects:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   } {{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the first treatment, the ANOVA model for the single factor experiment in the above table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{1j}}=\mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+0\cdot {{\tau }_{3}}+{{\epsilon }_{1j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;, the model for the first treatment is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}-0\cdot ({{\tau }_{1}}+{{\tau }_{2}})+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{or   }{{Y}_{1j}}= &amp;amp; \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Models for the second and third treatments can be obtained in a similar way. The models for the three treatments are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{First Treatment}: &amp;amp; {{Y}_{1j}}=1\cdot \mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}} \\ &lt;br /&gt;
\text{Second Treatment}: &amp;amp; {{Y}_{2j}}=1\cdot \mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{2j}} \\ &lt;br /&gt;
\text{Third Treatment}: &amp;amp; {{Y}_{3j}}=1\cdot \mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{3j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The coefficients of the treatment effects &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; can be expressed using two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the indicator variables &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, the ANOVA model for the data in the first table now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{x}_{1}}\cdot {{\tau }_{1}}+{{x}_{2}}\cdot {{\tau }_{2}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation can be rewritten by including subscripts &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; (for the level of the factor) and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; (for the replicate number) as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ij}}=\mu +{{x}_{i1}}\cdot {{\tau }_{1}}+{{x}_{i2}}\cdot {{\tau }_{2}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The equation given above represents the &amp;quot;regression version&amp;quot; of the ANOVA model.&lt;br /&gt;
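The effect coding described above can be sketched programmatically. The helper below is illustrative only (not part of any referenced software); it builds the effect-coded indicator columns for one qualitative factor and, for the lathe example, the 12 rows corresponding to 3 levels and 4 replicates, replicate by replicate.&lt;br /&gt;

```python
# Effect coding for a single qualitative factor with n_a levels:
# level i gets x_i = 1, the last level gets -1 in every indicator column,
# and all other entries are 0.  Illustrative sketch of the coding above.
def effect_code(level, n_levels):
    if level == n_levels:                       # last level: all -1
        return [-1] * (n_levels - 1)
    return [1 if k == level else 0 for k in range(1, n_levels)]

n_levels, n_reps = 3, 4                         # lathe example: 3 speeds, 4 reps
X = [[1] + effect_code(i, n_levels)             # leading 1 for the mean term
     for j in range(n_reps) for i in range(1, n_levels + 1)]

# Each indicator column sums to zero, mirroring the constraint sum(tau) = 0
for c in range(1, n_levels):
    assert sum(row[c] for row in X) == 0
```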
&lt;br /&gt;
&lt;br /&gt;
===Treat Numerical Factors as Qualitative or Quantitative?===&lt;br /&gt;
&lt;br /&gt;
It can be seen from the equation given above that in an ANOVA model each factor is treated as a qualitative factor. In the present example, the factor, lathe speed, is a quantitative factor with three levels, but the ANOVA model treats it as a qualitative factor with three levels. Therefore, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required to represent this factor. &lt;br /&gt;
&lt;br /&gt;
Note that in a regression model a variable can be treated as either a quantitative or a qualitative variable. The factor, lathe speed, would be used as a quantitative factor and represented with a single predictor variable in a regression model. For example, if a first order model were to be fitted to the data in the first table, then the regression model would take the form &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. If a second order regression model were to be fitted, the regression model would be &amp;lt;math&amp;gt;{{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}x_{i1}^{2}+{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;. Notice that unlike these regression models, the regression version of the ANOVA model does not make any assumption about the nature of the relationship between the response and the factor being investigated.&lt;br /&gt;
&lt;br /&gt;
The choice of treating a particular factor as a quantitative or qualitative variable depends on the objective of the experimenter. In the case of the data of the first table, the objective of the experimenter is to compare the levels of the factor to see if a change in the levels leads to a significant change in the response. The objective is not to make predictions on the response for a given level of the factor. Therefore, the factor is treated as a qualitative factor in this case. If the objective of the experimenter were prediction or optimization, the experimenter would focus on aspects such as the nature of the relationship between the factor, lathe speed, and the response, surface finish, and would model the factor as a quantitative factor to make accurate predictions.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model in the Form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be expanded for the three treatments and four replicates of the data in the first table as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{Y}_{11}}= &amp;amp; 6=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{11}}\text{    Level 1, Replicate 1} \\ &lt;br /&gt;
{{Y}_{21}}= &amp;amp; 13=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{21}}\text{  Level 2, Replicate 1} \\ &lt;br /&gt;
{{Y}_{31}}= &amp;amp; 23=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{31}}\text{  Level 3, Replicate 1} \\ &lt;br /&gt;
{{Y}_{12}}= &amp;amp; 13=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{12}}\text{  Level 1, Replicate 2} \\ &lt;br /&gt;
{{Y}_{22}}= &amp;amp; 16=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{22}}\text{  Level 2, Replicate 2} \\ &lt;br /&gt;
{{Y}_{32}}= &amp;amp; 20=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{32}}\text{  Level 3, Replicate 2} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; ... \\ &lt;br /&gt;
{{Y}_{34}}= &amp;amp; 18=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{34}}\text{  Level 3, Replicate 4}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The corresponding matrix notation is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{11}}  \\&lt;br /&gt;
   {{Y}_{21}}  \\&lt;br /&gt;
   {{Y}_{31}}  \\&lt;br /&gt;
   {{Y}_{12}}  \\&lt;br /&gt;
   {{Y}_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{34}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y= &amp;amp; X\beta +\epsilon  \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   23  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   16  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{11}}  \\&lt;br /&gt;
   {{\epsilon }_{21}}  \\&lt;br /&gt;
   {{\epsilon }_{31}}  \\&lt;br /&gt;
   {{\epsilon }_{12}}  \\&lt;br /&gt;
   {{\epsilon }_{22}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{34}}  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The matrices &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; are used in the calculation of the sum of squares in the next section. The data in the first table can be entered into DOE++ as shown in the figure below.  &lt;br /&gt;
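The effect coding used to build the design matrix above can be reproduced programmatically. A minimal sketch for the 3-level, 4-replicate layout, with rows ordered level 1, level 2, level 3 within each replicate as in the expanded equations:

```python
# Sketch: building the effect-coded design matrix X for the regression
# version of the ANOVA model (3 levels, 4 replicates per level).

levels, replicates = 3, 4

def effect_row(level, n_levels):
    """Intercept column plus effect (sum-to-zero) coding for one level."""
    row = [1] + [0] * (n_levels - 1)
    if level < n_levels:              # levels 1..n_levels-1 get an indicator
        row[level] = 1
    else:                             # the last level is coded as all -1
        row[1:] = [-1] * (n_levels - 1)
    return row

X = [effect_row(level, levels)
     for _ in range(replicates)
     for level in range(1, levels + 1)]

for row in X[:3]:
    print(row)                        # [1, 1, 0], [1, 0, 1], [1, -1, -1]
```

The sum-to-zero coding of the last level is what enforces the constraint that the treatment effects sum to zero.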
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.1.png|thumb|center|550px|Single factor experiment design for the data in the first table.]]&lt;br /&gt;
&lt;br /&gt;
===Hypothesis Test in Single Factor Experiments===&lt;br /&gt;
&lt;br /&gt;
The hypothesis test in single factor experiments examines the ANOVA model to see if the response at any level of the investigated factor is significantly different from that at the other levels. If the responses at all levels are not significantly different, then it can be concluded that the investigated factor does not affect the response. The test on the ANOVA model is carried out by checking to see if any of the treatment effects, &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, are non-zero. The test is similar to the test of significance of regression mentioned in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] in the context of regression models. The hypothesis statements for this test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0 \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\tau }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test for &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is carried out using the following statistic:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{TR}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the ANOVA model and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. Note that in the case of ANOVA models we use the notation &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment mean square) for the model mean square and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; (treatment sum of squares) for the model sum of squares (instead of &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression mean square, and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt;, regression sum of squares, used in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] and [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]). This is done to indicate that the model under consideration is the ANOVA model and not the regression model. The calculations to obtain &amp;lt;math&amp;gt;M{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; are identical to the calculations to obtain &amp;lt;math&amp;gt;M{{S}_{R}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{R}}\,\!&amp;lt;/math&amp;gt; explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The sum of squares to obtain the statistic &amp;lt;math&amp;gt;{{F}_{0}}\,\!&amp;lt;/math&amp;gt; can be calculated as explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. Using the data in the first table, the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.1667 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.1667 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.1667  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 232.1667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the number of levels of the factor, &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; represents the replicates at each level, &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; represents the vector of the response values, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; represents the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; represents the matrix of ones. (For details on each of these terms, refer to [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]].)&lt;br /&gt;
Since two effect terms, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are used in the regression version of the ANOVA model, the number of degrees of freedom associated with the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, is two.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be obtained as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}&lt;br /&gt;
   0.9167 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   -0.0833 &amp;amp; 0.9167 &amp;amp; . &amp;amp; . &amp;amp; -0.0833  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   -0.0833 &amp;amp; -0.0833 &amp;amp; . &amp;amp; . &amp;amp; 0.9167  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   13  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18  \\&lt;br /&gt;
\end{matrix} \right] \\ &lt;br /&gt;
= &amp;amp; 306.6667  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the previous equation, &amp;lt;math&amp;gt;I\,\!&amp;lt;/math&amp;gt; is the identity matrix. Since there are 12 data points in all, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; is 11. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;dof(S{{S}_{T}})=11\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 306.6667-232.1667 \\ &lt;br /&gt;
= &amp;amp; 74.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 11-2 \\ &lt;br /&gt;
= &amp;amp; 9  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistic can now be calculated using the equation given in [[ANOVA_for_Designed_Experiments#Hypothesis_Test_in_Single_Factor_Experiments|Hypothesis Test in Single Factor Experiments]] as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{f}_{0}}= &amp;amp; \frac{M{{S}_{TR}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{TR}}/dof(S{{S}_{TR}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{232.1667/2}{74.5/9} \\ &lt;br /&gt;
= &amp;amp; 14.0235  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
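The chain of calculations from the sums of squares to the test statistic can be verified numerically using only the values computed above:

```python
# Reproducing the F statistic from the sums of squares computed above.
ss_tr, dof_tr = 232.1667, 2      # treatment sum of squares and its dof
ss_t, dof_t = 306.6667, 11       # total sum of squares and its dof

ss_e = ss_t - ss_tr              # error sum of squares = 74.5
dof_e = dof_t - dof_tr           # 9 degrees of freedom

ms_tr = ss_tr / dof_tr           # treatment mean square
ms_e = ss_e / dof_e              # error mean square
f0 = ms_tr / ms_e

print(f"SS_E = {ss_e:.4f}, f0 = {f0:.4f}")   # f0 is approximately 14.02
```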
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value for the statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 9 degrees of freedom in the denominator is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{f}_{0}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.9983 \\ &lt;br /&gt;
= &amp;amp; 0.0017  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that a change in the lathe speed has a significant effect on the surface finish. DOE++ displays these results in the ANOVA table, as shown in the figure below. The values of S and R-sq are the standard error and the coefficient of determination for the model, respectively. These values are explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]] and indicate how well the model fits the data. The values in the figure below indicate that the fit of the ANOVA model is fair. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.2.png|thumb|center|650px|ANOVA table for the data in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the &#039;&#039;i&#039;&#039;&amp;lt;sup&amp;gt;th&amp;lt;/sup&amp;gt; Treatment Mean===&lt;br /&gt;
&lt;br /&gt;
The response at each treatment of a single factor experiment can be assumed to be a normal population with a mean of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt; provided that the error terms can be assumed to be normally distributed. A point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is the average response at each treatment, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt;. Since this is a sample average, the associated variance is &amp;lt;math&amp;gt;{{\sigma }^{2}}/{{m}_{i}}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;{{m}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of replicates at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment. Therefore, the confidence interval on &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt; is based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution. Recall from [[Statistical_Background_on_DOE| Statistical Background on DOE]] (inference on population mean when variance is unknown) that: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}/{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{M{{S}_{E}}/{{m}_{i}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
has a &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with degrees of freedom &amp;lt;math&amp;gt;=dof(S{{S}_{E}})\,\!&amp;lt;/math&amp;gt;. Therefore, a 100 (&amp;lt;math&amp;gt;1-\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment mean, &amp;lt;math&amp;gt;{{\mu }_{i}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, for the first treatment of the lathe speed we have:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}= &amp;amp; {{{\bar{y}}}_{1\cdot }} \\ &lt;br /&gt;
= &amp;amp; \frac{6+13+7+8}{4} \\ &lt;br /&gt;
= &amp;amp; 8.5  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In DOE++, this value is displayed as the Estimated Mean for the first level, as shown in the Data Summary table in the figure below. The value displayed as the standard deviation for this level is simply the sample standard deviation calculated using the observations corresponding to this level.  The 90% confidence interval for this treatment is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{M{{S}_{E}}}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833\sqrt{\frac{(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 1.833(1.44) \\ &lt;br /&gt;
= &amp;amp; 8.5\pm 2.64  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}\,\!&amp;lt;/math&amp;gt; are 5.9 and 11.1, respectively. &lt;br /&gt;
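The interval above can be reproduced with a few lines of arithmetic, using the level 1 responses given in the text and the error mean square from the ANOVA:

```python
import math

# Reproducing the 90% confidence interval on the first treatment mean.
y1 = [6, 13, 7, 8]                 # responses at level 1 (from the text)
m = len(y1)
y1_bar = sum(y1) / m               # 8.5

ms_e = 74.5 / 9                    # error mean square from the ANOVA
t_crit = 1.833                     # t_{0.05, 9}

half_width = t_crit * math.sqrt(ms_e / m)
lo, hi = y1_bar - half_width, y1_bar + half_width
print(f"90% CI on mu_1: ({lo:.2f}, {hi:.2f})")   # roughly (5.9, 11.1)
```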
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.3.png|thumb|center|650px|Data Summary table for the single factor experiment in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Confidence Interval on the Difference in Two Treatment Means===&lt;br /&gt;
&lt;br /&gt;
The confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is used to compare two levels of the factor at a given significance. If the confidence interval does not include the value of zero, it is concluded that the two levels of the factor are significantly different. The point estimator of &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt;. The variance for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})= &amp;amp; var({{{\bar{y}}}_{i\cdot }})+var({{{\bar{y}}}_{j\cdot }}) \\ &lt;br /&gt;
= &amp;amp; {{\sigma }^{2}}/{{m}_{i}}+{{\sigma }^{2}}/{{m}_{j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For balanced designs all &amp;lt;math&amp;gt;{{m}_{i}}=m\,\!&amp;lt;/math&amp;gt;. Therefore:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})=2{{\sigma }^{2}}/m\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The standard deviation for &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!&amp;lt;/math&amp;gt; can be obtained by taking the square root of &amp;lt;math&amp;gt;var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})\,\!&amp;lt;/math&amp;gt; and is referred to as the pooled standard error:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for the difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{T}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2{{{\hat{\sigma }}}^{2}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2M{{S}_{E}}/m}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then a 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence interval on the difference in two treatment means, &amp;lt;math&amp;gt;{{\mu }_{i}}-{{\mu }_{j}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, an estimate of the difference in the first and second treatment means of the lathe speed, &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt;, is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{{\hat{\mu }}}_{1}}-{{{\hat{\mu }}}_{2}}= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }} \\ &lt;br /&gt;
= &amp;amp; 8.5-13.25 \\ &lt;br /&gt;
= &amp;amp; -4.75  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The pooled standard error for this difference is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Pooled\text{ }Std.\text{ }Error= &amp;amp; \sqrt{var({{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }})} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2{{{\hat{\sigma }}}^{2}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{2M{{S}_{E}}/m} \\ &lt;br /&gt;
= &amp;amp; \sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; 2.0344  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To test &amp;lt;math&amp;gt;{{H}_{0}}:{{\mu }_{1}}-{{\mu }_{2}}=0\,\!&amp;lt;/math&amp;gt;, the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}-({{\mu }_{1}}-{{\mu }_{2}})}{\sqrt{2M{{S}_{E}}/m}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75-(0)}{\sqrt{\tfrac{2(74.5/9)}{4}}} \\ &lt;br /&gt;
= &amp;amp; \frac{-4.75}{2.0344} \\ &lt;br /&gt;
= &amp;amp; -2.3348  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In DOE++, the value of the statistic is displayed in the Mean Comparisons table under the column T Value as shown in the figure below. The 90% confidence interval on the difference &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833\sqrt{\frac{2(74.5/9)}{4}} \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 1.833(2.0344) \\ &lt;br /&gt;
= &amp;amp; -4.75\pm 3.729  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
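Both the t statistic and the interval for this pair of treatments follow directly from the quantities already computed:

```python
import math

# Reproducing the t statistic and 90% confidence interval for mu_1 - mu_2.
diff = 8.5 - 13.25                 # ybar_1. - ybar_2. = -4.75
m = 4                              # replicates per level (balanced design)
ms_e = 74.5 / 9                    # error mean square
t_crit = 1.833                     # t_{0.05, 9}

pooled_se = math.sqrt(2 * ms_e / m)
t0 = diff / pooled_se
lo, hi = diff - t_crit * pooled_se, diff + t_crit * pooled_se
print(f"t0 = {t0:.4f}, 90% CI: ({lo:.3f}, {hi:.3f})")
```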
&lt;br /&gt;
&lt;br /&gt;
Hence the 90% limits on &amp;lt;math&amp;gt;{{\mu }_{1}}-{{\mu }_{2}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-8.479\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-1.021\,\!&amp;lt;/math&amp;gt;, respectively. These values are displayed under the Low CI and High CI columns in the following figure. Since the confidence interval for this pair of means does not include zero, it can be concluded that these means are significantly different at 90% confidence. This conclusion can also be arrived at using the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value, noting that the hypothesis is two-sided. The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to the statistic &amp;lt;math&amp;gt;{{t}_{0}}=-2.3348\,\!&amp;lt;/math&amp;gt;, based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 9 degrees of freedom, is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 2\times (1-P(T\le |{{t}_{0}}|)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-P(T\le 2.3348)) \\ &lt;br /&gt;
= &amp;amp; 2\times (1-0.9778) \\ &lt;br /&gt;
= &amp;amp; 0.0444  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, the means are significantly different at 90% confidence. Bounds on the difference between other treatment pairs can be obtained in a similar manner and it is concluded that all treatments are significantly different.&lt;br /&gt;
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
&lt;br /&gt;
Plots of residuals, &amp;lt;math&amp;gt;{{e}_{ij}}\,\!&amp;lt;/math&amp;gt;, similar to the ones discussed in the previous chapters on regression, are used to ensure that the assumptions associated with the ANOVA model are not violated.  The ANOVA model assumes that the random error terms, &amp;lt;math&amp;gt;{{\epsilon }_{ij}}\,\!&amp;lt;/math&amp;gt;, are normally and independently distributed with the same variance for each treatment. The normality assumption can be checked by obtaining a normal probability plot of the residuals. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.4.png|thumb|center|644px|Mean Comparisons table for the data in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equality of variance is checked by plotting residuals against the treatments and the treatment averages, &amp;lt;math&amp;gt;{{\bar{y}}_{i\cdot }}\,\!&amp;lt;/math&amp;gt; (also referred to as fitted values), and inspecting the spread in the residuals. If a pattern is seen in these plots, this indicates the need for a suitable transformation on the response that will ensure variance equality. Box-Cox transformations are discussed in the next section. To check for independence of the random error terms, residuals are plotted against time or run order to ensure that no pattern exists in these plots. Residual plots for the given example are shown in the following two figures. The plots show that the assumptions associated with the ANOVA model are not violated.&lt;br /&gt;
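The residuals these plots are built from are simply the deviations of each observation from its treatment average. A minimal sketch, in which level 1 uses the responses given in the text while the level 2 and level 3 replicates are hypothetical placeholders:

```python
# Sketch: computing the residuals e_ij = y_ij - ybar_i. used in the plots.
data = {
    1: [6, 13, 7, 8],
    2: [13, 16, 14, 10],      # hypothetical placeholder values
    3: [23, 20, 19, 18],      # hypothetical placeholder values
}

residuals = {}
for level, ys in data.items():
    fitted = sum(ys) / len(ys)          # fitted value = treatment average
    residuals[level] = [y - fitted for y in ys]

for level, es in residuals.items():
    print(level, [round(e, 2) for e in es])
```

By construction the residuals within each treatment sum to zero, so any systematic spread differences between treatments stand out in the plots.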
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.5.png|thumb|center|550px|Normal probability plot of residuals for the single factor experiment in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.6.png|thumb|center|550px|Plot of residuals against fitted values for the single factor experiment in the first table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Box-Cox Method===&lt;br /&gt;
&lt;br /&gt;
Transformations on the response may be used when residual plots for an experiment show a pattern. This indicates that the equality of variance does not hold for the residuals of the given model. The Box-Cox method can be used to automatically identify a suitable power transformation for the data based on the following relationship:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}^{*}}={{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is determined using the given data such that &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is minimized. The values of &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; are not used directly because the &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values calculated for different values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; would not be comparable. For example, for &amp;lt;math&amp;gt;\lambda =0\,\!&amp;lt;/math&amp;gt; all response values would become 1. Therefore, the following scaled relationship is used to obtain &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}^{\lambda }}=\{\begin{matrix}&lt;br /&gt;
   \frac{{{Y}^{\lambda }}-1}{\lambda {{{\dot{y}}}^{\lambda -1}}}  \\&lt;br /&gt;
   \dot{y}\ln y  \\&lt;br /&gt;
\end{matrix}\text{    }\begin{matrix}&lt;br /&gt;
   \lambda \ne 0\begin{matrix}&lt;br /&gt;
     \\&lt;br /&gt;
     \\&lt;br /&gt;
\end{matrix}  \\&lt;br /&gt;
   \lambda =0  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\dot{y}={{\ln }^{-1}}[(1/n)\sum \ln y]\,\!&amp;lt;/math&amp;gt; is the geometric mean of the response values.&lt;br /&gt;
Once all &amp;lt;math&amp;gt;{{Y}^{\lambda }}\,\!&amp;lt;/math&amp;gt; values are obtained for a value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, the corresponding &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for these values is obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;. The process is repeated for a number of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values to obtain a plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then the value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is selected as the required transformation for the given data. DOE++ plots &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values because the range of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values is large; otherwise, all values could not be displayed on the same plot. The software searches for the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value between &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt;, because larger values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are usually not meaningful. DOE++ also displays a recommended transformation based on the best &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value obtained, as per the table below.&lt;br /&gt;
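The scaled transformation above can be sketched directly from its definition. This is a minimal illustration: the response vector used here is the level 1 data only, standing in for a full data set.

```python
import math

# Sketch of the scaled Box-Cox transformation described above.
def boxcox(y, lam):
    """Return the scaled transformed values Y^(lambda)."""
    n = len(y)
    # geometric mean: y_dot = exp((1/n) * sum(ln y))
    y_dot = math.exp(sum(math.log(v) for v in y) / n)
    if lam == 0:
        return [y_dot * math.log(v) for v in y]
    return [(v ** lam - 1) / (lam * y_dot ** (lam - 1)) for v in y]

y = [6.0, 13.0, 7.0, 8.0]          # level 1 responses, for illustration
for lam in (-1, 0, 0.5, 1, 2):
    print(lam, [round(v, 3) for v in boxcox(y, lam)])
```

In practice, SS_E would be computed for each transformed vector over a grid of lambda values and the minimizing lambda selected. Note that for lambda = 1 the transformation reduces to a simple shift of the data.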
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.2.png|thumb|center|400px|Recommended Box-Cox power transformations.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Confidence intervals on the selected &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values are also available. Let &amp;lt;math&amp;gt;S{{S}_{E}}(\lambda )\,\!&amp;lt;/math&amp;gt; be the value of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; corresponding to the selected value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. Then, to calculate the 100 (1- &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;) percent confidence intervals on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, we need to calculate &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}^{*}}=S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The required limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are the two values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; corresponding to the value &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; (on the plot of &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;). If the limits for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; do not include the value of one, then the transformation is applicable for the given data.&lt;br /&gt;
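The cutoff value SS* is a one-line calculation. In this sketch the minimum SS_E(lambda) is a hypothetical placeholder; the degrees of freedom and critical t value are those from the lathe speed ANOVA.

```python
# Computing the cutoff SS* used to read confidence limits for lambda
# off the SS_E vs lambda plot.

ss_e_min = 50.0        # hypothetical SS_E at the best lambda
dof_e = 9              # error degrees of freedom from the ANOVA
t_crit = 1.833         # t_{0.05, 9} for a 90% interval

ss_star = ss_e_min * (1 + t_crit ** 2 / dof_e)
print(f"SS* = {ss_star:.3f}")
```

The two lambda values at which the SS_E curve crosses SS* are the confidence limits.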
Note that the power transformations are not defined for response values that are negative or zero. DOE++ deals with negative and zero response values using the following equations (that involve addition of a suitable quantity to all of the response values if a zero or negative response value is encountered). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; y(i)= &amp;amp; y(i)+\left| {{y}_{\min }} \right|\times 1.1\text{        Negative Response} \\ &lt;br /&gt;
 &amp;amp; y(i)= &amp;amp; y(i)+1\text{                          Zero Response}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here &amp;lt;math&amp;gt;{{y}_{\min }}\,\!&amp;lt;/math&amp;gt; represents the minimum response value and &amp;lt;math&amp;gt;\left| {{y}_{\min }} \right|\,\!&amp;lt;/math&amp;gt; represents the absolute value of the minimum response. &lt;br /&gt;
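The two adjustment rules above can be captured in a small helper. This is a hypothetical sketch mirroring the equations, not DOE++'s code; note that the shift is applied to all of the response values:

```python
import numpy as np

def shift_response(y):
    """Shift all responses so that every value is positive, per the rules above."""
    y = np.asarray(y, dtype=float)
    if (y < 0).any():
        return y + abs(y.min()) * 1.1   # negative response: add 1.1 * |y_min|
    if (y == 0).any():
        return y + 1.0                  # zero response: add 1
    return y                            # all positive: no shift needed
```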
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
To illustrate the Box-Cox method, consider the experiment given in the first table. Transformed response values for various values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; can be calculated using the equation for &amp;lt;math&amp;gt;{Y}^{\lambda}\,\!&amp;lt;/math&amp;gt; given in [[ANOVA_for_Designed_Experiments#Box-Cox_Method|Box-Cox Method]]. Knowing the hat matrix, &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values corresponding to each of these &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values can easily be obtained using &amp;lt;math&amp;gt;{{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; values calculated for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values between &amp;lt;math&amp;gt;-5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;5\,\!&amp;lt;/math&amp;gt; for the given data are shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \lambda  &amp;amp; {} &amp;amp; S{{S}_{E}} &amp;amp; \ln S{{S}_{E}}  \\&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   -5 &amp;amp; {} &amp;amp; 5947.8 &amp;amp; 8.6908  \\&lt;br /&gt;
   -4 &amp;amp; {} &amp;amp; 1946.4 &amp;amp; 7.5737  \\&lt;br /&gt;
   -3 &amp;amp; {} &amp;amp; 696.5 &amp;amp; 6.5461  \\&lt;br /&gt;
   -2 &amp;amp; {} &amp;amp; 282.2 &amp;amp; 5.6425  \\&lt;br /&gt;
   -1 &amp;amp; {} &amp;amp; 135.8 &amp;amp; 4.9114  \\&lt;br /&gt;
   0 &amp;amp; {} &amp;amp; 83.9 &amp;amp; 4.4299  \\&lt;br /&gt;
   1 &amp;amp; {} &amp;amp; 74.5 &amp;amp; 4.3108  \\&lt;br /&gt;
   2 &amp;amp; {} &amp;amp; 101.0 &amp;amp; 4.6154  \\&lt;br /&gt;
   3 &amp;amp; {} &amp;amp; 190.4 &amp;amp; 5.2491  \\&lt;br /&gt;
   4 &amp;amp; {} &amp;amp; 429.5 &amp;amp; 6.0627  \\&lt;br /&gt;
   5 &amp;amp; {} &amp;amp; 1057.6 &amp;amp; 6.9638  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A plot of &amp;lt;math&amp;gt;\ln S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; for various &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values, as obtained from DOE++, is shown in the following figure. The value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; that gives the minimum &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is identified as 0.7841. The &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt; value corresponding to this value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is 73.74. A 90% confidence interval on this &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; value is calculated as follows. &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; can be obtained as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}^{*}}= &amp;amp; S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{t_{0.05,9}^{2}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 73.74\left( 1+\frac{{{1.833}^{2}}}{9} \right) \\ &lt;br /&gt;
= &amp;amp; 101.27  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\ln S{{S}^{*}}=4.6178\,\!&amp;lt;/math&amp;gt;. The &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values corresponding to this value from the following figure are &amp;lt;math&amp;gt;-0.4686\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0052\,\!&amp;lt;/math&amp;gt;. Therefore, the 90% confidence limits on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-0.4686\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2.0052\,\!&amp;lt;/math&amp;gt;. Since the confidence limits include the value of 1, this indicates that a transformation is not required for the data in the first table.&lt;br /&gt;
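The &amp;lt;math&amp;gt;S{{S}^{*}}\,\!&amp;lt;/math&amp;gt; computation above takes only a few lines of Python (here `scipy` supplies the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; quantile; the numbers are those of this example):

```python
from scipy.stats import t

def ss_star(sse_at_best_lambda, dof, alpha=0.10):
    """Threshold SS* whose two crossings of the SS_E(lambda) curve
    give the 100(1 - alpha)% confidence limits on lambda."""
    t_crit = t.ppf(1.0 - alpha / 2.0, dof)   # t_{alpha/2, dof}
    return sse_at_best_lambda * (1.0 + t_crit**2 / dof)

ss = ss_star(73.74, 9, alpha=0.10)   # approximately 101.27, as in the example
```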
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:doe6.7.png|thumb|center|400px|Box-Cox power transformation plot for the data in the first table.]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Experiments with Several Factors - Factorial Experiments==&lt;br /&gt;
&lt;br /&gt;
Experiments with two or more factors are encountered frequently. The best way to carry out such experiments is with factorial experiments, in which all combinations of the factor levels are investigated in each replicate of the experiment. Factorial experiments are the only means to completely and systematically study interactions between factors, in addition to identifying significant factors. One-factor-at-a-time experiments (where each factor is investigated separately while all the remaining factors are held constant) do not reveal the interaction effects between the factors. Further, full randomization is not possible in one-factor-at-a-time experiments.&lt;br /&gt;
&lt;br /&gt;
To illustrate factorial experiments, consider an experiment where the response is investigated for two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. Assume that the response is studied at two levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; representing the lower level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; representing the higher level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;. Similarly, let &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; represent the two levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; that are being investigated in this experiment. Since there are two factors with two levels, a total of &amp;lt;math&amp;gt;2\times 2=4\,\!&amp;lt;/math&amp;gt; combinations exist (&amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{low}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt; - &amp;lt;math&amp;gt;{{B}_{\text{high}}}\,\!&amp;lt;/math&amp;gt;). Thus, four runs are required for each replicate if a factorial experiment is to be carried out in this case. Assume that the response values for each of these four possible combinations are obtained as shown in the third table.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.3.png|thumb|center|400px|Two-factor factorial experiment.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.8.png|thumb|center|400px|Interaction plot for the data in the third table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Investigating Factor Effects===&lt;br /&gt;
&lt;br /&gt;
The effect of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response can be obtained by taking the difference between the average response when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is high and the average response when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is low. The change in the response due to a change in the level of a factor is called the main effect of the factor. The main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; as per the response values in the third table is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{45+55}{2}-\frac{25+35}{2} \\ &lt;br /&gt;
= &amp;amp; 50-30 \\ &lt;br /&gt;
= &amp;amp; 20  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, when &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from the lower level to the higher level, the response increases by 20 units. A plot of the response for the two levels of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at different levels of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is shown in the figure above. The plot shows that change in the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; leads to an increase in the response by 20 units regardless of the level of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. Therefore, no interaction exists in this case as indicated by the parallel lines on the plot. The main effect of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be obtained as: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
B= &amp;amp; Average\text{ }response\text{ }at\text{ }{{B}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{B}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{35+55}{2}-\frac{25+45}{2} \\ &lt;br /&gt;
= &amp;amp; 45-35 \\ &lt;br /&gt;
= &amp;amp; 10  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
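Taking the response values from the worked calculations above (25 and 35 at &amp;lt;math&amp;gt;{{A}_{\text{low}}}\,\!&amp;lt;/math&amp;gt;, 45 and 55 at &amp;lt;math&amp;gt;{{A}_{\text{high}}}\,\!&amp;lt;/math&amp;gt;), the two main effects reduce to a few lines of Python:

```python
import numpy as np

# Responses from the third table: rows index the level of A, columns the level of B.
y = np.array([[25.0, 35.0],   # A low:  (B low, B high)
              [45.0, 55.0]])  # A high: (B low, B high)

effect_A = y[1, :].mean() - y[0, :].mean()   # main effect of A: 50 - 30 = 20
effect_B = y[:, 1].mean() - y[:, 0].mean()   # main effect of B: 45 - 35 = 10
```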
&lt;br /&gt;
&lt;br /&gt;
===Investigating Interactions===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now assume that the response values for each of the four treatment combinations were obtained as shown in the fourth table. The main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; in this case is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}-Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{40+10}{2}-\frac{20+30}{2} \\ &lt;br /&gt;
= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.4.png|thumb|center|400px|Two factor factorial experiment.]]&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
It appears that &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; does not have an effect on the response. However, a plot of the response of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at different levels of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; shows that the response does change with the levels of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; but the effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response is dependent on the level of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (see the figure below). Therefore, an interaction between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; exists in this case (as indicated by the non-parallel lines of the figure). The interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be calculated as follows: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.9.png|thumb|center|400px|Interaction plot for the data in the fourth table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
AB= &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{high}}}\text{-}{{B}_{\text{high}}}\text{ }and\text{ }{{A}_{\text{low}}}\text{-}{{B}_{\text{low}}}- \\ &lt;br /&gt;
 &amp;amp; Average\text{ }response\text{ }at\text{ }{{A}_{\text{low}}}\text{-}{{B}_{\text{high}}}\text{ }and\text{ }{{A}_{\text{high}}}\text{-}{{B}_{\text{low}}} \\ &lt;br /&gt;
= &amp;amp; \frac{10+20}{2}-\frac{40+30}{2} \\ &lt;br /&gt;
= &amp;amp; -20  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that in this case, if a one-factor-at-a-time experiment were used to investigate the effect of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; on the response, it would lead to incorrect conclusions. For example, if the response at factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was studied by holding &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; constant at its lower level, then the main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; would be obtained as &amp;lt;math&amp;gt;40-20=20\,\!&amp;lt;/math&amp;gt;, indicating that the response increases by 20 units when the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from low to high. On the other hand, if the response at factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was studied by holding &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; constant at its higher level, then the main effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; would be obtained as &amp;lt;math&amp;gt;10-30=-20\,\!&amp;lt;/math&amp;gt;, indicating that the response decreases by 20 units when the level of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is changed from low to high.&lt;br /&gt;
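The fourth table's calculations, including the two contradictory one-factor-at-a-time estimates of the effect of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, can be checked the same way:

```python
import numpy as np

# Responses from the fourth table: rows index the level of A, columns the level of B.
y = np.array([[20.0, 30.0],   # A low:  (B low, B high)
              [40.0, 10.0]])  # A high: (B low, B high)

effect_A = y[1, :].mean() - y[0, :].mean()                       # 0: A looks inert
effect_AB = (y[0, 0] + y[1, 1]) / 2 - (y[0, 1] + y[1, 0]) / 2    # interaction: -20
a_at_b_low  = y[1, 0] - y[0, 0]   #  20: one-factor-at-a-time estimate, B held low
a_at_b_high = y[1, 1] - y[0, 1]   # -20: one-factor-at-a-time estimate, B held high
```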
&lt;br /&gt;
==Analysis of General Factorial Experiments==&lt;br /&gt;
&lt;br /&gt;
In DOE++, factorial experiments are referred to as &#039;&#039;factorial designs&#039;&#039;. The experiments explained in this section are referred to as &#039;&#039;general factorial designs&#039;&#039;. This is done to distinguish these experiments from the other factorial designs supported by DOE++ (see the figure below). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.10.png|thumb|center|518px|Factorial experiments available in DOE++.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The other designs (such as the two level full factorial designs that are explained in [[Two_Level_Factorial_Experiments| Two Level Factorial Experiments]]) are special cases of these experiments in which factors are limited to a specified number of levels. The ANOVA model for the analysis of factorial experiments is formulated as shown next. Assume a factorial experiment in which the effect of two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, on the response is being investigated. Let there be &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{n}_{b}}\,\!&amp;lt;/math&amp;gt; levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The ANOVA model for this experiment can be stated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\tau }_{i}}+{{\delta }_{j}}+{{(\tau \delta )}_{ij}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean effect&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;i=1,2,...,{{n}_{a}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;j=1,2,...,{{n}_{b}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*&amp;lt;math&amp;gt;{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt; represents the random error terms (which are assumed to be normally distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*the subscript &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; denotes the replicate number, with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; replicates per treatment (&amp;lt;math&amp;gt;k=1,2,...,m\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represent deviations from the overall mean, the following constraints exist: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{j=1}{\overset{{{n}_{b}}}{\mathop \sum }}\,{{\delta }_{j}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{j=1}{\overset{{{n}_{b}}}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Hypothesis Tests in General Factorial Experiments===&lt;br /&gt;
These tests are used to check whether each of the factors investigated in the experiment is significant or not. For the previous example, with two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and their interaction, &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;, the statements for the hypothesis tests can be formulated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \begin{matrix}&lt;br /&gt;
   1. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0\text{    (Main effect of }A\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\tau }_{i}}\ne 0\text{    for at least one }i \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   2. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\delta }_{1}}={{\delta }_{2}}=...={{\delta }_{{{n}_{b}}}}=0\text{    (Main effect of }B\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\delta }_{j}}\ne 0\text{    for at least one }j \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   3. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{12}}=...={{(\tau \delta )}_{{{n}_{a}}{{n}_{b}}}}=0\text{    (Interaction }AB\text{ is absent)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{(\tau \delta )}_{ij}}\ne 0\text{    for at least one }ij  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistics for the three tests are as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::1)&amp;lt;math&amp;gt;{(F_{0})}_{A} = \frac{MS_{A}}{MS_{E}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{A}\,\!&amp;lt;/math&amp;gt; is the mean square due to factor &amp;lt;math&amp;gt;{A}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{MS_E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::2)&amp;lt;math&amp;gt;{(F_{0})_{B}} = \frac{MS_B}{MS_E}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{B}\,\!&amp;lt;/math&amp;gt; is the mean square due to factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;MS_{E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::3)&amp;lt;math&amp;gt;{(F_{0})_{AB}} = \frac{MS_{AB}}{MS_{E}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;MS_{AB}\,\!&amp;lt;/math&amp;gt; is the mean square due to interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;MS_{E}\,\!&amp;lt;/math&amp;gt; is the error mean square.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The tests are identical to the partial &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; test explained in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]]. The sums of squares for these tests (used to obtain the mean squares) are calculated by splitting the model sum of squares into the extra sum of squares due to each factor. The extra sum of squares calculated for each of the factors may be either partial or sequential. For the present example, if the extra sum of squares used is sequential, then the model sum of squares can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{TR}}=S{{S}_{A}}+S{{S}_{B}}+S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; represents the model sum of squares, &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S{{S}_{B}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; represents the sequential sum of squares due to the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
The mean squares are obtained by dividing the sum of squares by the associated degrees of freedom. Once the mean squares are known the test statistics can be calculated. For example, the test statistic to test the significance of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (or the hypothesis &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt;) can then be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{F}_{0}})}_{A}}= &amp;amp; \frac{M{{S}_{A}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{A}}/dof(S{{S}_{A}})}{S{{S}_{E}}/dof(S{{S}_{E}})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Similarly, the test statistics to test the significance of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and of the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be obtained, respectively, as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{F}_{0}})}_{B}}= &amp;amp; \frac{M{{S}_{B}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{B}}/dof(S{{S}_{B}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
{{({{F}_{0}})}_{AB}}= &amp;amp; \frac{M{{S}_{AB}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{AB}}/dof(S{{S}_{AB}})}{S{{S}_{E}}/dof(S{{S}_{E}})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is recommended to conduct the test for interactions before conducting the test for the main effects. This is because, if an interaction is present, then the main effect of the factor depends on the level of the other factors and looking at the main effect is of little value. However, if the interaction is absent then the main effects become important.&lt;br /&gt;
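For a balanced design such as the ones in this chapter, the sums of squares and the three &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; statistics can be computed directly from cell means. The helper below is a minimal sketch under that balanced-data assumption, not DOE++'s internals:

```python
import numpy as np

def twoway_anova(y):
    """Balanced two-factor ANOVA. y has shape (a, b, m):
    a levels of A, b levels of B, m replicates per cell."""
    a, b, m = y.shape
    grand = y.mean()
    ss_a = b * m * ((y.mean(axis=(1, 2)) - grand) ** 2).sum()   # SS_A
    ss_b = a * m * ((y.mean(axis=(0, 2)) - grand) ** 2).sum()   # SS_B
    cell = y.mean(axis=2)                                       # cell means
    ss_tr = m * ((cell - grand) ** 2).sum()                     # model SS
    ss_ab = ss_tr - ss_a - ss_b                                 # SS_AB
    ss_e = ((y - cell[:, :, None]) ** 2).sum()                  # SS_E
    ms_e = ss_e / (a * b * (m - 1))                             # error mean square
    f = {"A": (ss_a / (a - 1)) / ms_e,
         "B": (ss_b / (b - 1)) / ms_e,
         "AB": (ss_ab / ((a - 1) * (b - 1))) / ms_e}
    return {"A": ss_a, "B": ss_b, "AB": ss_ab, "E": ss_e}, f
```

For balanced data the sequential and partial extra sums of squares coincide, so the split of the model sum of squares above matches the sequential form used in the text.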
&lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
Consider an experiment to investigate the effect of speed and type of fuel additive used on the mileage of a sports utility vehicle. Three speeds and two types of fuel additives are investigated. Each of the treatment combinations is replicated three times. The mileage values observed are displayed in the fifth table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.5.png|thumb|center|400px|Mileage data for different speeds and fuel additive types.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The experimental design for the data in the fifth table is shown in the figure below. In the figure, the factor Speed is represented as factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and the factor Fuel Additive is represented as factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The experimenter would like to investigate if speed, fuel additive or the interaction between speed and fuel additive affects the mileage of the sports utility vehicle. In other words, the following hypotheses need to be tested:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \begin{matrix}&lt;br /&gt;
   1. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}={{\tau }_{3}}=0\text{   (No main effect of factor }A\text{, speed)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\tau }_{i}}\ne 0\text{    for at least one }i \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   2. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{\delta }_{1}}={{\delta }_{2}}=0\text{    (No main effect of factor }B\text{, fuel additive)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{\delta }_{j}}\ne 0\text{    for at least one }j \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   3. &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{12}}=...={{(\tau \delta )}_{32}}=0\text{    (No interaction }AB\text{)} \\ &lt;br /&gt;
 &amp;amp; \begin{matrix}&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}{{H}_{1}}:{{(\tau \delta )}_{ij}}\ne 0\text{    for at least one }ij  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistics for the three tests are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::1.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{A}}=\frac{M{{S}_{A}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{A}}\,\!&amp;lt;/math&amp;gt; is the mean square for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::2.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{B}}=\frac{M{{S}_{B}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{B}}\,\!&amp;lt;/math&amp;gt; is the mean square for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::3.&amp;lt;math&amp;gt;{{({{F}_{0}})}_{AB}}=\frac{M{{S}_{AB}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
:::where &amp;lt;math&amp;gt;M{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; is the mean square for interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.11.png|thumb|center|639px|Experimental design for the data in the fifth table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ANOVA model for this experiment can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\tau }_{i}}+{{\delta }_{j}}+{{(\tau \delta )}_{ij}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th treatment of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed) with &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; =1, 2, 3; &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th treatment of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (fuel additive) with &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; =1, 2; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect. In order to calculate the test statistics, it is convenient to express the ANOVA model of the equation given above in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;. This can be done as explained next.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; represent deviations from the overall mean, the following constraints exist.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or    }{{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\tau }_{i}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{1}}={{\tau }_{2}}=0\,\!&amp;lt;/math&amp;gt;.) DOE++ displays only the independent effects because only these effects are important to the analysis. The independent effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1] and A[2] respectively because these are the effects associated with factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed).&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{j=1}{\overset{2}{\mathop \sum }}\,{{\delta }_{j}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or    }{{\delta }_{1}}+{{\delta }_{2}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only one of the &amp;lt;math&amp;gt;{{\delta }_{j}}\,\!&amp;lt;/math&amp;gt; effects is independent. Assuming that &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is independent, &amp;lt;math&amp;gt;{{\delta }_{2}}=-{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effect as &amp;lt;math&amp;gt;{{H}_{0}}:{{\delta }_{1}}=0\,\!&amp;lt;/math&amp;gt;.) The independent effect &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is displayed as B:B in DOE++.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{and   }\underset{j=1}{\overset{2}{\mathop \sum }}\,{{(\tau \delta )}_{ij}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{or   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{31}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{12}}+{{(\tau \delta )}_{22}}+{{(\tau \delta )}_{32}}= &amp;amp; 0 \\ &lt;br /&gt;
\text{and   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{12}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{22}}= &amp;amp; 0 \\ &lt;br /&gt;
{{(\tau \delta )}_{31}}+{{(\tau \delta )}_{32}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last five equations given above represent four constraints, as only four of these five equations are independent. Therefore, only two out of the six &amp;lt;math&amp;gt;{{(\tau \delta )}_{ij}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are independent, the other four effects can be expressed in terms of these effects. (The null hypothesis to test the significance of interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{(\tau \delta )}_{11}}={{(\tau \delta )}_{21}}=0\,\!&amp;lt;/math&amp;gt;.) The effects &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are displayed as A[1]B and A[2]B respectively in DOE++.&lt;br /&gt;
&lt;br /&gt;
The regression version of the ANOVA model can be obtained using indicator variables, similar to the case of the single factor experiment in [[ANOVA_for_Designed_Experiments#Fitting_ANOVA_Models|Fitting ANOVA Models]]. Since factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; has three levels, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required which need to be coded as shown next:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ &lt;br /&gt;
\text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; has two levels and can be represented using one indicator variable, &amp;lt;math&amp;gt;{{x}_{3}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{Treatment Effect }{{\delta }_{1}}: &amp;amp; {{x}_{3}}=1 \\ &lt;br /&gt;
\text{Treatment Effect }{{\delta }_{2}}: &amp;amp; {{x}_{3}}=-1  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; interaction will be represented by all possible terms resulting from the product of the indicator variables representing factors &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. There are two such terms here - &amp;lt;math&amp;gt;{{x}_{1}}{{x}_{3}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}{{x}_{3}}\,\!&amp;lt;/math&amp;gt;. The regression version of the ANOVA model can finally be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{\tau }_{1}}\cdot {{x}_{1}}+{{\tau }_{2}}\cdot {{x}_{2}}+{{\delta }_{1}}\cdot {{x}_{3}}+{{(\tau \delta )}_{11}}\cdot {{x}_{1}}{{x}_{3}}+{{(\tau \delta )}_{21}}\cdot {{x}_{2}}{{x}_{3}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
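The effect coding described above can be sketched programmatically. The following is a minimal sketch in Python/NumPy; the helper names and run ordering are illustrative (not part of the original text), though the coded rows match the X matrix given below.

```python
import numpy as np

# Effect ("sum to zero") coding, as defined above: level 3 of the 3-level
# factor A is coded (-1, -1) and level 2 of the 2-level factor B is coded -1,
# so each coded column sums to zero over the factor's levels.
A_CODE = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}   # tau_1, tau_2 -> (x1, x2)
B_CODE = {1: 1, 2: -1}                         # delta_1     -> x3

def design_row(i, j):
    """One row of X: [1, x1, x2, x3, x1*x3, x2*x3] for A level i, B level j."""
    x1, x2 = A_CODE[i]
    x3 = B_CODE[j]
    return [1, x1, x2, x3, x1 * x3, x2 * x3]

# Three replicates of the full 3x2 factorial: 18 runs. Within each replicate,
# the A index varies fastest, matching the ordering of the y vector.
X = np.array([design_row(i, j)
              for _ in range(3) for j in (1, 2) for i in (1, 2, 3)], dtype=float)
```

Each coded column (other than the intercept) sums to zero over a replicate, which is what makes the constraints on the effects hold automatically.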
In matrix notation this model can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{111}}  \\&lt;br /&gt;
   {{Y}_{211}}  \\&lt;br /&gt;
   {{Y}_{311}}  \\&lt;br /&gt;
   {{Y}_{121}}  \\&lt;br /&gt;
   {{Y}_{221}}  \\&lt;br /&gt;
   {{Y}_{321}}  \\&lt;br /&gt;
   {{Y}_{112}}  \\&lt;br /&gt;
   {{Y}_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{323}}  \\&lt;br /&gt;
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; -1 &amp;amp; 0 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
   {{\delta }_{1}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{11}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{21}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{111}}  \\&lt;br /&gt;
   {{\epsilon }_{211}}  \\&lt;br /&gt;
   {{\epsilon }_{311}}  \\&lt;br /&gt;
   {{\epsilon }_{121}}  \\&lt;br /&gt;
   {{\epsilon }_{221}}  \\&lt;br /&gt;
   {{\epsilon }_{321}}  \\&lt;br /&gt;
   {{\epsilon }_{112}}  \\&lt;br /&gt;
   {{\epsilon }_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{323}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The response values from the fifth table can be substituted into the vector &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; to get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   {{Y}_{111}}  \\&lt;br /&gt;
   {{Y}_{211}}  \\&lt;br /&gt;
   {{Y}_{311}}  \\&lt;br /&gt;
   {{Y}_{121}}  \\&lt;br /&gt;
   {{Y}_{221}}  \\&lt;br /&gt;
   {{Y}_{321}}  \\&lt;br /&gt;
   {{Y}_{112}}  \\&lt;br /&gt;
   {{Y}_{212}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{Y}_{323}}  \\&lt;br /&gt;
\end{matrix} \right]=\left[ \begin{matrix}&lt;br /&gt;
   17.3  \\&lt;br /&gt;
   18.9  \\&lt;br /&gt;
   17.1  \\&lt;br /&gt;
   18.7  \\&lt;br /&gt;
   19.1  \\&lt;br /&gt;
   18.8  \\&lt;br /&gt;
   17.8  \\&lt;br /&gt;
   18.2  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18.3  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, the sum of squares for the ANOVA model and the extra sum of squares for each of the factors can be calculated. These are used to calculate the mean squares that are used to obtain the test statistics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of Sum of Squares for the Model====&lt;br /&gt;
&lt;br /&gt;
The model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, for the regression version of the ANOVA model can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 9.7311  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; is the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones. Since five effect terms (&amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;) are used in the model, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; is five (&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=5\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The total sum of squares, &amp;lt;math&amp;gt;S{{S}_{T}}\,\!&amp;lt;/math&amp;gt;, can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 10.7178  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are 18 observed response values, the number of degrees of freedom associated with the total sum of squares is 17 (&amp;lt;math&amp;gt;dof(S{{S}_{T}})=17\,\!&amp;lt;/math&amp;gt;). The error sum of squares can now be obtained:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 10.7178-9.7311 \\ &lt;br /&gt;
= &amp;amp; 0.9867  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are three replicates of the full factorial experiment, all of the error sum of squares is pure error. (This can also be seen from the preceding figure, where each treatment combination of the full factorial design is repeated three times.) The number of degrees of freedom associated with the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 17-5 \\ &lt;br /&gt;
= &amp;amp; 12  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
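The sum-of-squares calculations above can be checked numerically. The sketch below uses a synthetic response vector, since the full set of 18 observations is not reproduced in this section; with the actual data the results would be SS_TR = 9.7311, SS_T = 10.7178 and SS_E = 0.9867.

```python
import numpy as np

# Effect-coded design matrix for 3 replicates of the 3x2 factorial (see above).
A_CODE = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
X = np.array([[1, *A_CODE[i], x3, A_CODE[i][0] * x3, A_CODE[i][1] * x3]
              for _ in range(3) for x3 in (1, -1) for i in (1, 2, 3)], dtype=float)

n = 18
rng = np.random.default_rng(7)
y = rng.normal(18.5, 0.8, size=n)   # placeholder responses (full data not listed here)

H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
J = np.ones((n, n))                       # matrix of ones
SS_TR = y @ (H - J / n) @ y               # model sum of squares, dof = 5
SS_T = y @ (np.eye(n) - J / n) @ y        # total sum of squares, dof = 17
SS_E = SS_T - SS_TR                       # error (pure error) sum of squares, dof = 12
```

Note that SS_E computed by subtraction equals y'(I - H)y, the usual residual sum of squares.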
&lt;br /&gt;
&lt;br /&gt;
====Calculation of Extra Sum of Squares for the Factors====&lt;br /&gt;
&lt;br /&gt;
The sequential sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be calculated as: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}})-S{{S}_{TR}}(\mu ) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}={{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}{{(X_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}^{\prime }{{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}})}^{-1}}X_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}^{\prime }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the matrix containing only the first three columns of the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix. Thus:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-0 \\ &lt;br /&gt;
= &amp;amp; 4.5811-0 \\ &lt;br /&gt;
= &amp;amp; 4.5811  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent effects (&amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;) for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, the degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; are two (&amp;lt;math&amp;gt;dof(S{{S}_{A}})=2\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Similarly, the sum of squares for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{B}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}})-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}}) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}}}-(\frac{1}{18})J]y-{{y}^{\prime }}[{{H}_{\mu ,{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 9.4900-4.5811 \\ &lt;br /&gt;
= &amp;amp; 4.9089  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there is one independent effect, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{B}}\,\!&amp;lt;/math&amp;gt; is one (&amp;lt;math&amp;gt;dof(S{{S}_{B}})=1\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{AB}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}},{{(\tau \delta )}_{11}},{{(\tau \delta )}_{21}})-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}) \\ &lt;br /&gt;
= &amp;amp; S{{S}_{TR}}-S{{S}_{TR}}(\mu ,{{\tau }_{1}},{{\tau }_{2}},{{\delta }_{1}}) \\ &lt;br /&gt;
= &amp;amp; 9.7311-9.4900 \\ &lt;br /&gt;
= &amp;amp; 0.2411  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent interaction effects, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{AB}}\,\!&amp;lt;/math&amp;gt; is two (&amp;lt;math&amp;gt;dof(S{{S}_{AB}})=2\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
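The sequential (extra) sums of squares above can be sketched as repeated applications of the same formula to growing column subsets of X. The response vector here is again synthetic, since the full data set is not reproduced in this section.

```python
import numpy as np

A_CODE = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
X = np.array([[1, *A_CODE[i], x3, A_CODE[i][0] * x3, A_CODE[i][1] * x3]
              for _ in range(3) for x3 in (1, -1) for i in (1, 2, 3)], dtype=float)
rng = np.random.default_rng(7)
y = rng.normal(18.5, 0.8, size=18)   # placeholder responses

def model_ss(Xs, y):
    """y'[H - J/n]y for the sub-model whose columns are Xs (column 0 = intercept)."""
    n = len(y)
    H = Xs @ np.linalg.inv(Xs.T @ Xs) @ Xs.T
    return y @ (H - np.ones((n, n)) / n) @ y

# Sequential sums of squares: add tau1, tau2; then delta1; then the two
# interaction columns. SS(mu) alone is zero, as noted above.
SS_A = model_ss(X[:, :3], y)
SS_B = model_ss(X[:, :4], y) - model_ss(X[:, :3], y)
SS_AB = model_ss(X, y) - model_ss(X[:, :4], y)
```

Because the increments telescope, SS_A + SS_B + SS_AB always recovers the full model sum of squares.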
&lt;br /&gt;
====Calculation of the Test Statistics====&lt;br /&gt;
&lt;br /&gt;
Knowing the sum of squares, the test statistic for each of the factors can be calculated. Analyzing the interaction first, the test statistic for interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{AB}}= &amp;amp; \frac{M{{S}_{AB}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{0.2411/2}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 1.47  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic, based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 12 degrees of freedom in the denominator, is: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{AB}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.7307 \\ &lt;br /&gt;
= &amp;amp; 0.2693  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;gt; 0.1, we fail to reject &amp;lt;math&amp;gt;{{H}_{0}}:{{(\tau \delta )}_{ij}}=0\,\!&amp;lt;/math&amp;gt; and conclude that the interaction between speed and fuel additive does not significantly affect the mileage of the sports utility vehicle. DOE++ displays this result in the ANOVA table, as shown in the following figure. In the absence of the interaction, the analysis of main effects becomes important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test statistic for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{A}}= &amp;amp; \frac{M{{S}_{A}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{A}}/dof(S{{S}_{A}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{4.5811/2}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 27.86  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 12 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{A}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.99997 \\ &lt;br /&gt;
= &amp;amp; 0.00003  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}:{{\tau }_{i}}=0\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (or speed) has a significant effect on the mileage.&lt;br /&gt;
&lt;br /&gt;
The test statistic for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{B}}= &amp;amp; \frac{M{{S}_{B}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{4.9089/1}{0.9867/12} \\ &lt;br /&gt;
= &amp;amp; 59.7  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 1 degree of freedom in the numerator and 12 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{B}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.999995 \\ &lt;br /&gt;
= &amp;amp; 0.000005  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;lt; 0.1, &amp;lt;math&amp;gt;{{H}_{0}}:{{\delta }_{j}}=0\,\!&amp;lt;/math&amp;gt; is rejected and it is concluded that factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (or fuel additive type) has a significant effect on the mileage.&lt;br /&gt;
Therefore, it can be concluded that speed and fuel additive type affect the mileage of the vehicle significantly. The results are displayed in the ANOVA table of the following figure. &lt;br /&gt;
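The three test statistics above follow directly from the sums of squares and degrees of freedom computed earlier in this section:

```python
# Sums of squares and degrees of freedom from the calculations above.
SS_A, SS_B, SS_AB, SS_E = 4.5811, 4.9089, 0.2411, 0.9867
dof_A, dof_B, dof_AB, dof_E = 2, 1, 2, 12

MS_E = SS_E / dof_E
f0_A = (SS_A / dof_A) / MS_E     # ~ 27.86
f0_B = (SS_B / dof_B) / MS_E     # ~ 59.7
f0_AB = (SS_AB / dof_AB) / MS_E  # ~ 1.47
```

The corresponding p values are upper-tail F probabilities, e.g. `scipy.stats.f.sf(f0_A, 2, 12)` if SciPy is available.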
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.12.png|thumb|center|645px|Analysis results for the experiment in the fifth table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of Effect Coefficients====&lt;br /&gt;
&lt;br /&gt;
Results for the effect coefficients of the regression version of the ANOVA model are displayed in the Regression Information table in the following figure. The calculation of these results is discussed next. The effect coefficients can be calculated as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\hat{\beta }= &amp;amp; {{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y \\ &lt;br /&gt;
= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   18.2889  \\&lt;br /&gt;
   -0.2056  \\&lt;br /&gt;
   0.6944  \\&lt;br /&gt;
   -0.5222  \\&lt;br /&gt;
   0.0056  \\&lt;br /&gt;
   0.1389  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\hat{\mu }=18.2889\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}=-0.2056\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\hat{\tau }}_{2}}=0.6944\,\!&amp;lt;/math&amp;gt; etc. As mentioned previously, these coefficients are displayed as Intercept, A[1] and A[2] respectively depending on the name of the factor used in the experimental design. The standard error for each of these estimates is obtained using the diagonal elements of the variance-covariance matrix &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
C= &amp;amp; {{{\hat{\sigma }}}^{2}}{{({{X}^{\prime }}X)}^{-1}} \\ &lt;br /&gt;
= &amp;amp; M{{S}_{E}}\cdot {{({{X}^{\prime }}X)}^{-1}} \\ &lt;br /&gt;
= &amp;amp; \left[ \begin{matrix}&lt;br /&gt;
   0.0046 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0.0091 &amp;amp; -0.0046 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; -0.0046 &amp;amp; 0.0091 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0.0046 &amp;amp; 0 &amp;amp; 0  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0.0091 &amp;amp; -0.0046  \\&lt;br /&gt;
   0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; -0.0046 &amp;amp; 0.0091  \\&lt;br /&gt;
\end{matrix} \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, the standard error for &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
se({{{\hat{\tau }}}_{1}})= &amp;amp; \sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; \sqrt{0.0091} \\ &lt;br /&gt;
= &amp;amp; 0.0956  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; statistic for &amp;lt;math&amp;gt;{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{t}_{0}}= &amp;amp; \frac{{{{\hat{\tau }}}_{1}}}{se({{{\hat{\tau }}}_{1}})} \\ &lt;br /&gt;
= &amp;amp; \frac{-0.2056}{0.0956} \\ &lt;br /&gt;
= &amp;amp; -2.1506  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
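The coefficient, standard error and t statistic calculations above can be sketched in a few lines. As before, the response vector is synthetic because the full data set is not reproduced here; with the actual data the estimates would match those shown above.

```python
import numpy as np

A_CODE = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
X = np.array([[1, *A_CODE[i], x3, A_CODE[i][0] * x3, A_CODE[i][1] * x3]
              for _ in range(3) for x3 in (1, -1) for i in (1, 2, 3)], dtype=float)
rng = np.random.default_rng(7)
y = rng.normal(18.5, 0.8, size=18)   # placeholder responses

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y       # [mu, tau1, tau2, delta1, (td)11, (td)21]

resid = y - X @ beta_hat
MS_E = resid @ resid / 12          # dof(SS_E) = 12
C = MS_E * XtX_inv                 # variance-covariance matrix
se = np.sqrt(np.diag(C))           # standard errors of the coefficients
t0 = beta_hat / se                 # t statistics
```

Because the design is balanced, X'X is block diagonal, which is why C above has zeros between the intercept, main-effect and interaction blocks.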
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic, based on the &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; distribution with 12 degrees of freedom, is obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;p\text{ }value=2\times (1-P(T\le |{{t}_{0}}|))\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Confidence intervals on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; can also be calculated. The 90% limits on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
= &amp;amp; {{{\hat{\tau }}}_{1}}\pm {{t}_{\alpha /2,n-(k+1)}}\sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; {{{\hat{\tau }}}_{1}}\pm {{t}_{0.05,12}}\sqrt{{{C}_{22}}} \\ &lt;br /&gt;
= &amp;amp; -0.2056\pm 0.1704  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Thus, the 90% limits on &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;-0.3760\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-0.0352\,\!&amp;lt;/math&amp;gt;. Results for other coefficients are obtained in a similar manner.&lt;br /&gt;
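The interval above can be reproduced from the estimate and standard error computed earlier:

```python
# 90% two-sided confidence limits on tau_1, using the values computed above.
tau1_hat = -0.2056
se_tau1 = 0.0956      # sqrt(C_22)
t_crit = 1.782        # t_{0.05,12}, the 95th percentile of t with 12 dof

lower = tau1_hat - t_crit * se_tau1   # ~ -0.3760
upper = tau1_hat + t_crit * se_tau1   # ~ -0.0352
```

The interval excludes zero, which is consistent with the t test on tau_1 at the 0.1 significance level.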
&lt;br /&gt;
===Least Squares Means===&lt;br /&gt;
The estimated mean response corresponding to the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of any factor is obtained using the adjusted estimated mean which is also called the least squares mean. For example, the mean response corresponding to the first level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\mu +{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;. An estimate of this is &amp;lt;math&amp;gt;\hat{\mu }+{{\hat{\tau }}_{1}}\,\!&amp;lt;/math&amp;gt; or (&amp;lt;math&amp;gt;18.2889+(-0.2056)=18.0833\,\!&amp;lt;/math&amp;gt;). Similarly, the estimated response at the third level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\hat{\mu }+{{\hat{\tau }}_{3}}\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\hat{\mu }+(-{{\hat{\tau }}_{1}}-{{\hat{\tau }}_{2}})\,\!&amp;lt;/math&amp;gt; or (&amp;lt;math&amp;gt;18.2889+(0.2056-0.6944)=17.8001\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
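The least squares means above follow from the fitted coefficients and the sum-to-zero constraint:

```python
# Least squares means for the three levels of factor A, from the
# coefficient estimates given above.
mu_hat, tau1_hat, tau2_hat = 18.2889, -0.2056, 0.6944
tau3_hat = -(tau1_hat + tau2_hat)   # constraint: tau1 + tau2 + tau3 = 0

lsm_A1 = mu_hat + tau1_hat          # 18.0833
lsm_A2 = mu_hat + tau2_hat          # 18.9833
lsm_A3 = mu_hat + tau3_hat          # 17.8001
```

Because the effects sum to zero, the three least squares means average back to the overall mean estimate.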
&lt;br /&gt;
&lt;br /&gt;
===Residual Analysis===&lt;br /&gt;
As in the case of single factor experiments, plots of residuals can also be used to check for model adequacy in factorial experiments. Box-Cox transformations are also available in DOE++ for factorial experiments.&lt;br /&gt;
&lt;br /&gt;
==Factorial Experiments with a Single Replicate==&lt;br /&gt;
&lt;br /&gt;
If a factorial experiment is run only for a single replicate then it is not possible to test hypotheses about the main effects and interactions as the error sum of squares cannot be obtained.  This is because the number of observations in a single replicate equals the number of terms in the ANOVA model. Hence the model fits the data perfectly and no degrees of freedom are available to obtain the error sum of squares. For example, if the two factor experiment to study the effect of speed and fuel additive type on mileage was run only as a single replicate there would be only six response values. The regression version of the ANOVA model has six terms and therefore will fit the six response values perfectly. The error sum of squares, &amp;lt;math&amp;gt;S{{S}_{E}}\,\!&amp;lt;/math&amp;gt;, for this case will be equal to zero. In some single replicate factorial experiments it is possible to assume that the interaction effects are negligible. In this case, the interaction mean square can be used as error mean square, &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt;, to test hypotheses about the main effects. However, such assumptions are not applicable in all cases and should be used carefully.&lt;br /&gt;
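The perfect-fit argument can be verified numerically: with a single replicate the design matrix is square (6 runs, 6 model terms) and invertible, so the residuals, and hence the error sum of squares, are identically zero. The six responses used here are the first-replicate values from the table above.

```python
import numpy as np

A_CODE = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}
# One replicate of the 3x2 factorial: 6 runs, 6 model terms.
X1 = np.array([[1, *A_CODE[i], x3, A_CODE[i][0] * x3, A_CODE[i][1] * x3]
               for x3 in (1, -1) for i in (1, 2, 3)], dtype=float)
y1 = np.array([17.3, 18.9, 17.1, 18.7, 19.1, 18.8])  # first replicate responses

beta = np.linalg.solve(X1, y1)   # X1 is square and invertible
resid = y1 - X1 @ beta           # identically zero: the model fits perfectly
```

Since SS_E = 0 with zero degrees of freedom, no F tests can be formed unless some terms (typically the interaction) are dropped from the model.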
&lt;br /&gt;
&lt;br /&gt;
==Blocking==&lt;br /&gt;
&lt;br /&gt;
Many times a factorial experiment requires so many runs that not all of them can be completed under homogeneous conditions. This may lead to inclusion of the effects of &#039;&#039;nuisance factors&#039;&#039; into the investigation. Nuisance factors are factors that have an effect on the response but are not of primary interest to the investigator. For example, two replicates of a two factor factorial experiment require eight runs. If four runs require the duration of one day to be completed, then the total experiment will require two days to be completed. The difference in the conditions on the two days may introduce effects on the response that are not the result of the two factors being investigated. Therefore, the day is a nuisance factor for this experiment.&lt;br /&gt;
Nuisance factors can be accounted for using &#039;&#039;blocking&#039;&#039;. In blocking, experimental runs are separated based on levels of the nuisance factor. For the case of the two factor factorial experiment (where the day is a nuisance factor), separation can be made into two groups or &#039;&#039;blocks&#039;&#039;: runs that are carried out on the first day belong to block 1, and runs that are carried out on the second day belong to block 2. Thus, within each block conditions are the same with respect to the nuisance factor. As a result, each block investigates the effects of the factors of interest, while the difference in the blocks measures the effect of the nuisance factor. &lt;br /&gt;
For the example of the two factor factorial experiment, a possible assignment of runs to the blocks could be as follows: one replicate of the experiment is assigned to block 1 and the second replicate is assigned to block 2 (now each block contains all possible treatment combinations). Within each block, runs are subjected to randomization (i.e., randomization is now restricted to the runs within a block). Such a design, where each block contains one complete replicate and the treatments within a block are subjected to randomization, is called a &#039;&#039;randomized complete block design&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
In summary, blocking should always be used to account for the effects of nuisance factors if it is not possible to hold the nuisance factor at a constant level through all of the experimental runs. Randomization should be used within each block to counter the effects of any unknown variability that may still be present.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
&lt;br /&gt;
Consider the experiment of the fifth table where the mileage of a sports utility vehicle was investigated for the effects of speed and fuel additive type. Now assume that the three replicates for this experiment were carried out on three different vehicles. To ensure that the variation from one vehicle to another does not have an effect on the analysis, each vehicle is considered as one block. See the experiment design in the following figure.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.13.png|thumb|center|643px|Randomized complete block design for the experiment in the fifth table using three blocks.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purpose of the analysis, the block is considered as a main effect except that it is assumed that interactions between the block and the other main effects do not exist. Therefore, there is one block main effect (having three levels - block 1, block 2 and block 3), two main effects (speed -having three levels; and fuel additive type - having two levels) and one interaction effect (speed-fuel additive interaction) for this experiment. Let &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; represent the block effects. The hypothesis test on the block main effect checks if there is a significant variation from one vehicle to the other. The statements for the hypothesis test are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{H}_{0}}: &amp;amp; {{\zeta }_{1}}={{\zeta }_{2}}={{\zeta }_{3}}=0\text{   (no main effect of block)} \\ &lt;br /&gt;
 &amp;amp; {{H}_{1}}: &amp;amp; {{\zeta }_{i}}\ne 0\text{    for at least one }i  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The test statistic for this test is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{0}}=\frac{M{{S}_{Block}}}{M{{S}_{E}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;M{{S}_{Block}}\,\!&amp;lt;/math&amp;gt; represents the mean square for the block main effect and &amp;lt;math&amp;gt;M{{S}_{E}}\,\!&amp;lt;/math&amp;gt; is the error mean square. The hypothesis statements and test statistics to test the significance of factors &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (speed), &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (fuel additive) and the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; (speed-fuel additive interaction) can be obtained as explained in the [[ANOVA_for_Designed_Experiments#Example_2| example]]. The ANOVA model for this example can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{Y}_{ijk}}=\mu +{{\zeta }_{i}}+{{\tau }_{j}}+{{\delta }_{k}}+{{(\tau \delta )}_{jk}}+{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; represents the overall mean effect&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of the block (&amp;lt;math&amp;gt;i=1,2,3\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;j=1,2,3\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; is the effect of the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;k=1,2\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
*&amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; represents the interaction effect between &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
*and &amp;lt;math&amp;gt;{{\epsilon }_{ijk}}\,\!&amp;lt;/math&amp;gt; represents the random error terms (which are assumed to be normally distributed with a mean of zero and variance of &amp;lt;math&amp;gt;{{\sigma }^{2}}\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to calculate the test statistics, it is convenient to express the ANOVA model of the equation given above in the form &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;. This can be done as explained next.&lt;br /&gt;
&lt;br /&gt;
====Expression of the ANOVA Model as &amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
Since the effects &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; are defined as deviations from the overall mean, the following constraints exist.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{i=1}{\overset{3}{\mathop \sum }}\,{{\zeta }_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\zeta }_{1}}+{{\zeta }_{2}}+{{\zeta }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\zeta }_{i}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\zeta }_{3}}=-({{\zeta }_{1}}+{{\zeta }_{2}})\,\!&amp;lt;/math&amp;gt;. (The null hypothesis to test the significance of the blocks can be rewritten using only the independent effects as &amp;lt;math&amp;gt;{{H}_{0}}:{{\zeta }_{1}}={{\zeta }_{2}}=0\,\!&amp;lt;/math&amp;gt;.) In DOE++, the independent block effects, &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as Block[1] and Block[2], respectively.&lt;br /&gt;
&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{j=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{j}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only two of the &amp;lt;math&amp;gt;{{\tau }_{j}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt; are independent, &amp;lt;math&amp;gt;{{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!&amp;lt;/math&amp;gt;. The independent effects, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1] and A[2], respectively.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{k=1}{\overset{2}{\mathop \sum }}\,{{\delta }_{k}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or    }{{\delta }_{1}}+{{\delta }_{2}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, only one of the &amp;lt;math&amp;gt;{{\delta }_{k}}\,\!&amp;lt;/math&amp;gt; effects is independent. Assuming that &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt; is independent, &amp;lt;math&amp;gt;{{\delta }_{2}}=-{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;. The independent effect, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, is displayed as B:B.&lt;br /&gt;
Constraints on &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{j=1}{\overset{3}{\mathop \sum }}\,{{(\tau \delta )}_{jk}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{and   }\underset{k=1}{\overset{2}{\mathop \sum }}\,{{(\tau \delta )}_{jk}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{or   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{31}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{12}}+{{(\tau \delta )}_{22}}+{{(\tau \delta )}_{32}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \text{and   }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{12}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{21}}+{{(\tau \delta )}_{22}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; {{(\tau \delta )}_{31}}+{{(\tau \delta )}_{32}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last five equations given above represent four constraints as only four of the five equations are independent. Therefore, only two out of the six &amp;lt;math&amp;gt;{{(\tau \delta )}_{jk}}\,\!&amp;lt;/math&amp;gt; effects are independent. Assuming that &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt; are independent, we can express the other four effects in terms of these effects. The independent effects, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;, are displayed as A[1]B and A[2]B, respectively.&lt;br /&gt;
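This dependence can be verified with a quick numeric check: assign arbitrary values to the two independent effects, derive the remaining four from the constraints, and confirm that every row and column sum vanishes (the numeric values below are illustrative only):

```python
# Independent interaction effects (arbitrary illustrative values)
td11, td21 = 0.7, -1.3

# The four dependent effects, expressed in terms of the independent ones
td31 = -(td11 + td21)   # column constraint for k = 1
td12 = -td11            # row constraint for j = 1
td22 = -td21            # row constraint for j = 2
td32 = td11 + td21      # row constraint for j = 3

# Column sums over j (for each level k of B) must vanish
assert abs(td11 + td21 + td31) < 1e-12
assert abs(td12 + td22 + td32) < 1e-12
# Row sums over k (for each level j of A) must vanish
assert abs(td11 + td12) < 1e-12
assert abs(td21 + td22) < 1e-12
assert abs(td31 + td32) < 1e-12
print("all interaction constraints satisfied")
```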
&lt;br /&gt;
The regression version of the ANOVA model can be obtained using indicator variables. Since the block has three levels, two indicator variables, &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, are required, which need to be coded as shown next: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Block 1}: &amp;amp; {{x}_{1}}=1,\text{   }{{x}_{2}}=0\text{ } \\ &lt;br /&gt;
 &amp;amp; \text{Block 2}: &amp;amp; {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{         } \\ &lt;br /&gt;
 &amp;amp; \text{Block 3}: &amp;amp; {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{   }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; has three levels and two indicator variables, &amp;lt;math&amp;gt;{{x}_{3}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{4}}\,\!&amp;lt;/math&amp;gt;, are required:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Treatment Effect }{{\tau }_{1}}: &amp;amp; {{x}_{3}}=1,\text{   }{{x}_{4}}=0 \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\tau }_{2}}: &amp;amp; {{x}_{3}}=0,\text{   }{{x}_{4}}=1\text{           } \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\tau }_{3}}: &amp;amp; {{x}_{3}}=-1,\text{   }{{x}_{4}}=-1\text{     }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; has two levels and can be represented using one indicator variable, &amp;lt;math&amp;gt;{{x}_{5}}\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \text{Treatment Effect }{{\delta }_{1}}: &amp;amp; {{x}_{5}}=1 \\ &lt;br /&gt;
 &amp;amp; \text{Treatment Effect }{{\delta }_{2}}: &amp;amp; {{x}_{5}}=-1  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; interaction will be represented by &amp;lt;math&amp;gt;{{x}_{3}}{{x}_{5}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{4}}{{x}_{5}}\,\!&amp;lt;/math&amp;gt;. The regression version of the ANOVA model can finally be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y=\mu +{{\zeta }_{1}}\cdot {{x}_{1}}+{{\zeta }_{2}}\cdot {{x}_{2}}+{{\tau }_{1}}\cdot {{x}_{3}}+{{\tau }_{2}}\cdot {{x}_{4}}+{{\delta }_{1}}\cdot {{x}_{5}}+{{(\tau \delta )}_{11}}\cdot {{x}_{3}}{{x}_{5}}+{{(\tau \delta )}_{21}}\cdot {{x}_{4}}{{x}_{5}}+\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In matrix notation this model can be expressed as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=X\beta +\epsilon \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:or:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   17.3  \\&lt;br /&gt;
   18.9  \\&lt;br /&gt;
   17.1  \\&lt;br /&gt;
   18.7  \\&lt;br /&gt;
   19.1  \\&lt;br /&gt;
   18.8  \\&lt;br /&gt;
   17.8  \\&lt;br /&gt;
   18.2  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   18.3  \\&lt;br /&gt;
\end{matrix} \right]=\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; -1 &amp;amp; 0 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 0 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0  \\&lt;br /&gt;
   1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; . &amp;amp; .  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
\end{matrix} \right]\left[ \begin{matrix}&lt;br /&gt;
   \mu   \\&lt;br /&gt;
   {{\zeta }_{1}}  \\&lt;br /&gt;
   {{\zeta }_{2}}  \\&lt;br /&gt;
   {{\tau }_{1}}  \\&lt;br /&gt;
   {{\tau }_{2}}  \\&lt;br /&gt;
   {{\delta }_{1}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{11}}  \\&lt;br /&gt;
   {{(\tau \delta )}_{21}}  \\&lt;br /&gt;
\end{matrix} \right]+\left[ \begin{matrix}&lt;br /&gt;
   {{\epsilon }_{111}}  \\&lt;br /&gt;
   {{\epsilon }_{121}}  \\&lt;br /&gt;
   {{\epsilon }_{131}}  \\&lt;br /&gt;
   {{\epsilon }_{112}}  \\&lt;br /&gt;
   {{\epsilon }_{122}}  \\&lt;br /&gt;
   {{\epsilon }_{132}}  \\&lt;br /&gt;
   {{\epsilon }_{211}}  \\&lt;br /&gt;
   {{\epsilon }_{221}}  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   .  \\&lt;br /&gt;
   {{\epsilon }_{332}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Knowing &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, the sum of squares for the ANOVA model and the extra sum of squares for each of the factors can be calculated. These are used to calculate the mean squares that are used to obtain the test statistics.&lt;br /&gt;
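The X matrix above can be generated programmatically with effect (sum-to-zero) coding. A minimal sketch (the row ordering follows the subscripts of the error vector above, and `effect_code` is a helper name introduced here for illustration):

```python
def effect_code(level, n_levels):
    """Effect (sum-to-zero) coding: the first n-1 levels map to unit
    indicator columns; the last level is coded as -1 in every column."""
    if level < n_levels:
        return [1 if pos == level else 0 for pos in range(1, n_levels)]
    return [-1] * (n_levels - 1)

# Rows ordered as in the error vector: within each block, factor A cycles
# fastest within each level of factor B.
X = []
for block in (1, 2, 3):          # block levels (zeta)
    for k in (1, 2):             # factor B levels (delta)
        for j in (1, 2, 3):      # factor A levels (tau)
            xb = effect_code(block, 3)              # x1, x2
            xa = effect_code(j, 3)                  # x3, x4
            xd = effect_code(k, 2)                  # x5
            inter = [xa[0] * xd[0], xa[1] * xd[0]]  # x3*x5, x4*x5
            X.append([1] + xb + xa + xd + inter)

# 18 runs, 8 model terms: mu, zeta1, zeta2, tau1, tau2, delta1, (td)11, (td)21
assert len(X) == 18 and all(len(row) == 8 for row in X)
assert X[0] == [1, 1, 0, 1, 0, 1, 1, 0]        # first row shown above
assert X[-1] == [1, -1, -1, -1, -1, -1, 1, 1]  # last row shown above
```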
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Sum of Squares for the Model====&lt;br /&gt;
&lt;br /&gt;
The model sum of squares, &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt;, for the ANOVA model of this example can be obtained as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{TR}}= &amp;amp; {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; {{y}^{\prime }}[H-(\frac{1}{18})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 9.9256  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since seven effect terms (&amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\tau }_{2}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\delta }_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{(\tau \delta )}_{11}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{(\tau \delta )}_{21}}\,\!&amp;lt;/math&amp;gt;) are used in the model, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{TR}}\,\!&amp;lt;/math&amp;gt; is seven (&amp;lt;math&amp;gt;dof(S{{S}_{TR}})=7\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The total sum of squares can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{T}}= &amp;amp; {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; {{y}^{\prime }}[I-(\frac{1}{18})J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 10.7178  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are 18 observed response values, the number of degrees of freedom associated with the total sum of squares is 17 (&amp;lt;math&amp;gt;dof(S{{S}_{T}})=17\,\!&amp;lt;/math&amp;gt;). The error sum of squares can now be obtained:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{E}}= &amp;amp; S{{S}_{T}}-S{{S}_{TR}} \\ &lt;br /&gt;
= &amp;amp; 10.7178-9.9256 \\ &lt;br /&gt;
= &amp;amp; 0.7922  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The number of degrees of freedom associated with the error sum of squares is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
dof(S{{S}_{E}})= &amp;amp; dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ &lt;br /&gt;
= &amp;amp; 17-7 \\ &lt;br /&gt;
= &amp;amp; 10  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are no true replicates of the treatments (as can be seen from the design in the previous figure, where each treatment is run just once), all of the error sum of squares is due to lack of fit. The lack of fit arises because the model is not a full model: it assumes that there are no interactions between the blocks and the other effects.&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Extra Sum of Squares for the Factors====&lt;br /&gt;
&lt;br /&gt;
The sequential sum of squares for the blocks can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{Block}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}})-S{{S}_{TR}}(\mu ) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y-0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones, &amp;lt;math&amp;gt;{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the hat matrix, which is calculated using &amp;lt;math&amp;gt;{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}={{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}{{(X_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}^{\prime }{{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}})}^{-1}}X_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}^{\prime }\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;{{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}\,\!&amp;lt;/math&amp;gt; is the matrix containing only the first three columns of the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrix. Thus&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{Block}}= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y-0 \\ &lt;br /&gt;
= &amp;amp; 0.1944-0 \\ &lt;br /&gt;
= &amp;amp; 0.1944  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since there are two independent block effects, &amp;lt;math&amp;gt;{{\zeta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\zeta }_{2}}\,\!&amp;lt;/math&amp;gt;, the number of degrees of freedom associated with &amp;lt;math&amp;gt;S{{S}_{Blocks}}\,\!&amp;lt;/math&amp;gt; is two (&amp;lt;math&amp;gt;dof(S{{S}_{Blocks}})=2\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Similarly, the sequential sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
S{{S}_{A}}= &amp;amp; S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}},{{\tau }_{1}},{{\tau }_{2}})-S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}}) \\ &lt;br /&gt;
= &amp;amp; {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}},{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-{{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y \\ &lt;br /&gt;
= &amp;amp; 4.7756-0.1944 \\ &lt;br /&gt;
= &amp;amp; 4.5812  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sequential sum of squares for the other effects are obtained as &amp;lt;math&amp;gt;S{{S}_{B}}=4.9089\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S{{S}_{AB}}=0.2411\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Calculation of the Test Statistics====&lt;br /&gt;
&lt;br /&gt;
Knowing the sum of squares, the test statistics for each of the factors can be calculated. For example, the test statistic for the main effect of the blocks is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{({{f}_{0}})}_{Block}}= &amp;amp; \frac{M{{S}_{Block}}}{M{{S}_{E}}} \\ &lt;br /&gt;
= &amp;amp; \frac{S{{S}_{Block}}/dof(S{{S}_{Blocks}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ &lt;br /&gt;
= &amp;amp; \frac{0.1944/2}{0.7922/10} \\ &lt;br /&gt;
= &amp;amp; 1.227  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value corresponding to this statistic based on the &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; distribution with 2 degrees of freedom in the numerator and 10 degrees of freedom in the denominator is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
p\text{ }value= &amp;amp; 1-P(F\le {{({{f}_{0}})}_{Block}}) \\ &lt;br /&gt;
= &amp;amp; 1-0.6663 \\ &lt;br /&gt;
= &amp;amp; 0.3337  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
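The two calculations above can be reproduced numerically. A sketch; the p-value here uses the closed-form survival function of the F distribution, which holds when the numerator degrees of freedom equal 2:

```python
ss_block, dof_block = 0.1944, 2   # sequential SS and df for the blocks
ss_e, dof_e = 0.7922, 10          # error SS and df

f0 = (ss_block / dof_block) / (ss_e / dof_e)

# For F(2, d2), P(F > f0) = (1 + 2*f0/d2) ** (-d2/2)
p_value = (1 + 2 * f0 / dof_e) ** (-dof_e / 2)

assert abs(f0 - 1.227) < 0.001
assert abs(p_value - 0.3337) < 0.001
print(f"F0 = {f0:.3f}, p = {p_value:.4f}")
```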
&lt;br /&gt;
&lt;br /&gt;
Assuming that the desired significance level is 0.1, since &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; value &amp;gt; 0.1, we fail to reject &amp;lt;math&amp;gt;{{H}_{0}}:{{\zeta }_{i}}=0\,\!&amp;lt;/math&amp;gt; and conclude that there is no significant variation in the mileage from one vehicle to the other. Statistics to test the significance of other factors can be calculated in a similar manner. The complete analysis results obtained from DOE++ for this experiment are presented in the following figure.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.14.png|thumb|center|644px|Analysis results for the experiment in the [[ANOVA_for_Designed_Experiments#Example_3| example]].]]&lt;br /&gt;
&lt;br /&gt;
==Use of Regression to Calculate Sum of Squares==&lt;br /&gt;
&lt;br /&gt;
This section explains the reason behind the use of regression in DOE++ in all calculations related to the sum of squares. A number of textbooks present the method of direct summation to calculate the sum of squares. However, this method applies only to balanced designs and may give incorrect results for unbalanced designs. For example, the sum of squares for factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; in a balanced factorial experiment with two factors, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, is given as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{A}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,{{n}_{b}}n{{({{{\bar{y}}}_{i..}}-{{{\bar{y}}}_{...}})}^{2}} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{b}}n}-\frac{y_{...}^{2}}{{{n}_{a}}{{n}_{b}}n}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{n}_{a}}\,\!&amp;lt;/math&amp;gt; represents the levels of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{n}_{b}}\,\!&amp;lt;/math&amp;gt; represents the levels of factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; represents the number of samples for each combination of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;. The term &amp;lt;math&amp;gt;{{\bar{y}}_{i..}}\,\!&amp;lt;/math&amp;gt; is the mean value for the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{y}_{i..}}\,\!&amp;lt;/math&amp;gt; is the sum of all observations at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{y}_{...}}\,\!&amp;lt;/math&amp;gt; is the sum of all observations.&lt;br /&gt;
&lt;br /&gt;
The analogous term to calculate &amp;lt;math&amp;gt;S{{S}_{A}}\,\!&amp;lt;/math&amp;gt; in the case of an unbalanced design is given as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{A}}=\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\frac{y_{...}^{2}}{{{n}_{..}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{n}_{i.}}\,\!&amp;lt;/math&amp;gt; is the number of observations at the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th level of factor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{n}_{..}}\,\!&amp;lt;/math&amp;gt; is the total number of observations. Similarly, to calculate the sum of squares for factor &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;, the formulas are given as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{B}}= &amp;amp; \underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}-\frac{y_{...}^{2}}{{{n}_{..}}} \\ &lt;br /&gt;
 &amp;amp; S{{S}_{AB}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{ij.}^{2}}{{{n}_{ij}}}-\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}+\frac{y_{...}^{2}}{{{n}_{..}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Applying these relations to the unbalanced data of the last table, the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{AB}}= &amp;amp; \underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{ij.}^{2}}{{{n}_{ij}}}-\underset{i=1}{\overset{{{n}_{a}}}{\mathop{\sum }}}\,\frac{y_{i..}^{2}}{{{n}_{i.}}}-\underset{j=1}{\overset{{{n}_{b}}}{\mathop{\sum }}}\,\frac{y_{.j.}^{2}}{{{n}_{.j}}}+\frac{y_{...}^{2}}{{{n}_{..}}} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; \left( {{6}^{2}}+{{4}^{2}}+\frac{{{(42+6)}^{2}}}{2}+{{12}^{2}} \right)-\left( \frac{{{10}^{2}}}{2}+\frac{{{60}^{2}}}{3} \right) \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; -\left( \frac{{{54}^{2}}}{3}+\frac{{{16}^{2}}}{2} \right)+\frac{{{70}^{2}}}{5} \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; -22  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
which is obviously incorrect, since a sum of squares cannot be negative. For a detailed discussion of this, refer to [[DOE References|Searle (1997, 1971)]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doet6.6.png|thumb|center|400px|Example of an unbalanced design.]]&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
The correct sum of squares can be calculated as shown next. The &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; matrices for the design of the last table can be written as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;y=\left[ \begin{matrix}&lt;br /&gt;
   6  \\&lt;br /&gt;
   4  \\&lt;br /&gt;
   6  \\&lt;br /&gt;
   12  \\&lt;br /&gt;
   42  \\&lt;br /&gt;
\end{matrix} \right]\text{   and   }X=\left[ \begin{matrix}&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; 1 &amp;amp; -1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; -1 &amp;amp; 1  \\&lt;br /&gt;
   1 &amp;amp; -1 &amp;amp; 1 &amp;amp; -1  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; can be calculated as:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;S{{S}_{AB}}={{y}^{\prime }}[H-(1/5)J]y-{{y}^{\prime }}[{{H}_{\tilde{\ }AB}}-(1/5)J]y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; is the hat matrix and &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the matrix of ones. The matrix &amp;lt;math&amp;gt;{{H}_{\tilde{\ }AB}}\,\!&amp;lt;/math&amp;gt; can be calculated using &amp;lt;math&amp;gt;{{H}_{\tilde{\ }AB}}={{X}_{\tilde{\ }AB}}{{(X_{\tilde{\ }AB}^{\prime }{{X}_{\tilde{\ }AB}})}^{-1}}X_{\tilde{\ }AB}^{\prime }\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;{{X}_{\tilde{\ }AB}}\,\!&amp;lt;/math&amp;gt; is the design matrix, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, excluding the last column that represents the interaction effect &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt;. Thus, the sum of squares for the interaction &amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; S{{S}_{AB}}= &amp;amp; {{y}^{\prime }}[H-(1/5)J]y-{{y}^{\prime }}[{{H}_{\tilde{\ }AB}}-(1/5)J]y \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 368-339.4286 \\ &lt;br /&gt;
 &amp;amp; = &amp;amp; 28.5714  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is the value that is calculated by DOE++ (see the first figure below, for the experiment design and the second figure below for the analysis).&lt;br /&gt;
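Using only the y vector and X matrix given above, both numbers can be reproduced with a small least-squares sketch (the pure-Python Gaussian-elimination solver below is a generic helper written for this illustration; it is not part of DOE++):

```python
def solve(a, b):
    """Solve the square system a x = b by Gaussian elimination
    with partial pivoting (sufficient for these small normal equations)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def model_ss(X, y):
    """Model sum of squares about the mean: b'X'y - (sum y)^2 / n."""
    n, p = len(X), len(X[0])
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)]
           for i in range(p)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    b = solve(xtx, xty)
    return sum(bi * v for bi, v in zip(b, xty)) - sum(y) ** 2 / n

y = [6, 4, 6, 12, 42]
X = [[1,  1,  1,  1],   # columns: intercept, A, B, AB
     [1,  1, -1, -1],
     [1, -1,  1, -1],
     [1, -1, -1,  1],
     [1, -1,  1, -1]]

# Correct (regression) extra sum of squares: full model minus model without AB
ss_ab_regression = model_ss(X, y) - model_ss([row[:3] for row in X], y)

# Direct summation (valid only for balanced designs) yields a negative value
ss_ab_direct = (6**2 + 4**2 + (6 + 42)**2 / 2 + 12**2) \
    - (10**2 / 2 + 60**2 / 3) - (54**2 / 3 + 16**2 / 2) + 70**2 / 5

assert abs(ss_ab_direct - (-22)) < 1e-9
assert abs(ss_ab_regression - 28.5714) < 1e-3
```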
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.15.png|thumb|center|471px|Unbalanced experimental design for the data in the last table.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:doe6.16.png|thumb|center|471px|Analysis for the unbalanced data in the last table.]]&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=The_Exponential_Distribution&amp;diff=65103</id>
		<title>The Exponential Distribution</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=The_Exponential_Distribution&amp;diff=65103"/>
		<updated>2017-07-24T20:04:52Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* The 2-Parameter Exponential Distribution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|7|The Exponential Distribution}}&lt;br /&gt;
The exponential distribution is a commonly used distribution in reliability engineering. Mathematically, it is a fairly simple distribution, and this simplicity often leads to its use in situations where it is not appropriate. It is, in fact, a special case of the Weibull distribution where &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt;. The exponential distribution is used to model the behavior of units that have a constant failure rate (or units that do not degrade with time or wear out).  &lt;br /&gt;
&lt;br /&gt;
==Exponential Probability Density Function==&lt;br /&gt;
===The 2-Parameter Exponential Distribution===&lt;br /&gt;
The 2-parameter exponential &#039;&#039;pdf&#039;&#039; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\lambda {{e}^{-\lambda (t-\gamma )}},f(t)\ge 0,\lambda &amp;gt;0,t\ge \gamma \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt; is the location parameter.&lt;br /&gt;
Some of the characteristics of the 2-parameter exponential distribution are discussed in Kececioglu [[Appendix:_Life_Data_Analysis_References|[19]]]:&lt;br /&gt;
*The location parameter, &amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt;, if positive, shifts the beginning of the distribution by a distance of &amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt; to the right of the origin, signifying that the chance failures start to occur only after &amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt; hours of operation, and cannot occur before.&lt;br /&gt;
*The scale parameter is &amp;lt;math&amp;gt;\tfrac{1}{\lambda }=\bar{t}-\gamma =m-\gamma \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
*The exponential &#039;&#039;pdf&#039;&#039; has no shape parameter, as it has only one shape.&lt;br /&gt;
*The distribution starts at &amp;lt;math&amp;gt;t=\gamma \,\!&amp;lt;/math&amp;gt; at the level of &amp;lt;math&amp;gt;f(t=\gamma )=\lambda \,\!&amp;lt;/math&amp;gt; and decreases thereafter exponentially and monotonically as &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; increases beyond &amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt; and is convex.&lt;br /&gt;
*As &amp;lt;math&amp;gt;t\to \infty \,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;f(t)\to 0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
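These properties are straightforward to verify numerically. A minimal sketch, with λ and γ chosen arbitrarily for illustration (they are not values from the text):

```python
import math

lam, gamma = 0.002, 100.0   # illustrative failure rate and location parameter

def pdf(t):
    """2-parameter exponential pdf, defined for t >= gamma."""
    return lam * math.exp(-lam * (t - gamma))

# The distribution starts at t = gamma at the level f(gamma) = lambda ...
assert abs(pdf(gamma) - lam) < 1e-15
# ... and decreases monotonically thereafter
assert pdf(gamma) > pdf(gamma + 500) > pdf(gamma + 5000)
# As t -> infinity, f(t) -> 0
assert pdf(gamma + 1e7) < 1e-12
```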
&lt;br /&gt;
===The 1-Parameter Exponential Distribution===&lt;br /&gt;
The 1-parameter exponential &#039;&#039;pdf&#039;&#039; is obtained by setting &amp;lt;math&amp;gt;\gamma =0\,\!&amp;lt;/math&amp;gt;, and is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; \begin{align}f(t)= &amp;amp; \lambda {{e}^{-\lambda t}}=\frac{1}{m}{{e}^{-\tfrac{1}{m}t}}, &lt;br /&gt;
  &amp;amp; t\ge 0, \lambda &amp;gt;0,m&amp;gt;0&lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; = constant rate, in failures per unit of measurement (e.g., failures per hour, per cycle, etc.)&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda =\frac{1}{m}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
::&amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; = mean time between failures, or to failure&lt;br /&gt;
::&amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; = operating time, life, or age, in hours, cycles, miles, actuations, etc.&lt;br /&gt;
&lt;br /&gt;
This distribution requires the knowledge of only one parameter, &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, for its application. Some of the characteristics of the 1-parameter exponential distribution are discussed in Kececioglu [[Appendix:_Life_Data_Analysis_References| [19]]]:&lt;br /&gt;
*The location parameter, &amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt;, is zero.&lt;br /&gt;
*The scale parameter is &amp;lt;math&amp;gt;\tfrac{1}{\lambda }=m\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
*As &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is decreased in value, the distribution is stretched out to the right, and as &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is increased, the distribution is pushed toward the origin.&lt;br /&gt;
*This distribution has no shape parameter, as it has only one shape (the exponential); its only parameter is the failure rate, &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
*The distribution starts at &amp;lt;math&amp;gt;t=0\,\!&amp;lt;/math&amp;gt; at the level of &amp;lt;math&amp;gt;f(t=0)=\lambda \,\!&amp;lt;/math&amp;gt; and decreases thereafter exponentially and monotonically as &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; increases, and is convex.&lt;br /&gt;
*As &amp;lt;math&amp;gt;t\to \infty \,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;f(t)\to 0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
*The &#039;&#039;pdf&#039;&#039; can be thought of as a special case of the Weibull &#039;&#039;pdf&#039;&#039; with &amp;lt;math&amp;gt;\gamma =0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
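These characteristics are easy to check numerically. A minimal sketch, using an arbitrary illustrative rate of &amp;lt;math&amp;gt;\lambda =0.001\,\!&amp;lt;/math&amp;gt; failures/hour (not a value from this chapter):

```python
import math

def exp_pdf(t, lam):
    """1-parameter exponential pdf: f(t) = lam * exp(-lam * t), for t >= 0."""
    return lam * math.exp(-lam * t)

lam = 0.001  # hypothetical failure rate, failures/hour
values = [exp_pdf(t, lam) for t in (0, 500, 1000, 2000)]

# The pdf starts at f(0) = lambda and decreases monotonically toward zero.
```

The first value equals &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; and each subsequent value is smaller, consistent with the monotonic decay described above.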
&lt;br /&gt;
==Exponential Distribution Functions==&lt;br /&gt;
{{:Exponential Distribution Functions}}&lt;br /&gt;
&lt;br /&gt;
==Characteristics of the Exponential Distribution==&lt;br /&gt;
{{:Exponential Distribution Characteristics}}&lt;br /&gt;
&lt;br /&gt;
==Estimation of the Exponential Parameters==&lt;br /&gt;
===Probability Plotting===&lt;br /&gt;
Estimation of the parameters for the exponential distribution via probability plotting is very similar to the process used when dealing with the Weibull distribution. Recall, however, that the appearance of the probability plotting paper and the methods by which the parameters are estimated vary from distribution to distribution, so there will be some noticeable differences. In fact, due to the nature of the exponential &#039;&#039;cdf&#039;&#039;, the exponential probability plot is the only one with a negative slope. This is because the y-axis of the exponential probability plotting paper represents the reliability, whereas the y-axis for most of the other life distributions represents the unreliability.&lt;br /&gt;
&lt;br /&gt;
This is illustrated in the process of linearizing the &#039;&#039;cdf&#039;&#039;, which is necessary to construct the exponential probability plotting paper. For the two-parameter exponential distribution, the cumulative distribution function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
F(t)=1-{{e}^{-\lambda (t-\gamma )}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Taking the natural logarithm of both sides of the above equation yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln \left[ 1-F(t) \right]=-\lambda (t-\gamma )\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\ln [1-F(t)]=\lambda \gamma -\lambda t&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y=\ln [1-F(t)]&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
a=\lambda \gamma &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
b=-\lambda &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which results in the linear equation of:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y=a+bt&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that with the exponential probability plotting paper, the y-axis scale is logarithmic and the x-axis scale is linear. This means that the zero value is present only on the x-axis. For &amp;lt;math&amp;gt;t=0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;R=1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F(t)=0\,\!&amp;lt;/math&amp;gt;. So if we were to use &amp;lt;math&amp;gt;F(t)\,\!&amp;lt;/math&amp;gt; for the y-axis, we would have to plot the point &amp;lt;math&amp;gt;(0,0)\,\!&amp;lt;/math&amp;gt;. However, since the y-axis is logarithmic, there is no place to plot this on the exponential paper. Also, the failure rate, &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, is the negative of the slope of the line, but there is an easier way to determine the value of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; from the probability plot, as will be illustrated in the following example.&lt;br /&gt;
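The linearization above can be verified numerically: plotting &amp;lt;math&amp;gt;y=\ln [1-F(t)]\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; yields a straight line with slope &amp;lt;math&amp;gt;-\lambda \,\!&amp;lt;/math&amp;gt; and intercept &amp;lt;math&amp;gt;\lambda \gamma \,\!&amp;lt;/math&amp;gt;. A sketch with arbitrary illustrative parameters (&amp;lt;math&amp;gt;\lambda =0.002\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\gamma =100\,\!&amp;lt;/math&amp;gt;, not taken from any example in this chapter):

```python
import math

# Hypothetical 2-parameter exponential, chosen purely for illustration
lam, gamma = 0.002, 100.0

def F(t):
    """cdf: F(t) = 1 - exp(-lam * (t - gamma))"""
    return 1 - math.exp(-lam * (t - gamma))

ts = [200.0, 400.0, 600.0, 800.0]
ys = [math.log(1 - F(t)) for t in ts]   # y = ln[1 - F(t)] = lam*gamma - lam*t

pairs = list(zip(ts, ys))
slopes = [(y2 - y1) / (t2 - t1) for (t1, y1), (t2, y2) in zip(pairs, pairs[1:])]
intercept = ys[0] + lam * ts[0]          # should equal a = lam*gamma
```

Every successive slope equals &amp;lt;math&amp;gt;-\lambda \,\!&amp;lt;/math&amp;gt;, and the recovered intercept equals &amp;lt;math&amp;gt;\lambda \gamma \,\!&amp;lt;/math&amp;gt;, matching the linear equation &amp;lt;math&amp;gt;y=a+bt\,\!&amp;lt;/math&amp;gt; derived above.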
====Plotting Example====&lt;br /&gt;
{{:1P Exponential Example}}&lt;br /&gt;
&lt;br /&gt;
===Rank Regression on Y===&lt;br /&gt;
Performing a rank regression on Y requires that a straight line be fitted to the set of available data points such that the sum of the squares of the vertical deviations from the points to the line is minimized.&lt;br /&gt;
The least squares parameter estimation method (regression analysis) was discussed in [[Parameter Estimation]], and the following equations for rank regression on Y (RRY) were derived:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\bar{y}-\hat{b}\bar{x}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In our case, the equations for &amp;lt;math&amp;gt;{{y}_{i}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{i}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{y}_{i}}=\ln [1-F({{t}_{i}})] &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{x}_{i}}={{t}_{i}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The values of &amp;lt;math&amp;gt;F({{t}_{i}})\,\!&amp;lt;/math&amp;gt; are estimated from the median ranks. Once &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat{b}\,\!&amp;lt;/math&amp;gt; are obtained, then &amp;lt;math&amp;gt;\hat{\lambda }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat{\gamma }\,\!&amp;lt;/math&amp;gt; can easily be obtained from the above equations.&lt;br /&gt;
For the one-parameter exponential, equations for estimating &#039;&#039;a&#039;&#039; and &#039;&#039;b&#039;&#039; become:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   \hat{a}= &amp;amp; 0, \\ &lt;br /&gt;
  \hat{b}= &amp;amp; \frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Correlation Coefficient&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The estimator of &amp;lt;math&amp;gt;\rho \,\!&amp;lt;/math&amp;gt; is the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,({{x}_{i}}-\overline{x})({{y}_{i}}-\overline{y})}{\sqrt{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{({{x}_{i}}-\overline{x})}^{2}}\cdot \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{({{y}_{i}}-\overline{y})}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
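The one-parameter RRY fit and the sample correlation coefficient can be sketched in a few lines. The failure times below are hypothetical, and Benard's approximation &amp;lt;math&amp;gt;(i-0.3)/(N+0.4)\,\!&amp;lt;/math&amp;gt; is used as a common stand-in for exact median ranks:

```python
import math

times = [10.0, 25.0, 45.0, 70.0, 110.0]   # hypothetical complete failure data
N = len(times)

# Median ranks via Benard's approximation (a stand-in for exact median ranks)
F = [(i - 0.3) / (N + 0.4) for i in range(1, N + 1)]

x = times
y = [math.log(1 - Fi) for Fi in F]        # y_i = ln[1 - F(t_i)]

# 1-parameter exponential RRY: a_hat = 0, b_hat = sum(x*y) / sum(x^2)
b_hat = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi ** 2 for xi in x)
lam_hat = -b_hat                          # since b = -lambda

# Sample correlation coefficient
xbar, ybar = sum(x) / N, sum(y) / N
rho = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) /
       math.sqrt(sum((xi - xbar) ** 2 for xi in x) *
                 sum((yi - ybar) ** 2 for yi in y)))
```

Because &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; decreases as &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; increases, the fitted slope and the correlation coefficient both come out negative, as noted in the probability plotting discussion above.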
&lt;br /&gt;
====RRY Example==== &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER SECTIONS IN THIS PAGE. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
{{:2P_Exponential_Example}}&lt;br /&gt;
&lt;br /&gt;
===Rank Regression on X===&lt;br /&gt;
Similar to rank regression on Y, performing a rank regression on X requires that a straight line be fitted to a set of data points such that the sum of the squares of the horizontal deviations from the points to the line is minimized.&lt;br /&gt;
&lt;br /&gt;
Again the first task is to bring our exponential &#039;&#039;cdf&#039;&#039; function into a linear form. This step is exactly the same as in regression on Y analysis. The deviation from the previous analysis begins on the least squares fit step, since in this case we treat &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; as the dependent variable and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; as the independent variable. The best-fitting straight line to the data, for regression on X (see [[Parameter Estimation]]), is the straight line:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\hat{a}+\hat{b}y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding equations for &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat{b}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{y}_{i}}=\ln [1-F({{t}_{i}})] &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{x}_{i}}={{t}_{i}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The values of &amp;lt;math&amp;gt;F({{t}_{i}})\,\!&amp;lt;/math&amp;gt; are estimated from the median ranks. Once &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat{b}\,\!&amp;lt;/math&amp;gt; are obtained, solve for the unknown &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; value, which corresponds to:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=-\frac{\hat{a}}{\hat{b}}+\frac{1}{\hat{b}}x\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for the parameters from the above equations, we get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;a=-\frac{\hat{a}}{\hat{b}}=\lambda \gamma \Rightarrow \gamma =\hat{a}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=\frac{1}{\hat{b}}=-\lambda \Rightarrow \lambda =-\frac{1}{\hat{b}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the one-parameter exponential case, the equations for estimating &#039;&#039;a&#039;&#039; and &#039;&#039;b&#039;&#039; become:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   \hat{a}= &amp;amp; 0 \\ &lt;br /&gt;
  \hat{b}= &amp;amp; \frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The correlation coefficient is evaluated as before.&lt;br /&gt;
&lt;br /&gt;
====RRX Example====&lt;br /&gt;
&#039;&#039;&#039;2-Parameter Exponential RRX Example&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Using the same data set from the [[The_Exponential_Distribution#RRY_Example|RRY example above]] and assuming a 2-parameter exponential distribution, estimate the parameters and determine the correlation coefficient estimate, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, using rank regression on X.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The table constructed for the RRY analysis applies to this example also. Using the values from this table, we get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   \hat{b}= &amp;amp; \frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{t}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{t}_{i}}\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}}{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{14}} \\ &lt;br /&gt;
   \\ &lt;br /&gt;
  \hat{b}= &amp;amp; \frac{-927.4899-(630)(-13.2315)/14}{22.1148-{{(-13.2315)}^{2}}/14}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=-34.5563\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{t}_{i}}}{14}-\hat{b}\frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\frac{630}{14}-(-34.5563)\frac{(-13.2315)}{14}=12.3406\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\lambda }=-\frac{1}{\hat{b}}=-\frac{1}{(-34.5563)}=0.0289\text{ failures/hour}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\gamma }=\hat{a}=12.3406\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
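The arithmetic above can be reproduced directly from the column sums quoted in the solution:

```python
N = 14
sum_t, sum_y = 630.0, -13.2315          # column sums from the RRY example table
sum_ty, sum_y2 = -927.4899, 22.1148

# RRX slope and intercept (regression of t on y)
b_hat = (sum_ty - sum_t * sum_y / N) / (sum_y2 - sum_y ** 2 / N)
a_hat = sum_t / N - b_hat * sum_y / N

lam_hat = -1.0 / b_hat    # failures/hour
gamma_hat = a_hat         # hours
```

The results match the hand calculation: &amp;lt;math&amp;gt;\hat{b}\approx -34.5563\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\hat{a}\approx 12.3406\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\hat{\lambda }\approx 0.0289\,\!&amp;lt;/math&amp;gt; failures/hour.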
&lt;br /&gt;
The correlation coefficient is found to be:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=-0.9679\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the equation for regression on Y is not necessarily the same as that for regression on X. The only time the two regression methods yield identical results is when the data lie perfectly on a line. If this were the case, the correlation coefficient would be &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;. The negative value of the correlation coefficient is due to the fact that the slope of the exponential probability plot is negative.&lt;br /&gt;
&lt;br /&gt;
This example can be repeated using Weibull++, choosing two-parameter exponential and rank regression on X (RRX) methods for analysis, as shown below.&lt;br /&gt;
The estimated parameters and the correlation coefficient using Weibull++ were found to be:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{array}{*{35}{l}}&lt;br /&gt;
   \hat{\lambda }= &amp;amp; 0.0289 \text{ failures/hour} \\&lt;br /&gt;
   \hat{\gamma}= &amp;amp; 12.3395 \text{ hours} \\&lt;br /&gt;
   \hat{\rho} = &amp;amp;-0.9679  \\&lt;br /&gt;
\end{array}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Exponential Distribution Example 3 Data Folio.png|center|700px|]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The probability plot can be obtained simply by clicking the &#039;&#039;&#039;Plot&#039;&#039;&#039; icon.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Exponential Distribution Example 3 Plot.png|center|600px|]]&lt;br /&gt;
&lt;br /&gt;
===Maximum Likelihood Estimation===&lt;br /&gt;
As outlined in [[Parameter Estimation]], maximum likelihood estimation works by developing a likelihood function based on the available data and finding the values of the parameter estimates that maximize the likelihood function. This can be achieved by using iterative methods to determine the parameter estimate values that maximize the likelihood function. This can be rather difficult and time-consuming, particularly when dealing with the three-parameter distribution. Another method of finding the parameter estimates involves taking the partial derivatives of the likelihood equation with respect to the parameters, setting the resulting equations equal to zero, and solving simultaneously to determine the values of the parameter estimates. The log-likelihood functions and associated partial derivatives used to determine maximum likelihood estimates for the exponential distribution are covered in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
&lt;br /&gt;
====MLE Example====&lt;br /&gt;
&#039;&#039;&#039;MLE for the Exponential Distribution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Using the same data set from the [[The_Exponential_Distribution#RRY_Example|RRY and RRX examples above]] and assuming a 2-parameter exponential distribution, estimate the parameters using the MLE method.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this example, we have complete data only. The partial derivative of the log-likelihood function, &amp;lt;math&amp;gt;\Lambda ,\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial \Lambda }{\partial \lambda }=\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,\left[ \frac{1}{\lambda }-\left( {{T}_{i}}-\gamma  \right) \right]=\underset{i=1}{\overset{14}{\mathop \sum }}\,\left[ \frac{1}{\lambda }-\left( {{T}_{i}}-\gamma  \right) \right]=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Complete descriptions of the partial derivatives can be found in [[Appendix:_Log-Likelihood_Equations|Appendix D]]. Recall that when using the MLE method for the exponential distribution, the value of &amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt; is equal to that of the first failure time. The first failure occurred at 5 hours, thus &amp;lt;math&amp;gt;\gamma =5\,\!&amp;lt;/math&amp;gt; hours. Substituting the values for &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt; we get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{14}{\hat{\lambda }}=560\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\lambda }=0.025\text{ failures/hour}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
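The closing step is one line of arithmetic, using the sum of shifted failure times quoted in the solution:

```python
N = 14
gamma_hat = 5.0        # first failure time, per the MLE convention for gamma
sum_shifted = 560.0    # sum of (T_i - gamma_hat) over the 14 failures

# Setting the partial derivative N/lambda - sum(T_i - gamma) to zero gives:
lam_hat = N / sum_shifted
```

This recovers &amp;lt;math&amp;gt;\hat{\lambda }=0.025\,\!&amp;lt;/math&amp;gt; failures/hour, and substituting it back makes the partial derivative zero.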
&lt;br /&gt;
&lt;br /&gt;
Using Weibull++:&lt;br /&gt;
&lt;br /&gt;
[[Image:Exponential Distribution Example 4 Data.png|center|700px|]]&lt;br /&gt;
&lt;br /&gt;
The probability plot is:&lt;br /&gt;
&lt;br /&gt;
[[Image:Exponential Distribution Example 4 Plot.png|center|650px|]]&lt;br /&gt;
&lt;br /&gt;
==Confidence Bounds==&lt;br /&gt;
In this section, we present the methods used in the application to estimate the different types of confidence bounds for exponentially distributed data. The complete derivations were presented in detail (for a general function) in the chapter for [[Confidence Bounds]].&lt;br /&gt;
At this time we should point out that exact confidence bounds for the exponential distribution have been derived, and exist in a closed form, utilizing the &amp;lt;math&amp;gt;{{\chi }^{2}}\,\!&amp;lt;/math&amp;gt; distribution. These are described in detail in Kececioglu  [[Appendix:_Life_Data_Analysis_References|[20]]], and are discussed in more detail in the [[Reliability Test Design|test design chapter]]. For most exponential data analyses, Weibull++ will use the approximate confidence bounds, provided from the Fisher information matrix or the likelihood ratio, in order to stay consistent with all of the other available distributions in the application.&lt;br /&gt;
&lt;br /&gt;
===Fisher Matrix Bounds===&lt;br /&gt;
====Bounds on the Parameters====&lt;br /&gt;
For the failure rate &amp;lt;math&amp;gt;\hat{\lambda }\,\!&amp;lt;/math&amp;gt; the upper (&amp;lt;math&amp;gt;{{\lambda }_{U}}\,\!&amp;lt;/math&amp;gt;) and lower (&amp;lt;math&amp;gt;{{\lambda }_{L}}\,\!&amp;lt;/math&amp;gt;) bounds are estimated by Nelson [[Appendix:_Life_Data_Analysis_References|[30]]]:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\lambda }_{U}}= &amp;amp; \hat{\lambda }\cdot {{e}^{\left[ \tfrac{{{K}_{\alpha }}\sqrt{Var(\hat{\lambda })}}{\hat{\lambda }} \right]}} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; {{\lambda }_{L}}= &amp;amp; \frac{\hat{\lambda }}{{{e}^{\left[ \tfrac{{{K}_{\alpha }}\sqrt{Var(\hat{\lambda })}}{\hat{\lambda }} \right]}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is defined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt; is the confidence level, then &amp;lt;math&amp;gt;\alpha =\tfrac{1-\delta }{2}\,\!&amp;lt;/math&amp;gt; for the two-sided bounds, and &amp;lt;math&amp;gt;\alpha =1-\delta \,\!&amp;lt;/math&amp;gt; for the one-sided bounds.&lt;br /&gt;
&lt;br /&gt;
The variance of &amp;lt;math&amp;gt;\hat{\lambda },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;Var(\hat{\lambda }),\,\!&amp;lt;/math&amp;gt; is estimated from the Fisher matrix, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Var(\hat{\lambda })={{\left( -\frac{{{\partial }^{2}}\Lambda }{\partial {{\lambda }^{2}}} \right)}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Lambda \,\!&amp;lt;/math&amp;gt; is the log-likelihood function of the exponential distribution, described in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
&lt;br /&gt;
Note that no true MLE solution exists for the case of the two-parameter exponential distribution. The mathematics simply break down while trying to simultaneously solve the partial derivative equations for both the &amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; parameters, resulting in unrealistic conditions. The way around this conundrum involves setting &amp;lt;math&amp;gt;\gamma ={{t}_{1}},\,\!&amp;lt;/math&amp;gt; or the first time-to-failure, and calculating &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; in the regular fashion for this methodology. Weibull++ treats &amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt; as a constant when computing bounds, (i.e., &amp;lt;math&amp;gt;Var(\hat{\gamma })=0\,\!&amp;lt;/math&amp;gt;). (See the discussion in [[Appendix:_Log-Likelihood_Equations|Appendix D]] for more information.)&lt;br /&gt;
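The Fisher matrix bounds above can be sketched end to end. For complete data, the log-likelihood gives &amp;lt;math&amp;gt;-\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\lambda }^{2}}}=N/{{\lambda }^{2}}\,\!&amp;lt;/math&amp;gt;, so &amp;lt;math&amp;gt;Var(\hat{\lambda })\approx {{\hat{\lambda }}^{2}}/N\,\!&amp;lt;/math&amp;gt;; the sample size and estimate below reuse the five-failure example that appears later in this chapter, purely for illustration:

```python
import math
from statistics import NormalDist

lam_hat, N = 0.013514, 5      # illustrative estimate and sample size
delta = 0.85                  # two-sided confidence level
alpha = (1 - delta) / 2
K = NormalDist().inv_cdf(1 - alpha)   # K_alpha from the standard normal

# For complete data, -d2(Lambda)/d(lambda)2 = N/lambda^2, so:
var = lam_hat ** 2 / N

lam_U = lam_hat * math.exp(K * math.sqrt(var) / lam_hat)
lam_L = lam_hat / math.exp(K * math.sqrt(var) / lam_hat)
```

Note that the bounds are symmetric about &amp;lt;math&amp;gt;\hat{\lambda }\,\!&amp;lt;/math&amp;gt; on a logarithmic scale, which is a direct consequence of the exponential form of the bound equations.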
&lt;br /&gt;
====Bounds on Reliability====&lt;br /&gt;
The reliability of the two-parameter exponential distribution is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{R}(t;\hat{\lambda })={{e}^{-\hat{\lambda }(t-\hat{\gamma })}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding confidence bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-{{\lambda }_{U}}(t-\hat{\gamma })}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-{{\lambda }_{L}}(t-\hat{\gamma })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These equations hold true for the 1-parameter exponential distribution, with &amp;lt;math&amp;gt;\gamma =0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Bounds on Time====&lt;br /&gt;
The bounds around time for a given exponential percentile, or reliability value, are estimated by first solving the reliability equation with respect to time, or reliable life:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{t}=-\frac{1}{{\hat{\lambda }}}\cdot \ln (R)+\hat{\gamma }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding confidence bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{t}_{U}}= &amp;amp; -\frac{1}{{{\lambda }_{L}}}\cdot \ln (R)+\hat{\gamma } \\ &lt;br /&gt;
 &amp;amp; {{t}_{L}}= &amp;amp; -\frac{1}{{{\lambda }_{U}}}\cdot \ln (R)+\hat{\gamma }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same equations apply for the one-parameter exponential with &amp;lt;math&amp;gt;\gamma =0.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
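Once bounds on &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are in hand, the reliability and time bounds follow by direct substitution. A sketch for the 1-parameter case (&amp;lt;math&amp;gt;\gamma =0\,\!&amp;lt;/math&amp;gt;), with hypothetical &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; bounds standing in for output of the previous step:

```python
import math

# Hypothetical two-sided bounds on lambda (e.g., from the Fisher matrix step)
lam_hat, lam_L_b, lam_U_b = 0.013514, 0.0071, 0.0257
gamma_hat = 0.0            # 1-parameter exponential

t = 50.0
R_L = math.exp(-lam_U_b * (t - gamma_hat))   # lower reliability uses upper lambda
R_U = math.exp(-lam_L_b * (t - gamma_hat))   # upper reliability uses lower lambda

R = 0.90
t_hat = -math.log(R) / lam_hat + gamma_hat
t_L = -math.log(R) / lam_U_b + gamma_hat     # lower time bound uses upper lambda
t_U = -math.log(R) / lam_L_b + gamma_hat
```

The swap of upper and lower &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values reflects the fact that reliability and reliable life both decrease as the failure rate increases.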
&lt;br /&gt;
===Likelihood Ratio Confidence Bounds===&lt;br /&gt;
====Bounds on Parameters====&lt;br /&gt;
For one-parameter distributions such as the exponential, the likelihood confidence bounds are calculated by finding values for &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; that satisfy:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;-2\cdot \text{ln}\left( \frac{L(\theta )}{L(\hat{\theta })} \right)=\chi _{\alpha ;1}^{2}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This equation can be rewritten as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(\theta )=L(\hat{\theta })\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For complete data, the likelihood function for the exponential distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(\lambda )=\underset{i=1}{\overset{N}{\mathop \prod }}\,f({{t}_{i}};\lambda )=\underset{i=1}{\overset{N}{\mathop \prod }}\,\lambda \cdot {{e}^{-\lambda \cdot {{t}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; values represent the original time-to-failure data.  For a given value of &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;, values for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; can be found which represent the maximum and minimum values that satisfy the above likelihood ratio equation. These represent the confidence bounds for the parameters at a confidence level &amp;lt;math&amp;gt;\delta ,\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\alpha =\delta \,\!&amp;lt;/math&amp;gt; for two-sided bounds and &amp;lt;math&amp;gt;\alpha =2\delta -1\,\!&amp;lt;/math&amp;gt; for one-sided.&lt;br /&gt;
&lt;br /&gt;
=====Example: LR Bounds for Lambda=====&lt;br /&gt;
Five units are put on a reliability test and experience failures at 20, 40, 60, 100, and 150 hours. Assuming an exponential distribution, the MLE parameter estimate is calculated to be &amp;lt;math&amp;gt;\hat{\lambda }=0.013514\,\!&amp;lt;/math&amp;gt;.  Calculate the 85% two-sided confidence bounds on these parameters using the likelihood ratio method.&lt;br /&gt;
  &lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The first step is to calculate the likelihood function for the parameter estimates:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  L(\hat{\lambda })= &amp;amp; \underset{i=1}{\overset{N}{\mathop \prod }}\,f({{x}_{i}};\hat{\lambda })=\underset{i=1}{\overset{N}{\mathop \prod }}\,\hat{\lambda }\cdot {{e}^{-\hat{\lambda }\cdot {{x}_{i}}}} \\ &lt;br /&gt;
  L(\hat{\lambda })= &amp;amp; \underset{i=1}{\overset{5}{\mathop \prod }}\,0.013514\cdot {{e}^{-0.013514\cdot {{x}_{i}}}} \\ &lt;br /&gt;
  L(\hat{\lambda })= &amp;amp; 3.03647\times {{10}^{-12}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{x}_{i}}\,\!&amp;lt;/math&amp;gt; are the original time-to-failure data points. We can now rearrange the likelihood ratio equation to the form:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(\lambda )-L(\hat{\lambda })\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since our specified confidence level, &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt;, is 85%, we can calculate the value of the chi-squared statistic, &amp;lt;math&amp;gt;\chi _{0.85;1}^{2}=2.072251.\,\!&amp;lt;/math&amp;gt; We can now substitute this information into the equation:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L(\lambda )-L(\hat{\lambda })\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}= &amp;amp; 0, \\ &lt;br /&gt;
  L(\lambda )-3.03647\times {{10}^{-12}}\cdot {{e}^{\tfrac{-2.072251}{2}}}= &amp;amp; 0, \\ &lt;br /&gt;
  L(\lambda )-1.07742\times {{10}^{-12}}= &amp;amp; 0.  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It now remains to find the values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; which satisfy this equation. Since there is only one parameter, there are only two values of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; that will satisfy the equation.  These values represent the &amp;lt;math&amp;gt;\delta =85%\,\!&amp;lt;/math&amp;gt; two-sided confidence limits of the parameter estimate &amp;lt;math&amp;gt;\hat{\lambda }\,\!&amp;lt;/math&amp;gt;. For our problem, the confidence limits are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{\lambda }_{0.85}}=(0.006572,0.024172)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
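The two roots can be found with a simple bisection on each side of &amp;lt;math&amp;gt;\hat{\lambda }\,\!&amp;lt;/math&amp;gt;; a sketch reproducing the limits quoted above (the bracketing endpoints 1e-6 and 0.2 are arbitrary choices that enclose the roots):

```python
import math

times = [20.0, 40.0, 60.0, 100.0, 150.0]
lam_hat = len(times) / sum(times)            # 5/370 ~ 0.013514
chi2 = 2.072251                              # chi-squared(0.85; 1) from the text

def L(lam):
    """Complete-data exponential likelihood."""
    return math.prod(lam * math.exp(-lam * t) for t in times)

target = L(lam_hat) * math.exp(-chi2 / 2)    # ~1.07742e-12

def bisect(lo, hi):
    # L(lam) - target changes sign exactly once on each side of lam_hat
    for _ in range(200):
        mid = (lo + hi) / 2
        if (L(lo) - target) * (L(mid) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

lam_lower = bisect(1e-6, lam_hat)
lam_upper = bisect(lam_hat, 0.2)
```

The recovered roots agree with the stated limits &amp;lt;math&amp;gt;(0.006572,0.024172)\,\!&amp;lt;/math&amp;gt;.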
&lt;br /&gt;
====Bounds on Time and Reliability====&lt;br /&gt;
In order to calculate the bounds on a time estimate for a given reliability, or on a reliability estimate for a given time, the likelihood function needs to be rewritten in terms of one parameter and time/reliability, so that the maximum and minimum values of the time can be observed as the parameter is varied. This can be accomplished by substituting a form of the exponential reliability equation into the likelihood function. The exponential reliability equation can be written as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R={{e}^{-\lambda \cdot t}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be rearranged to the form:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda =\frac{-\text{ln}(R)}{t}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This equation can now be substituted into the likelihood ratio equation to produce a likelihood equation in terms of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;R:\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(t/R)=\underset{i=1}{\overset{N}{\mathop \prod }}\,\left( \frac{-\text{ln}(R)}{t} \right)\cdot {{e}^{\left( \tfrac{\text{ln}(R)}{t} \right)\cdot {{x}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The unknown parameter &amp;lt;math&amp;gt;t/R\,\!&amp;lt;/math&amp;gt; depends on what type of bounds are being determined. If one is trying to determine the bounds on time for a given reliability, then &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is a known constant and &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; is the unknown parameter. Conversely, if one is trying to determine the bounds on reliability for a given time, then &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; is a known constant and &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the unknown parameter. Either way, the likelihood ratio function can be solved for the values of interest.&lt;br /&gt;
&lt;br /&gt;
=====Example: LR Bounds on Time =====&lt;br /&gt;
For the data given above for the [[The_Exponential_Distribution#Example:_LR_Bounds_for_Lambda|LR Bounds on Lambda example]] (five failures at 20, 40, 60, 100 and 150 hours), determine the 85% two-sided confidence bounds on the time estimate for a reliability of 90%. The ML estimate for the time at &amp;lt;math&amp;gt;R(t)=90%\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\hat{t}=7.797\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
  &lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this example, we are trying to determine the 85% two-sided confidence bounds on the time estimate of 7.797. This is accomplished by substituting &amp;lt;math&amp;gt;R=0.90\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha =0.85\,\!&amp;lt;/math&amp;gt; into the likelihood ratio bound equation. It now remains to find the values of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; which satisfy this equation. Since there is only one parameter, there are only two values of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; that will satisfy the equation. These values represent the &amp;lt;math&amp;gt;\delta =85%\,\!&amp;lt;/math&amp;gt; two-sided confidence limits of the time estimate &amp;lt;math&amp;gt;\hat{t}\,\!&amp;lt;/math&amp;gt;. For our problem, the confidence limits are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\hat{t}}_{R=0.9}}=(4.359,16.033)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
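As a numerical cross-check, the two limits can be found by solving the likelihood ratio criterion for the values of t at which the log-likelihood drops by half the chi-squared critical value. A minimal sketch, assuming SciPy is available, using the failure times from the example:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

# Failure data from the example (complete data, five failures)
failures = np.array([20.0, 40.0, 60.0, 100.0, 150.0])
N, T = len(failures), failures.sum()

lam_hat = N / T                       # MLE of the exponential failure rate
R, conf = 0.90, 0.85                  # target reliability and confidence level
t_hat = -np.log(R) / lam_hat          # ML estimate of the time at R(t) = 90%

def loglik(lam):
    return N * np.log(lam) - lam * T

# Likelihood ratio criterion: 2*[lnL(lam_hat) - lnL(lam)] = chi2(conf; 1)
target = loglik(lam_hat) - chi2.ppf(conf, 1) / 2.0

def g(t):
    return loglik(-np.log(R) / t) - target

t_lo = brentq(g, 0.5, t_hat)          # g changes sign below t_hat
t_hi = brentq(g, t_hat, 100.0)        # g changes sign above t_hat
```

Running this yields limits close to the chapter's (4.359, 16.033), with small differences attributable only to rounding.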
&lt;br /&gt;
=====Example: LR Bounds on Reliability=====&lt;br /&gt;
Again using the data given above for the [[The_Exponential_Distribution#Example:_LR_Bounds_for_Lambda|LR Bounds on Lambda example]] (five failures at 20, 40, 60, 100 and 150 hours), determine the 85% two-sided confidence bounds on the reliability estimate for a time of &amp;lt;math&amp;gt;t=50\,\!&amp;lt;/math&amp;gt;. The ML estimate for the reliability at &amp;lt;math&amp;gt;t=50\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\hat{R}=50.881%\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this example, we are trying to determine the 85% two-sided confidence bounds on the reliability estimate of 50.881%. This is accomplished by substituting &amp;lt;math&amp;gt;t=50\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha =0.85\,\!&amp;lt;/math&amp;gt; into the likelihood ratio bound equation. It now remains to find the values of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; which satisfy this equation. Since there is only one parameter, there are only two values of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; that will satisfy the equation. These values represent the &amp;lt;math&amp;gt;\delta =85%\,\!&amp;lt;/math&amp;gt; two-sided confidence limits of the reliability estimate &amp;lt;math&amp;gt;\hat{R}\,\!&amp;lt;/math&amp;gt;. For our problem, the confidence limits are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\hat{R}}_{t=50}}=(29.861%,71.794%)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
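The same root-finding approach works with t held fixed and R as the unknown. A minimal sketch, again assuming SciPy:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

failures = np.array([20.0, 40.0, 60.0, 100.0, 150.0])
N, T = len(failures), failures.sum()
lam_hat = N / T

t, conf = 50.0, 0.85
R_hat = np.exp(-lam_hat * t)          # ML reliability at t = 50, about 50.88%

def loglik(lam):
    return N * np.log(lam) - lam * T

target = loglik(lam_hat) - chi2.ppf(conf, 1) / 2.0

def g(R):
    # substitute lambda = -ln(R)/t into the log-likelihood
    return loglik(-np.log(R) / t) - target

R_lo = brentq(g, 1e-6, R_hat)         # lower reliability limit
R_hi = brentq(g, R_hat, 1.0 - 1e-9)   # upper reliability limit
```

The roots land close to the chapter's (29.861%, 71.794%).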
&lt;br /&gt;
===Bayesian Confidence Bounds===&lt;br /&gt;
====Bounds on Parameters====&lt;br /&gt;
From [[Confidence Bounds]], we know that the posterior distribution of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; can be written as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(\lambda |Data)=\frac{L(Data|\lambda )\varphi (\lambda )}{\int_{0}^{\infty }L(Data|\lambda )\varphi (\lambda )d\lambda }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\varphi (\lambda )=\tfrac{1}{\lambda }\,\!&amp;lt;/math&amp;gt; is the non-informative prior of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With the above prior distribution, &amp;lt;math&amp;gt;f(\lambda |Data)\,\!&amp;lt;/math&amp;gt; can be rewritten as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(\lambda |Data)=\frac{L(Data|\lambda )\tfrac{1}{\lambda }}{\int_{0}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The one-sided upper bound of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=P(\lambda \le {{\lambda }_{U}})=\int_{0}^{{{\lambda }_{U}}}f(\lambda |Data)d\lambda \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The one-sided lower bound of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;1-CL=P(\lambda \le {{\lambda }_{L}})=\int_{0}^{{{\lambda }_{L}}}f(\lambda |Data)d\lambda \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The two-sided bounds of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=P({{\lambda }_{L}}\le \lambda \le {{\lambda }_{U}})=\int_{{{\lambda }_{L}}}^{{{\lambda }_{U}}}f(\lambda |Data)d\lambda \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
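For a complete exponential sample with N failures and total time on test T, the posterior kernel under the 1/&amp;lambda; prior is &amp;lambda;^(N-1)&amp;middot;exp(-&amp;lambda;T), i.e., a gamma distribution, so the bound integrals can be evaluated directly. A minimal sketch, assuming SciPy and reusing the five failure times from the earlier examples:

```python
import numpy as np
from scipy.stats import gamma
from scipy.integrate import quad

failures = np.array([20.0, 40.0, 60.0, 100.0, 150.0])
N, T = len(failures), failures.sum()

# With the non-informative prior 1/lambda and a complete exponential sample,
# the posterior lambda^(N-1) * exp(-lambda*T) is a Gamma(N, scale=1/T) density.
post = gamma(N, scale=1.0 / T)

CL = 0.85
lam_U = post.ppf(CL)                        # one-sided upper bound
lam_L = post.ppf(1.0 - CL)                  # one-sided lower bound
lam_2L = post.ppf((1.0 - CL) / 2.0)         # two-sided lower bound
lam_2U = post.ppf((1.0 + CL) / 2.0)         # two-sided upper bound

# Verify the defining integral CL = P(lambda <= lam_U) directly.
kernel = lambda l: l ** (N - 1) * np.exp(-l * T)
num, _ = quad(kernel, 0.0, lam_U)
den, _ = quad(kernel, 0.0, 0.2)             # integrand is negligible beyond 0.2
```

The quadrature check confirms that the gamma quantiles satisfy the posterior integral equations above.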
&lt;br /&gt;
====Bounds on Time (Type 1)====&lt;br /&gt;
The reliable life equation is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;t=\frac{-\ln R}{\lambda }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the one-sided upper bound on time we have:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(t\le {{t}_{U}})=\underset{}{\overset{}{\mathop{\Pr }}}\,(\frac{-\ln R}{\lambda }\le {{t}_{U}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above equation can be rewritten in terms of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(\frac{-\ln R}{{{t}_{U}}}\le \lambda )\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the posterior distribution equation above, we have:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\frac{\int_{\tfrac{-\ln R}{{{t}_{U}}}}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }{\int_{0}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above equation is solved w.r.t. &amp;lt;math&amp;gt;{{t}_{U}}.\,\!&amp;lt;/math&amp;gt; The same method is applied for one-sided lower and two-sided bounds on time.&lt;br /&gt;
&lt;br /&gt;
====Bounds on Reliability (Type 2)====&lt;br /&gt;
The one-sided upper bound on reliability is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(R\le {{R}_{U}})=\underset{}{\overset{}{\mathop{\Pr }}}\,(\exp (-\lambda t)\le {{R}_{U}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The above equation can be rewritten in terms of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(\frac{-\ln {{R}_{U}}}{t}\le \lambda )\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From the equation for posterior distribution we have:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\frac{\int_{\tfrac{-\ln {{R}_{U}}}{t}}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }{\int_{0}^{\infty }L(Data|\lambda )\tfrac{1}{\lambda }d\lambda }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The above equation is solved w.r.t. &amp;lt;math&amp;gt;{{R}_{U}}.\,\!&amp;lt;/math&amp;gt; The same method can be used to calculate one-sided lower and two-sided bounds on reliability.&lt;br /&gt;
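Because the posterior of &amp;lambda; is a gamma distribution for complete exponential data (see the sketch in the parameter-bounds section), both bound equations reduce to evaluating a posterior survival quantile. A minimal sketch, assuming SciPy:

```python
import numpy as np
from scipy.stats import gamma

failures = np.array([20.0, 40.0, 60.0, 100.0, 150.0])
N, T = len(failures), failures.sum()
post = gamma(N, scale=1.0 / T)    # posterior of lambda under the 1/lambda prior

CL, R, t = 0.85, 0.90, 50.0

# Type 1: CL = P(-ln(R)/t_U <= lambda), so -ln(R)/t_U is the lambda value
# exceeded with posterior probability CL.
t_U = -np.log(R) / post.isf(CL)

# Type 2: CL = P(-ln(R_U)/t <= lambda)  =>  R_U = exp(-t * post.isf(CL))
R_U = np.exp(-t * post.isf(CL))
```

Both results can be checked by substituting back into the defining posterior probability.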
&lt;br /&gt;
==Exponential Distribution Examples==&lt;br /&gt;
{{:Exponential Distribution Examples}}&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=The_Generalized_Gamma_Distribution&amp;diff=65101</id>
		<title>The Generalized Gamma Distribution</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=The_Generalized_Gamma_Distribution&amp;diff=65101"/>
		<updated>2017-07-14T22:06:38Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Generalized Gamma Reliability Function */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|12|The Generalized Gamma Distribution}}&lt;br /&gt;
While not as frequently used for modeling life data as the previous distributions, the generalized gamma distribution does have the ability to mimic the attributes of other distributions such as the Weibull or lognormal, based on the values of the distribution&#039;s parameters. While the generalized gamma distribution is not often used to model life data by itself (mostly due to its mathematical complexity and its requirement of large sample sizes (&amp;gt;30) for convergence), its ability to behave like other more commonly-used life distributions is sometimes used to determine which of those life distributions should be used to model a particular set of data.&lt;br /&gt;
&lt;br /&gt;
===Generalized Gamma Probability Density Function===&lt;br /&gt;
The generalized gamma function is a 3-parameter distribution. One version of the generalized gamma distribution uses the parameters &#039;&#039;k&#039;&#039;, &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt;. The &#039;&#039;pdf&#039;&#039; for this form of the generalized gamma distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\frac{\beta }{\Gamma (k)\cdot \theta }{{\left( \frac{t}{\theta } \right)}^{k\beta -1}}{{e}^{-{{\left( \frac{t}{\theta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\theta &amp;gt;0\,\!&amp;lt;/math&amp;gt; is a scale parameter, &amp;lt;math&amp;gt;\beta &amp;gt;0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;k&amp;gt;0\,\!&amp;lt;/math&amp;gt; are shape parameters and &amp;lt;math&amp;gt;\Gamma (x)\,\!&amp;lt;/math&amp;gt; is the gamma function of &#039;&#039;x&#039;&#039;, which is defined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Gamma (x)=\int_{0}^{\infty }{{{s}^{x-1}}}\cdot {{e}^{-s}}ds\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this version of the distribution, however, convergence problems arise that severely limit its usefulness. Even with data sets containing 200 or more data points, the MLE methods may fail to converge. Further adding to the confusion is the fact that distributions with widely different values of &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\theta \,\!&amp;lt;/math&amp;gt; may appear almost identical, as discussed in Lawless [[Appendix:_Life_Data_Analysis_References|[21]]]. In order to overcome these difficulties, Weibull++ uses a reparameterization with parameters &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;, as shown in [[Appendix:_Life_Data_Analysis_References|[21]]], where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \mu =\ln (\theta )+\frac{1}{\beta }\cdot \ln \left( \frac{1}{{{\lambda }^{2}}} \right) \\ &lt;br /&gt;
 &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \sigma =\frac{1}{\beta \sqrt{k}} \\ &lt;br /&gt;
 &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \lambda =\frac{1}{\sqrt{k}} \\ &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;-\infty &amp;lt;\mu &amp;lt;\infty ,\text{ }\sigma &amp;gt;0\text{, }0&amp;lt;\lambda .\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While this makes the distribution converge much more easily in computations, it does not facilitate manual manipulation of the equation. By allowing &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; to become negative, the &#039;&#039;pdf&#039;&#039; of the reparameterized distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\left\{ \begin{matrix}&lt;br /&gt;
   \tfrac{|\lambda |}{\sigma \cdot t}\cdot \tfrac{1}{\Gamma \left( \tfrac{1}{{{\lambda }^{2}}} \right)}\cdot {{e}^{\left[ \tfrac{\lambda \cdot \tfrac{\text{ln}(t)-\mu }{\sigma }+\text{ln}\left( \tfrac{1}{{{\lambda }^{2}}} \right)-{{e}^{\lambda \cdot \tfrac{\text{ln}(t)-\mu }{\sigma }}}}{{{\lambda }^{2}}} \right]}}\text{ if }\lambda \ne 0  \\&lt;br /&gt;
   \tfrac{1}{t\cdot \sigma \sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}(t)-\mu }{\sigma } \right)}^{2}}}}\text{                            if }\lambda =0  \\&lt;br /&gt;
\end{matrix} \right.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
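The piecewise pdf above can be evaluated directly, and its special cases checked numerically. A minimal sketch, assuming SciPy; the parameter values are arbitrary illustrations:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln
from scipy.stats import lognorm, weibull_min

def gengamma_pdf(t, mu, sigma, lam):
    """pdf of the reparameterized generalized gamma distribution."""
    z = (np.log(t) - mu) / sigma
    if lam == 0.0:                               # lognormal branch
        return np.exp(-0.5 * z * z) / (t * sigma * np.sqrt(2.0 * np.pi))
    il2 = 1.0 / lam**2                           # 1 / lambda^2
    logf = (np.log(abs(lam) / (sigma * t)) - gammaln(il2)
            + (lam * z + np.log(il2) - np.exp(lam * z)) * il2)
    return np.exp(logf)

# Sanity checks: the density integrates to 1, and with lam = 1 it matches a
# Weibull with beta = 1/sigma, eta = exp(mu); with lam = 0, a lognormal.
area, _ = quad(gengamma_pdf, 0, np.inf, args=(1.0, 0.5, 1.0))
w_err = abs(gengamma_pdf(2.0, 1.0, 0.5, 1.0)
            - weibull_min.pdf(2.0, 1.0 / 0.5, scale=np.exp(1.0)))
ln_err = abs(gengamma_pdf(2.0, 1.0, 0.5, 0.0)
             - lognorm.pdf(2.0, 0.5, scale=np.exp(1.0)))
```

Working with the log of the density, as above, avoids overflow in the gamma function for small values of lambda.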
&lt;br /&gt;
===Generalized Gamma Reliability Function===&lt;br /&gt;
The reliability function for the generalized gamma distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t)=\left\{ \begin{array}{*{35}{l}}&lt;br /&gt;
   1-{{\Gamma }_{I}}\left( \tfrac{1}{{{\lambda }^{2}}};\tfrac{{{e}^{\lambda \left( \tfrac{\text{ln}(t)-\mu }{\sigma } \right)}}}{{{\lambda }^{2}}} \right)\text{ if }\lambda &amp;gt;0  \\&lt;br /&gt;
   1-\Phi \left( \tfrac{\text{ln}(t)-\mu }{\sigma } \right)\text{               if }\lambda =0  \\&lt;br /&gt;
   {{\Gamma }_{I}}\left( \tfrac{1}{{{\lambda }^{2}}};\tfrac{{{e}^{\lambda \left( \tfrac{\text{ln}(t)-\mu }{\sigma } \right)}}}{{{\lambda }^{2}}} \right)\text{       if }\lambda &amp;lt;0  \\&lt;br /&gt;
\end{array} \right.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z}{{e}^{-\tfrac{{{x}^{2}}}{2}}}dx\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and &amp;lt;math&amp;gt;{{\Gamma }_{I}}(k;x)\,\!&amp;lt;/math&amp;gt; is the incomplete gamma function of &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, which is given by:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\Gamma }_{I}}(k;x)=\frac{1}{\Gamma (k)}\int_{0}^{x}{{s}^{k-1}}{{e}^{-s}}ds\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Gamma (x)\,\!&amp;lt;/math&amp;gt; is the gamma function of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Note that in Weibull++ the probability plot of the generalized gamma is created on lognormal probability paper. This means that the fitted line will not be straight unless &amp;lt;math&amp;gt;\lambda =0.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
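The piecewise reliability function maps directly onto the regularized incomplete gamma function. A minimal sketch, assuming SciPy:

```python
import numpy as np
from scipy.special import gammainc      # regularized lower incomplete gamma
from scipy.stats import norm

def gengamma_reliability(t, mu, sigma, lam):
    """R(t) for the reparameterized generalized gamma distribution."""
    z = (np.log(t) - mu) / sigma
    if lam == 0.0:
        return norm.sf(z)               # 1 - Phi(z), the lognormal case
    x = np.exp(lam * z) / lam**2
    if lam > 0:
        return 1.0 - gammainc(1.0 / lam**2, x)
    return gammainc(1.0 / lam**2, x)
```

As a check, with lambda = 1 and sigma = 1 the distribution is exponential with mean exp(mu), so R(t) should equal exp(-t/exp(mu)).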
&lt;br /&gt;
===Generalized Gamma Failure Rate Function===&lt;br /&gt;
As defined in [[Basic Statistical Background]], the failure rate function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (t)=\frac{f(t)}{R(t)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Owing to the complexity of the equations involved, the function will not be displayed here, but the failure rate function for the generalized gamma distribution can be obtained merely by dividing the &#039;&#039;pdf&#039;&#039; by the reliability function.&lt;br /&gt;
&lt;br /&gt;
===Generalized Gamma Reliable Life===&lt;br /&gt;
The reliable life, &amp;lt;math&amp;gt;{{T}_{R}}\,\!&amp;lt;/math&amp;gt;, of a unit for a specified reliability, starting the mission at age zero, is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{T}_{R}}=\left\{ \begin{array}{*{35}{l}}&lt;br /&gt;
   {{e}^{\mu +\tfrac{\sigma }{\lambda }\ln \left[ {{\lambda }^{2}}\Gamma _{I}^{-1}\left( 1-R,\tfrac{1}{{{\lambda }^{2}}} \right) \right]}}\text{  if }\lambda &amp;gt;0  \\&lt;br /&gt;
   {{e}^{\mu +\sigma {{\Phi }^{-1}}(1-R)}}\text{          if }\lambda =0  \\&lt;br /&gt;
   {{e}^{\mu +\tfrac{\sigma }{\lambda }\ln \left[ {{\lambda }^{2}}\Gamma _{I}^{-1}\left( R,\tfrac{1}{{{\lambda }^{2}}} \right) \right]}}\text{     if }\lambda &amp;lt;0  \\&lt;br /&gt;
\end{array} \right.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
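The reliable life can be computed with SciPy's inverse regularized incomplete gamma function; the lambda = 0 branch is simply the lognormal percentile exp(mu + sigma*Phi^-1(1-R)). A minimal sketch:

```python
import numpy as np
from scipy.special import gammaincinv
from scipy.stats import norm

def gengamma_reliable_life(R, mu, sigma, lam):
    """Time at which the reliability equals R, mission starting at age zero."""
    if lam == 0.0:                       # lognormal case: ln(T_R) is normal
        return np.exp(mu + sigma * norm.ppf(1.0 - R))
    q = 1.0 - R if lam > 0 else R
    g = gammaincinv(1.0 / lam**2, q)     # inverse regularized incomplete gamma
    return np.exp(mu + (sigma / lam) * np.log(lam**2 * g))
```

With lambda = 1, sigma = 1, mu = 0 the distribution is exponential with mean 1, so the reliable life for R should be -ln(R).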
&lt;br /&gt;
==Characteristics of the Generalized Gamma Distribution==&lt;br /&gt;
As mentioned previously, the generalized gamma distribution includes other distributions as special cases based on the values of the parameters. &lt;br /&gt;
&lt;br /&gt;
[[Image:WB chp12 pdf.png|center|400px| ]] &lt;br /&gt;
&lt;br /&gt;
:*	The Weibull distribution is a special case when &amp;lt;math&amp;gt;\lambda =1\,\!&amp;lt;/math&amp;gt; and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \beta = \frac{1}{\sigma } \\ &lt;br /&gt;
 &amp;amp; \eta = \exp (\mu )  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:*	 In this case, the generalized distribution has the same behavior as the Weibull for &amp;lt;math&amp;gt;\sigma &amp;gt;1,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\sigma =1,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma &amp;lt;1\,\!&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;\beta &amp;lt;1,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\beta =1,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta &amp;gt;1\,\!&amp;lt;/math&amp;gt; respectively).&lt;br /&gt;
:*	The exponential distribution is a special case when &amp;lt;math&amp;gt;\lambda =1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma =1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:*	The lognormal distribution is a special case when &amp;lt;math&amp;gt;\lambda =0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:*	The gamma distribution is a special case when &amp;lt;math&amp;gt;\lambda =\sigma \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By allowing &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; to take negative values, the generalized gamma distribution can be further extended to include additional distributions as special cases. For example, the Fréchet distribution of maxima (also known as a reciprocal Weibull) is a special case when &amp;lt;math&amp;gt;\lambda =-1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Confidence Bounds==&lt;br /&gt;
The only method available in Weibull++ for confidence bounds for the generalized gamma distribution is the Fisher matrix, which is described next.&lt;br /&gt;
&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
The lower and upper bounds on the parameter &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\mu }_{U}}= &amp;amp; \widehat{\mu }+{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{\mu }_{L}}= &amp;amp; \widehat{\mu }-{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the parameter &amp;lt;math&amp;gt;\widehat{\sigma }\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\ln (\widehat{\sigma })\,\!&amp;lt;/math&amp;gt; is treated as normally distributed, and the bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\sigma }_{U}}= \widehat{\sigma }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{\widehat{\sigma }}}}\text{ (upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{\sigma }_{L}}= \frac{\widehat{\sigma }}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\sigma })}}{\widehat{\sigma }}}}}\text{ (lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the parameter &amp;lt;math&amp;gt;\lambda ,\,\!&amp;lt;/math&amp;gt; the bounds are estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\lambda }_{U}}= &amp;amp; \widehat{\lambda }+{{K}_{\alpha }}\sqrt{Var(\widehat{\lambda })}\text{ (upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{\lambda }_{L}}= &amp;amp; \widehat{\lambda }-{{K}_{\alpha }}\sqrt{Var(\widehat{\lambda })}\text{ (lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is defined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
If &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt; is the confidence level, then &amp;lt;math&amp;gt;\alpha =\tfrac{1-\delta }{2}\,\!&amp;lt;/math&amp;gt; for the two-sided bounds, and &amp;lt;math&amp;gt;\alpha =1-\delta \,\!&amp;lt;/math&amp;gt; for the one-sided bounds.&lt;br /&gt;
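The critical value defined by this integral is the standard normal inverse survival function. A minimal sketch, assuming SciPy:

```python
from scipy.stats import norm

def K_alpha(delta, two_sided=True):
    """Standard normal critical value for confidence level delta."""
    alpha = (1.0 - delta) / 2.0 if two_sided else 1.0 - delta
    return norm.isf(alpha)              # solves alpha = 1 - Phi(K_alpha)
```

For example, a 90% confidence level gives the familiar 1.645 (two-sided) and 1.282 (one-sided) multipliers.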
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;\widehat{\mu }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{\sigma }\,\!&amp;lt;/math&amp;gt; are estimated as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \left( \begin{matrix}&lt;br /&gt;
   \widehat{Var}\left( \widehat{\mu } \right) &amp;amp; \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right) &amp;amp; \widehat{Cov}\left( \widehat{\mu },\widehat{\lambda } \right)  \\&lt;br /&gt;
   \widehat{Cov}\left( \widehat{\sigma },\widehat{\mu } \right) &amp;amp; \widehat{Var}\left( \widehat{\sigma } \right) &amp;amp; \widehat{Cov}\left( \widehat{\sigma },\widehat{\lambda } \right)  \\&lt;br /&gt;
   \widehat{Cov}\left( \widehat{\lambda },\widehat{\mu } \right) &amp;amp; \widehat{Cov}\left( \widehat{\lambda },\widehat{\sigma } \right) &amp;amp; \widehat{Var}\left( \widehat{\lambda } \right)  \\&lt;br /&gt;
\end{matrix} \right) \\ &lt;br /&gt;
  = \left( \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\mu }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \lambda }  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \sigma } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \lambda \partial \sigma }  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial \lambda } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \lambda \partial \sigma } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\lambda }^{2}}}  \\&lt;br /&gt;
\end{matrix} \right)_{\mu =\widehat{\mu },\sigma =\widehat{\sigma },\lambda =\hat{\lambda }}^{-1}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Lambda \,\!&amp;lt;/math&amp;gt; is the log-likelihood function of the generalized gamma distribution.&lt;br /&gt;
&lt;br /&gt;
===Bounds on Reliability===&lt;br /&gt;
The upper and lower bounds on reliability are given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; \frac{{\hat{R}}}{\hat{R}+(1-\hat{R}){{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{R})}}{\hat{R}(1-\hat{R})}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; \frac{{\hat{R}}}{\hat{R}+(1-\hat{R}){{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{R})}}{\hat{R}(1-\hat{R})}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Var(\widehat{R})= &amp;amp; {{\left( \frac{\partial R}{\partial \mu } \right)}^{2}}Var(\widehat{\mu })+{{\left( \frac{\partial R}{\partial \sigma } \right)}^{2}}Var(\widehat{\sigma })+{{\left( \frac{\partial R}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda })\\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial R}{\partial \mu } \right)\left( \frac{\partial R}{\partial \sigma } \right)Cov(\widehat{\mu },\widehat{\sigma })+2\left( \frac{\partial R}{\partial \mu } \right)\left( \frac{\partial R}{\partial \lambda } \right)Cov(\widehat{\mu },\widehat{\lambda })\\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial R}{\partial \lambda } \right)\left( \frac{\partial R}{\partial \sigma } \right)Cov(\widehat{\lambda },\widehat{\sigma })  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
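The logit-style transformation in the bound equations guarantees that both limits stay inside (0, 1). A minimal sketch with hypothetical values for the reliability estimate and its variance:

```python
import numpy as np

def reliability_bounds(R_hat, var_R, K):
    """Fisher-matrix bounds on R via the logit-style transformation."""
    w = np.exp(K * np.sqrt(var_R) / (R_hat * (1.0 - R_hat)))
    R_L = R_hat / (R_hat + (1.0 - R_hat) * w)   # lower bound
    R_U = R_hat / (R_hat + (1.0 - R_hat) / w)   # upper bound
    return R_L, R_U

# Hypothetical inputs: R_hat = 0.9, Var(R_hat) = 0.01, K_alpha = 1.96
R_L, R_U = reliability_bounds(0.9, 0.01, 1.96)
```

Note that both limits collapse to R_hat as the variance goes to zero.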
&lt;br /&gt;
===Bounds on Time===&lt;br /&gt;
The bounds around time for a given percentile, or unreliability, are estimated by first solving the reliability equation with respect to time. Since &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is a positive variable, the transformed variable &amp;lt;math&amp;gt;\hat{u}=\ln (\widehat{T})\,\!&amp;lt;/math&amp;gt; is treated as normally distributed and the bounds are estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{u}_{U}}= &amp;amp; \ln {{T}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})} \\ &lt;br /&gt;
 &amp;amp; {{u}_{L}}= &amp;amp; \ln {{T}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for &amp;lt;math&amp;gt;{{T}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{T}_{L}}\,\!&amp;lt;/math&amp;gt; we get: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{{{u}_{U}}}}\text{ (upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{{{u}_{L}}}}\text{ (lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variance of &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; is estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
&amp;amp; Var(\widehat{u})= {{\left( \frac{\partial u}{\partial \mu } \right)}^{2}}Var(\widehat{\mu })+{{\left( \frac{\partial u}{\partial \sigma } \right)}^{2}}Var(\widehat{\sigma })+{{\left( \frac{\partial u}{\partial \lambda } \right)}^{2}}Var(\widehat{\lambda })\\ &lt;br /&gt;
&amp;amp;  +2\left( \frac{\partial u}{\partial \mu } \right)\left( \frac{\partial u}{\partial \sigma } \right)Cov(\widehat{\mu },\widehat{\sigma })+2\left( \frac{\partial u}{\partial \mu } \right)\left( \frac{\partial u}{\partial \lambda } \right)Cov(\widehat{\mu },\widehat{\lambda })\\ &lt;br /&gt;
&amp;amp;  +2\left( \frac{\partial u}{\partial \lambda } \right)\left( \frac{\partial u}{\partial \sigma } \right)Cov(\widehat{\lambda },\widehat{\sigma })  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
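The log transformation above keeps both time bounds positive and makes them symmetric on the log scale. A minimal sketch with hypothetical values for the time estimate and the variance of u:

```python
import numpy as np

def time_bounds(t_hat, var_u, K):
    """Fisher-matrix bounds on a time estimate via u = ln(T)."""
    u_hat = np.log(t_hat)
    half = K * np.sqrt(var_u)
    return np.exp(u_hat - half), np.exp(u_hat + half)

# Hypothetical inputs: t_hat = 100, Var(u_hat) = 0.04, K_alpha = 1.96
T_L, T_U = time_bounds(100.0, 0.04, 1.96)
```

A consequence of the log transform is that the bounds satisfy T_L * T_U = t_hat^2.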
&lt;br /&gt;
==Example==&lt;br /&gt;
{{:Generalized Gamma Distribution Example}}&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Template:Three-parameter_weibull_distribution&amp;diff=65093</id>
		<title>Template:Three-parameter weibull distribution</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Template:Three-parameter_weibull_distribution&amp;diff=65093"/>
		<updated>2017-07-06T17:50:05Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: Removed t&amp;gt;=0.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The 3-parameter Weibull &#039;&#039;pdf&#039;&#039; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(t)={ \frac{\beta }{\eta }}\left( {\frac{t-\gamma }{\eta }}\right) ^{\beta -1}e^{-\left( {\frac{t-\gamma }{\eta }}\right) ^{\beta }} \,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; f(t)\geq 0,\text{ }t\geq \gamma \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\beta&amp;gt;0\ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; \eta &amp;gt; 0 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; -\infty &amp;lt; \gamma &amp;lt; +\infty \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; \eta= \,\!&amp;lt;/math&amp;gt; scale parameter, or characteristic life &lt;br /&gt;
::&amp;lt;math&amp;gt; \beta= \,\!&amp;lt;/math&amp;gt; shape parameter (or slope)&lt;br /&gt;
::&amp;lt;math&amp;gt; \gamma= \,\!&amp;lt;/math&amp;gt; location parameter (or failure free life)&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=The_Lognormal_Distribution&amp;diff=65092</id>
		<title>The Lognormal Distribution</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=The_Lognormal_Distribution&amp;diff=65092"/>
		<updated>2017-06-30T17:05:18Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Lognormal Probability Density Function */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|10|The Lognormal Distribution}}&lt;br /&gt;
&lt;br /&gt;
The lognormal distribution is commonly used to model the lives of units whose failure modes are of a fatigue-stress nature. Since this includes most, if not all, mechanical systems, the lognormal distribution can have widespread application. Consequently, the lognormal distribution is a good companion to the Weibull distribution when attempting to model these types of units.&lt;br /&gt;
As may be surmised by the name, the lognormal distribution has certain similarities to the normal distribution. A random variable is lognormally distributed if the logarithm of the random variable is normally distributed. Because of this, there are many mathematical similarities between the two distributions.  For example, the mathematical reasoning for the construction of the probability plotting scales and the bias of parameter estimators is very similar for these two distributions.&lt;br /&gt;
&lt;br /&gt;
==Lognormal Probability Density Function==&lt;br /&gt;
The lognormal distribution is a 2-parameter distribution with parameters &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma&#039;\,\!&amp;lt;/math&amp;gt;. The &#039;&#039;pdf&#039;&#039; for this distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f({t}&#039;)=\frac{1}{{{\sigma&#039; }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{{t}^{\prime }}-{\mu }&#039;}{{{\sigma&#039; }}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{t}&#039;=\ln (t)\,\!&amp;lt;/math&amp;gt;. &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; values are the times-to-failure &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\mu&#039;\,\!&amp;lt;/math&amp;gt; = mean of the natural logarithms of the times-to-failure&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\sigma&#039;\,\!&amp;lt;/math&amp;gt; = standard deviation of the natural logarithms of the times-to-failure&lt;br /&gt;
&lt;br /&gt;
The lognormal &#039;&#039;pdf&#039;&#039; can be obtained, realizing that for equal probabilities under the normal and lognormal &#039;&#039;pdfs&#039;&#039;, incremental areas should also be equal, or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
f(t)dt=f({t}&#039;)d{t}&#039;&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Taking the derivative of the relationship between &amp;lt;math&amp;gt;{t}&#039;\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{t}\,\!&amp;lt;/math&amp;gt; yields: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;d{t}&#039;=\frac{dt}{t}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substitution yields: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   f(t)= &amp;amp; \frac{f({t}&#039;)}{t} \\ &lt;br /&gt;
  f(t)= &amp;amp; \frac{1}{t\cdot {{\sigma&#039; }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}(t)-{\mu }&#039;}{{{\sigma&#039; }}} \right)}^{2}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)\ge 0,t&amp;gt;0,-\infty &amp;lt;{\mu }&#039;&amp;lt;\infty ,{{\sigma&#039; }}&amp;gt;0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
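The derived pdf can be verified against a standard library implementation. A minimal sketch, assuming SciPy, which parameterizes the same density with s = sigma&#039; and scale = exp(mu&#039;):

```python
import numpy as np
from scipy.stats import lognorm

def lognormal_pdf(t, mu, sigma):
    """Lognormal pdf in terms of the log-mean mu' and log-std sigma'."""
    z = (np.log(t) - mu) / sigma
    return np.exp(-0.5 * z * z) / (t * sigma * np.sqrt(2.0 * np.pi))

# Compare against scipy.stats.lognorm at an arbitrary point
err = abs(lognormal_pdf(3.0, 1.0, 0.5) - lognorm.pdf(3.0, 0.5, scale=np.exp(1.0)))
```

The 1/t factor is exactly the dt&#039;/dt Jacobian from the substitution above.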
&lt;br /&gt;
==Lognormal Distribution Functions== &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM ANOTHER LOCATION IN THIS PAGE IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
{{:Lognormal Distribution Functions}}&lt;br /&gt;
&lt;br /&gt;
==Characteristics of the Lognormal Distribution ==&lt;br /&gt;
{{:Lognormal Distribution Characteristics}}&lt;br /&gt;
&lt;br /&gt;
==Estimation of the Parameters==&lt;br /&gt;
===Probability Plotting===&lt;br /&gt;
As described before, probability plotting involves plotting the failure times and associated unreliability estimates on specially constructed probability plotting paper. The form of this paper is based on a linearization of the &#039;&#039;cdf&#039;&#039; of the specific distribution. For the lognormal distribution, the cumulative distribution function can be written as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F({t}&#039;)=\Phi \left( \frac{{t}&#039;-{\mu }&#039;}{{{\sigma&#039;}}} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\Phi }^{-1}}\left[ F({t}&#039;) \right]=-\frac{{{\mu }&#039;}}{{{\sigma}&#039;}}+\frac{1}{{{\sigma }&#039;}}\cdot {t}&#039;\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y={{\Phi }^{-1}}\left[ F({t}&#039;) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;a=-\frac{{{\mu }&#039;}}{{{\sigma}&#039;}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=\frac{1}{{{\sigma}&#039;}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which results in the linear equation of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y=a+b{t}&#039;&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The normal probability paper resulting from this linearized &#039;&#039;cdf&#039;&#039; function is shown next.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS.10 lognormal probability plot.png|center|350px| ]] &lt;br /&gt;
 &lt;br /&gt;
The process for reading the parameter estimate values from the lognormal probability plot is very similar to the method employed for the normal distribution (see [[The Normal Distribution]]). However, since the lognormal distribution models the natural logarithms of the times-to-failure, the values of the parameter estimates must be read and calculated based on a logarithmic scale, as opposed to the linear time scale used with the normal distribution. This parameter scale appears at the top of the lognormal probability plot.&lt;br /&gt;
&lt;br /&gt;
The process of lognormal probability plotting is illustrated in the following example.&lt;br /&gt;
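The plotting positions themselves are easy to sketch in code. The snippet below is a minimal illustration that substitutes Benard&#039;s approximation, (i - 0.3)/(N + 0.4), for the exact median ranks used by rank tables:&lt;br /&gt;

```python
import math
from statistics import NormalDist

def plotting_points(times):
    """Return (t', y) pairs for lognormal probability plotting.

    t' = ln(t); y = inverse standard normal cdf of the median rank.
    Median ranks are approximated with Benard's formula (i - 0.3)/(N + 0.4),
    an approximation to the exact median ranks from rank tables.
    """
    n = len(times)
    points = []
    for i, t in enumerate(sorted(times), start=1):
        median_rank = (i - 0.3) / (n + 0.4)    # approximate median rank
        y = NormalDist().inv_cdf(median_rank)  # Phi^-1[F(t')]
        points.append((math.log(t), y))
    return points
```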
&lt;br /&gt;
====Plotting Example====&lt;br /&gt;
{{:Example: Lognormal Distribution Probability Plot}}&lt;br /&gt;
&lt;br /&gt;
===Rank Regression on Y=== &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM ANOTHER LOCATION IN THIS PAGE. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
Performing a rank regression on Y requires that a straight line be fitted to a set of data points such that the sum of the squares of the vertical deviations from the points to the line is minimized.&lt;br /&gt;
&lt;br /&gt;
The least squares parameter estimation method, or regression analysis, was discussed in [[Parameter Estimation]] and the following equations for regression on Y were derived, and are again applicable:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\bar{y}-\hat{b}\bar{x}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In our case the equations for &amp;lt;math&amp;gt;{{y}_{i}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x_{i}\,\!&amp;lt;/math&amp;gt; are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{y}_{i}}={{\Phi }^{-1}}\left[ F(t_{i}^{\prime }) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{x}_{i}}=t_{i}^{\prime }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;lt;math&amp;gt;F(t_{i}^{\prime })\,\!&amp;lt;/math&amp;gt; is estimated from the median ranks. Once &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; are obtained, then &amp;lt;math&amp;gt;\widehat{\sigma }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{\mu }\,\!&amp;lt;/math&amp;gt; can easily be obtained from the above equations.&lt;br /&gt;
&lt;br /&gt;
{{The Correlation Coefficient Calculation}}&lt;br /&gt;
&lt;br /&gt;
====RRY Example====&lt;br /&gt;
&#039;&#039;&#039;Lognormal Distribution RRY Example&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
14 units were reliability tested and the following life test data were obtained:&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Life Test Data&lt;br /&gt;
|- &lt;br /&gt;
!Data point index&lt;br /&gt;
!Time-to-failure&lt;br /&gt;
|- &lt;br /&gt;
|1 ||5&lt;br /&gt;
|- &lt;br /&gt;
|2 ||10&lt;br /&gt;
|- &lt;br /&gt;
|3 ||15&lt;br /&gt;
|- &lt;br /&gt;
|4 ||20&lt;br /&gt;
|- &lt;br /&gt;
|5 ||25&lt;br /&gt;
|- &lt;br /&gt;
|6 ||30&lt;br /&gt;
|- &lt;br /&gt;
|7 ||35&lt;br /&gt;
|- &lt;br /&gt;
|8 ||40&lt;br /&gt;
|- &lt;br /&gt;
|9 ||50&lt;br /&gt;
|- &lt;br /&gt;
|10 ||60&lt;br /&gt;
|- &lt;br /&gt;
|11 ||70&lt;br /&gt;
|- &lt;br /&gt;
|12 ||80&lt;br /&gt;
|- &lt;br /&gt;
|13 ||90&lt;br /&gt;
|- &lt;br /&gt;
|14 ||100&lt;br /&gt;
|} &lt;br /&gt;
&lt;br /&gt;
Assuming the data follow a lognormal distribution, estimate the parameters and the correlation coefficient, &amp;lt;math&amp;gt;\rho \,\!&amp;lt;/math&amp;gt;, using rank regression on Y.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Construct a table like the one shown next.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\overset{{}}{\mathop{\text{Least Squares Analysis}}}\,\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   N &amp;amp; t_{i} &amp;amp; F(t_{i}) &amp;amp; {t_{i}}&#039;&amp;amp; y_{i} &amp;amp; {{t_{i}}&#039;}^{2} &amp;amp; y_{i}^{2} &amp;amp; {t_{i}}&#039; y_{i}  \\&lt;br /&gt;
   \text{1} &amp;amp; \text{5} &amp;amp; \text{0}\text{.0483} &amp;amp; \text{1}\text{.6094}&amp;amp; \text{-1}\text{.6619} &amp;amp; \text{2}\text{.5903} &amp;amp; \text{2}\text{.7619} &amp;amp; \text{-2}\text{.6747}  \\&lt;br /&gt;
   \text{2} &amp;amp; \text{10} &amp;amp; \text{0}\text{.1170} &amp;amp; \text{2.3026}&amp;amp; \text{-1.1901} &amp;amp; \text{5.3019} &amp;amp; \text{1.4163} &amp;amp; \text{-2.7403}  \\&lt;br /&gt;
   \text{3} &amp;amp; \text{15} &amp;amp; \text{0}\text{.1865} &amp;amp; \text{2.7080}&amp;amp;\text{-0.8908} &amp;amp; \text{7.3335} &amp;amp; \text{0.7935} &amp;amp; \text{-2.4123}  \\&lt;br /&gt;
   \text{4} &amp;amp; \text{20} &amp;amp; \text{0}\text{.2561} &amp;amp; \text{2.9957} &amp;amp;\text{-0.6552} &amp;amp; \text{8.9744} &amp;amp; \text{0.4292} &amp;amp; \text{-1.9627}  \\&lt;br /&gt;
   \text{5} &amp;amp; \text{25} &amp;amp; \text{0}\text{.3258} &amp;amp; \text{3.2189}&amp;amp; \text{-0.4512} &amp;amp; \text{10.3612} &amp;amp; \text{0.2036} &amp;amp; \text{-1.4524}  \\&lt;br /&gt;
   \text{6} &amp;amp; \text{30} &amp;amp; \text{0}\text{.3954} &amp;amp; \text{3.4012}&amp;amp; \text{-0.2647} &amp;amp; \text{11.5681} &amp;amp; \text{0.0701} &amp;amp; \text{-0.9004}  \\&lt;br /&gt;
   \text{7} &amp;amp; \text{35} &amp;amp; \text{0}\text{.4651} &amp;amp; \text{3.5553} &amp;amp; \text{-0.0873} &amp;amp; \text{12.6405} &amp;amp; \text{0.0076}&amp;amp; \text{-0.3102}  \\&lt;br /&gt;
   \text{8} &amp;amp; \text{40} &amp;amp; \text{0}\text{.5349} &amp;amp; \text{3.6889}&amp;amp; \text{0.0873} &amp;amp; \text{13.6078} &amp;amp; \text{0.0076} &amp;amp; \text{0.3219}  \\&lt;br /&gt;
   \text{9} &amp;amp; \text{50} &amp;amp; \text{0}\text{.6046} &amp;amp; \text{3.9120} &amp;amp; \text{0.2647} &amp;amp; \text{15.3039} &amp;amp; \text{0.0701} &amp;amp;\text{1.0357}  \\&lt;br /&gt;
   \text{10} &amp;amp; \text{60} &amp;amp; \text{0}\text{.6742} &amp;amp; \text{4.0943} &amp;amp; \text{0.4512} &amp;amp; \text{16.7637} &amp;amp; \text{0.2036}&amp;amp;\text{1.8474}  \\&lt;br /&gt;
   \text{11} &amp;amp; \text{70} &amp;amp; \text{0}\text{.7439} &amp;amp; \text{4.2485} &amp;amp; \text{0.6552} &amp;amp; \text{18.0497}&amp;amp; \text{0.4292} &amp;amp; \text{2.7834} \\&lt;br /&gt;
   \text{12} &amp;amp; \text{80} &amp;amp; \text{0}\text{.8135} &amp;amp; \text{4.3820} &amp;amp; \text{0.8908} &amp;amp; \text{19.2022} &amp;amp; \text{0.7935} &amp;amp; \text{3.9035}  \\&lt;br /&gt;
   \text{13} &amp;amp; \text{90} &amp;amp; \text{0}\text{.8830} &amp;amp; \text{4.4998} &amp;amp; \text{1.1901} &amp;amp; \text{20.2483}&amp;amp;\text{1.4163} &amp;amp; \text{5.3552}  \\&lt;br /&gt;
    \text{14} &amp;amp; \text{100}&amp;amp; \text{0}\text{.9517} &amp;amp; \text{4.6052} &amp;amp; \text{1.6619} &amp;amp; \text{21.2076} &amp;amp;\text{2.7619} &amp;amp; \text{7.6533}  \\&lt;br /&gt;
   \sum_{}^{} &amp;amp; \text{ } &amp;amp; \text{ } &amp;amp; \text{49.222} &amp;amp; \text{0} &amp;amp; \text{183.1531} &amp;amp; \text{11.3646} &amp;amp; \text{10.4473}  \\&lt;br /&gt;
&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The median rank values ( &amp;lt;math&amp;gt;F({{t}_{i}})\,\!&amp;lt;/math&amp;gt; ) can be found in rank tables or by using the Quick Statistical Reference in Weibull++.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;{{y}_{i}}\,\!&amp;lt;/math&amp;gt; values were obtained from the standardized normal distribution&#039;s area tables by entering the median rank value for &amp;lt;math&amp;gt;F(z)\,\!&amp;lt;/math&amp;gt; and reading the corresponding &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; value ( &amp;lt;math&amp;gt;{{y}_{i}}\,\!&amp;lt;/math&amp;gt; ).&lt;br /&gt;
&lt;br /&gt;
Given the values in the table above, calculate &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \widehat{b}= &amp;amp; \frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }{{y}_{i}}-(\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime })(\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}})/14}{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime 2}-{{(\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime })}^{2}}/14} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \widehat{b}= &amp;amp; \frac{10.4473-(49.2220)(0)/14}{183.1530-{{(49.2220)}^{2}}/14}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=1.0349\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\widehat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,t_{i}^{\prime }}{N}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{a}=\frac{0}{14}-(1.0349)\frac{49.2220}{14}=-3.6386\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{\sigma&#039;}=\frac{1}{\widehat{b}}=\frac{1}{1.0349}=0.9663\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{\mu }&#039;=-\widehat{a}\cdot {\sigma&#039;}=-(-3.6386)\cdot 0.9663\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{\mu }&#039;=3.516 &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The mean and the standard deviation of the lognormal distribution are obtained using equations in the [[The_Lognormal_Distribution#Lognormal_Distribution_Functions|Lognormal Distribution Functions]] section above: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=\mu ={{e}^{3.516+\tfrac{1}{2}{{0.9663}^{2}}}}=53.6707\text{ hours}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{\sigma}=\sqrt{({{e}^{2\cdot 3.516+{{0.9663}^{2}}}})({{e}^{{{0.9663}^{2}}}}-1)}=66.69\text{ hours}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The correlation coefficient can be estimated as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{\rho }=0.9754\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
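The hand calculation above can be verified with a short script; the sketch below re-derives the parameters from the table&#039;s &amp;lt;math&amp;gt;{{y}_{i}}\,\!&amp;lt;/math&amp;gt; column and the logarithms of the failure times:&lt;br /&gt;

```python
import math

# (t_i', y_i) pairs: t_i' = ln(t_i); the y_i column is copied from the table above
times = [5, 10, 15, 20, 25, 30, 35, 40, 50, 60, 70, 80, 90, 100]
y = [-1.6619, -1.1901, -0.8908, -0.6552, -0.4512, -0.2647, -0.0873,
      0.0873,  0.2647,  0.4512,  0.6552,  0.8908,  1.1901,  1.6619]
x = [math.log(t) for t in times]
n = len(x)

sx, sy = sum(x), sum(y)
sxy = sum(xi * yi for xi, yi in zip(x, y))
sxx = sum(xi ** 2 for xi in x)

# least squares regression on Y
b_hat = (sxy - sx * sy / n) / (sxx - sx ** 2 / n)  # slope
a_hat = sy / n - b_hat * sx / n                    # intercept

sigma_prime = 1 / b_hat           # sigma' = 1/b
mu_prime = -a_hat * sigma_prime   # mu' = -a * sigma'
```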
&lt;br /&gt;
The above example can be repeated in Weibull++ using the rank regression on Y (RRY) analysis method.&lt;br /&gt;
&lt;br /&gt;
[[Image:Lognormal Distribution Example 2 Data and Result.png|center|650px| ]] &lt;br /&gt;
&lt;br /&gt;
The mean can be obtained from the QCP and both the mean and the standard deviation can be obtained from the Function Wizard.&lt;br /&gt;
&lt;br /&gt;
===Rank Regression on X===&lt;br /&gt;
Performing a rank regression on X requires that a straight line be fitted to a set of data points such that the sum of the squares of the horizontal deviations from the points to the line is minimized.&lt;br /&gt;
&lt;br /&gt;
Again, the first task is to bring our &#039;&#039;cdf&#039;&#039; function into a linear form. This step is exactly the same as in the regression on Y analysis, and all of the equations apply in this case as well. The deviation from the previous analysis begins with the least squares fit step, where in this case we treat &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; as the dependent variable and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; as the independent variable. The best-fitting straight line to the data, for regression on X (see [[Parameter Estimation]]), is the straight line:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding equations for &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt;  and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{y}_{i}}={{\Phi }^{-1}}\left[ F(t_{i}^{\prime }) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{x}_{i}}=t_{i}^{\prime }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the &amp;lt;math&amp;gt;F(t_{i}^{\prime })\,\!&amp;lt;/math&amp;gt; is estimated from the median ranks. Once &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; are obtained, solve the linear equation for the unknown &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt;, which corresponds to: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=-\frac{\widehat{a}}{\widehat{b}}+\frac{1}{\widehat{b}}x\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for the parameters we get: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;a=-\frac{\widehat{a}}{\widehat{b}}=-\frac{{{\mu }&#039;}}{\sigma&#039;}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=\frac{1}{\widehat{b}}=\frac{1}{\sigma&#039;}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The correlation coefficient is evaluated as before, using the equation in the [[The_Lognormal_Distribution#Rank_Regression_on_Y|previous section]].&lt;br /&gt;
&lt;br /&gt;
====RRX Example====&lt;br /&gt;
&#039;&#039;&#039;Lognormal Distribution RRX Example&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Using the same data set from the [[The_Lognormal_Distribution#RRY_Example|RRY example]] given above, and assuming a lognormal distribution, estimate the parameters and estimate the correlation coefficient, &amp;lt;math&amp;gt;\rho \,\!&amp;lt;/math&amp;gt;, using rank regression on X.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The table constructed for the RRY example also applies to this example. Using the values in this table we get: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \hat{b}= &amp;amp; \frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }{{y}_{i}}-\tfrac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}}{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{14}} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \widehat{b}= &amp;amp; \frac{10.4473-(49.2220)(0)/14}{11.3646-{{(0)}^{2}}/14}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{b}=0.9193\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }}{14}-\widehat{b}\frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{a}=\frac{49.2220}{14}-(0.9193)\frac{(0)}{14}=3.5159\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{\sigma&#039;}=\widehat{b}=0.9193\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{\mu }&#039;=\frac{\widehat{a}}{\widehat{b}}{\sigma&#039;}=\frac{3.5159}{0.9193}\cdot 0.9193=3.5159\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using the equations for the mean and standard deviation in the [[The_Lognormal_Distribution#Lognormal_Distribution_Functions|Lognormal Distribution Functions]] section above, we get: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=\mu =51.3393\text{ hours}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{\sigma}=59.1682\text{ hours}.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The correlation coefficient is found using the equation in the [[The Correlation Coefficient Calculation|previous section]]: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{\rho }=0.9754.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the regression on Y analysis is not necessarily the same as the regression on X. The only time when the results of the two regression types are the same (i.e., will yield the same equation for a line) is when the data lie perfectly on a line.&lt;br /&gt;
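The RRX solution above can be checked the same way as the RRY solution; only the regression direction changes (the sum of &amp;lt;math&amp;gt;y_{i}^{2}\,\!&amp;lt;/math&amp;gt; replaces the sum of &amp;lt;math&amp;gt;x_{i}^{2}\,\!&amp;lt;/math&amp;gt; in the denominator of the slope):&lt;br /&gt;

```python
import math

# same (t_i', y_i) pairs as in the RRY example
times = [5, 10, 15, 20, 25, 30, 35, 40, 50, 60, 70, 80, 90, 100]
y = [-1.6619, -1.1901, -0.8908, -0.6552, -0.4512, -0.2647, -0.0873,
      0.0873,  0.2647,  0.4512,  0.6552,  0.8908,  1.1901,  1.6619]
x = [math.log(t) for t in times]
n = len(x)

sx, sy = sum(x), sum(y)
sxy = sum(xi * yi for xi, yi in zip(x, y))
syy = sum(yi ** 2 for yi in y)

# regression on X: treat x as the dependent variable
b_hat = (sxy - sx * sy / n) / (syy - sy ** 2 / n)  # slope
a_hat = sx / n - b_hat * sy / n                    # intercept

sigma_prime = b_hat   # sigma' = b
mu_prime = a_hat      # mu' = (a/b) * sigma' = a
```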
&lt;br /&gt;
Using Weibull++ with the Rank Regression on X option, the results are:&lt;br /&gt;
&lt;br /&gt;
[[Image:Lognormal Distribution Example 3 Data and Result.png|center|650px| ]]&lt;br /&gt;
&lt;br /&gt;
===Maximum Likelihood Estimation===&lt;br /&gt;
As outlined in [[Parameter Estimation]], maximum likelihood estimation works by developing a likelihood function based on the available data and finding the values of the parameter estimates that maximize it. This can be achieved by using iterative methods to determine the parameter estimate values that maximize the likelihood function. However, this can be rather difficult and time-consuming, particularly when dealing with the three-parameter distribution. Another method of finding the parameter estimates involves taking the partial derivatives of the likelihood equation with respect to the parameters, setting the resulting equations equal to zero, and solving simultaneously to determine the values of the parameter estimates. The log-likelihood functions and associated partial derivatives used to determine maximum likelihood estimates for the lognormal distribution are covered in [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note About Bias&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
See the discussion regarding bias with the [[The Normal Distribution|normal distribution]] for information regarding parameter bias in the lognormal distribution.&lt;br /&gt;
&lt;br /&gt;
====MLE Example====&lt;br /&gt;
&#039;&#039;&#039;Lognormal Distribution MLE Example&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Using the same data set from the [[The_Lognormal_Distribution#RRY_Example|RRY and RRX examples]] given above and assuming a lognormal distribution, estimate the parameters using the MLE method.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
In this example we have only complete data. Thus, the partials reduce to: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial {\mu }&#039;}= &amp;amp; \frac{1}{\sigma&#039;^{2}}\cdot \underset{i=1}{\overset{14}{\mathop \sum }}\,\left( \ln ({{t}_{i}})-{\mu }&#039; \right)=0 \\ &lt;br /&gt;
 &amp;amp; \frac{\partial \Lambda }{\partial {{\sigma&#039;}}}= &amp;amp; \underset{i=1}{\overset{14}{\mathop \sum }}\,\left( \frac{{{\left( \ln ({{t}_{i}})-{\mu }&#039; \right)}^{2}}}{\sigma&#039;^{3}}-\frac{1}{{{\sigma&#039;}}} \right)=0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting the values of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; and solving the above system simultaneously, we get: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{{\hat{\sigma&#039; }}}}= &amp;amp; 0.849 \\ &lt;br /&gt;
 &amp;amp; {{{\hat{\mu }}}^{\prime }}= &amp;amp; 3.516  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
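For complete data these two partials have a closed-form solution: &amp;lt;math&amp;gt;{{{\hat{\mu }}}^{\prime }}\,\!&amp;lt;/math&amp;gt; is the sample mean of the log-times and &amp;lt;math&amp;gt;{{{\hat{\sigma&#039; }}}}\,\!&amp;lt;/math&amp;gt; is the maximum likelihood (divide-by-N) standard deviation of the log-times, which the following sketch confirms numerically:&lt;br /&gt;

```python
import math

times = [5, 10, 15, 20, 25, 30, 35, 40, 50, 60, 70, 80, 90, 100]
log_t = [math.log(t) for t in times]
n = len(log_t)

mu_prime = sum(log_t) / n  # sample mean of ln(t_i)
sigma_prime = math.sqrt(
    sum((lt - mu_prime) ** 2 for lt in log_t) / n)  # MLE divides by N, not N-1
```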
&lt;br /&gt;
Using the equation for mean and standard deviation in the [[The_Lognormal_Distribution#Lognormal_Distribution_Functions|Lognormal Distribution Functions]] section above, we get: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=\hat{\mu }=48.25\text{ hours}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\hat{\sigma }}}=49.61\text{ hours}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variance/covariance matrix is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   \widehat{Var}\left( {{{\hat{\mu }}}^{\prime }} \right)=0.0515 &amp;amp; {} &amp;amp; \widehat{Cov}\left( {{{\hat{\mu }}}^{\prime }},{{{\hat{\sigma&#039;}}}} \right)=0.0000  \\&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   \widehat{Cov}\left( {{{\hat{\mu }}}^{\prime }},{{{\hat{\sigma&#039; }}}} \right)=0.0000 &amp;amp; {} &amp;amp; \widehat{Var}\left( {{{\hat{\sigma&#039; }}}} \right)=0.0258  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Confidence Bounds==&lt;br /&gt;
The method used by the application in estimating the different types of confidence bounds for lognormally distributed data is presented in this section. Note that there are closed-form solutions for both the normal and lognormal reliability that can be obtained without the use of the Fisher information matrix. However, these closed-form solutions only apply to complete data. To achieve consistent application across all possible data types, Weibull++ always uses the Fisher matrix in computing confidence intervals. The complete derivations were presented in detail for a general function in [[Confidence Bounds]]. For a discussion on exact confidence bounds for the normal and lognormal, see [[The Normal Distribution]].&lt;br /&gt;
&lt;br /&gt;
===Fisher Matrix Bounds===&lt;br /&gt;
====Bounds on the Parameters====&lt;br /&gt;
The lower and upper bounds on the mean, &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt;, are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \mu _{U}^{\prime }= &amp;amp; {{\widehat{\mu }}^{\prime }}+{{K}_{\alpha }}\sqrt{Var({{\widehat{\mu }}^{\prime }})}\text{ (upper bound),} \\ &lt;br /&gt;
 &amp;amp; \mu _{L}^{\prime }= &amp;amp; {{\widehat{\mu }}^{\prime }}-{{K}_{\alpha }}\sqrt{Var({{\widehat{\mu }}^{\prime }})}\text{ (lower bound)}\text{.}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the standard deviation, &amp;lt;math&amp;gt;{\widehat{\sigma}&#039;}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\ln ({{\widehat{\sigma&#039;}}})\,\!&amp;lt;/math&amp;gt; is treated as normally distributed, and the bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\sigma&#039;}_{U}}= &amp;amp; {{\widehat{\sigma&#039;}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma&#039;}}})}}{{{\widehat{\sigma&#039;}}}}}}\text{ (upper bound),} \\ &lt;br /&gt;
 &amp;amp; {{\sigma&#039;}_{L}}= &amp;amp; \frac{{{\widehat{\sigma&#039;}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma&#039; }}})}}{{{\widehat{\sigma&#039;}}}}}}}\text{ (lower bound)}\text{.}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is defined by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt; is the confidence level, then &amp;lt;math&amp;gt;\alpha =\tfrac{1-\delta }{2}\,\!&amp;lt;/math&amp;gt; for the two-sided bounds and &amp;lt;math&amp;gt;\alpha =1-\delta \,\!&amp;lt;/math&amp;gt; for the one-sided bounds.&lt;br /&gt;
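&amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is simply the standard normal quantile &amp;lt;math&amp;gt;{{\Phi }^{-1}}(1-\alpha )\,\!&amp;lt;/math&amp;gt;, so it can be computed without tables; a minimal sketch (the helper name is illustrative):&lt;br /&gt;

```python
from statistics import NormalDist

def k_alpha(confidence, two_sided=True):
    """K_alpha = Phi^-1(1 - alpha), with alpha = (1 - delta)/2 for two-sided
    bounds and alpha = 1 - delta for one-sided bounds (delta = confidence)."""
    alpha = (1 - confidence) / 2 if two_sided else 1 - confidence
    return NormalDist().inv_cdf(1 - alpha)
```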
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;{{\widehat{\mu }}^{\prime }}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\widehat{\sigma&#039;}}}\,\!&amp;lt;/math&amp;gt; are estimated as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left( \begin{matrix}&lt;br /&gt;
   \widehat{Var}\left( {{\widehat{\mu }}^{\prime }} \right) &amp;amp; \widehat{Cov}\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma&#039;}}} \right)  \\&lt;br /&gt;
   \widehat{Cov}\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma&#039;}}} \right) &amp;amp; \widehat{Var}\left( {{\widehat{\sigma&#039;}}} \right)  \\&lt;br /&gt;
\end{matrix} \right)=\left( \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{({\mu }&#039;)}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {\mu }&#039;\partial {{\sigma&#039;}}}  \\&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {\mu }&#039;\partial {{\sigma&#039;}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma&#039;^{2}}  \\&lt;br /&gt;
\end{matrix} \right)_{{\mu }&#039;={{\widehat{\mu }}^{\prime }},{{\sigma&#039;}}={{\widehat{\sigma&#039;}}}}^{-1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Lambda \,\!&amp;lt;/math&amp;gt; is the log-likelihood function of the lognormal distribution.&lt;br /&gt;
&lt;br /&gt;
====Bounds on Time (Type 1)====&lt;br /&gt;
The bounds around time for a given lognormal percentile, or unreliability, are estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{t}&#039;({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma&#039; }}})={{\widehat{\mu }}^{\prime }}+z\cdot {{\widehat{\sigma&#039; }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z={{\Phi }^{-1}}\left[ F({t}&#039;) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({t}&#039;)}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to calculate the variance of &amp;lt;math&amp;gt;{t}&#039;({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma&#039; }}}):\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var({{{\hat{t}}}^{\prime }})= &amp;amp; {{\left( \frac{\partial {t}&#039;}{\partial {\mu }&#039;} \right)}^{2}}Var({{\widehat{\mu }}^{\prime }})+{{\left( \frac{\partial {t}&#039;}{\partial {{\sigma&#039; }}} \right)}^{2}}Var({{\widehat{\sigma&#039; }}}) \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; +2\left( \frac{\partial {t}&#039;}{\partial {\mu }&#039;} \right)\left( \frac{\partial {t}&#039;}{\partial {{\sigma&#039; }}} \right)Cov\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma&#039; }}} \right) \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; Var({{{\hat{t}}}^{\prime }})= &amp;amp; Var({{\widehat{\mu }}^{\prime }})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma&#039; }}})+2\cdot \widehat{z}\cdot Cov\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma&#039; }}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds are then found by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; t_{U}^{\prime }= &amp;amp; \ln {{t}_{U}}={{{\hat{t}}}^{\prime }}+{{K}_{\alpha }}\sqrt{Var({{{\hat{t}}}^{\prime }})} \\ &lt;br /&gt;
 &amp;amp; t_{L}^{\prime }= &amp;amp; \ln {{t}_{L}}={{{\hat{t}}}^{\prime }}-{{K}_{\alpha }}\sqrt{Var({{{\hat{t}}}^{\prime }})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for &amp;lt;math&amp;gt;{{t}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{t}_{L}}\,\!&amp;lt;/math&amp;gt; we get: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{t}_{U}}= &amp;amp; {{e}^{t_{U}^{\prime }}}\text{ (upper bound),} \\ &lt;br /&gt;
 &amp;amp; {{t}_{L}}= &amp;amp; {{e}^{t_{L}^{\prime }}}\text{ (lower bound)}\text{.}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
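As an illustration, the sketch below computes 90% two-sided bounds on the time at 10% unreliability, reusing the parameter estimates and variance/covariance values from the MLE example above:&lt;br /&gt;

```python
import math
from statistics import NormalDist

# values carried over from the MLE example above (complete data)
mu_prime, sigma_prime = 3.516, 0.849
var_mu, var_sigma, cov = 0.0515, 0.0258, 0.0000  # variance/covariance matrix
K = NormalDist().inv_cdf(0.95)                   # K_alpha for two-sided 90% bounds

F = 0.10                             # target unreliability
z = NormalDist().inv_cdf(F)          # z = Phi^-1[F(t')]
t_prime = mu_prime + z * sigma_prime # point estimate in log-time
var_t = var_mu + z ** 2 * var_sigma + 2 * z * cov

t_upper = math.exp(t_prime + K * math.sqrt(var_t))  # upper bound on time
t_lower = math.exp(t_prime - K * math.sqrt(var_t))  # lower bound on time
```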
&lt;br /&gt;
====Bounds on Reliability (Type 2)====&lt;br /&gt;
The reliability of the lognormal distribution is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{R}(t;{{\hat{\mu }}^{&#039;}},{{\hat{\sigma }}^{&#039;}})=\int_{t&#039;}^{\infty }{\frac{1}{{{{\hat{\sigma }}}^{&#039;}}\sqrt{2\pi }}}{{e}^{-\frac{1}{2}{{\left( \frac{x-{{{\hat{\mu }}}^{&#039;}}}{{{{\hat{\sigma }}}^{&#039;}}} \right)}^{2}}}}dx\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;t&#039;=\ln (t)\,\!&amp;lt;/math&amp;gt;. Let &amp;lt;math&amp;gt;\hat{z}(x)=\frac{x-{{{\hat{\mu }}}^{&#039;}}}{{{{\hat{\sigma }}}^{&#039;}}}\,\!&amp;lt;/math&amp;gt;, the above equation then becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{R}\left( \hat{z}(t&#039;) \right)=\int_{\hat{z}(t&#039;)}^{\infty }{\frac{1}{\sqrt{2\pi }}}{{e}^{-\frac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The bounds on &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; are estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{z}_{U}}= &amp;amp; \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ &lt;br /&gt;
 &amp;amp; {{z}_{L}}= &amp;amp; \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\hat{z})=\left( \frac{\partial {z}}{\partial \mu &#039;} \right)_{\hat{\mu }&#039;}^{2}Var\left( \hat{\mu }&#039; \right)+\left( \frac{\partial {z}}{\partial \sigma &#039;} \right)_{\hat{\sigma }&#039;}^{2}Var\left( \hat{\sigma }&#039; \right) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial{z}}{\partial \mu &#039;} \right)_{\hat{\mu }&#039;}^{{}}\left( \frac{\partial {z}}{\partial \sigma &#039;} \right)_{\hat{\sigma }&#039;}^{{}}Cov\left( \hat{\mu }&#039;,\hat{\sigma }&#039; \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Var(\hat{z})=\frac{1}{{{{\hat{\sigma }}}^{&#039;2}}}\left[ Var\left( \hat{\mu }&#039; \right)+{{{\hat{z}}}^{2}}Var\left( \hat{\sigma }&#039; \right)+2\cdot \hat{z}\cdot Cov\left( \hat{\mu }&#039;,\hat{\sigma }&#039; \right) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
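These Type 2 (Fisher-matrix) reliability bounds can be sketched numerically. The following is a minimal sketch using only the Python standard library; the variance and covariance values used in the usage example are hypothetical stand-ins for what would, in practice, come from the inverse of the Fisher information matrix.

```python
import math

def std_normal_sf(z):
    """Standard normal survival function, 1 - Phi(z), via erfc."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def lognormal_reliability_bounds(t, mu, sigma, var_mu, var_sigma, cov, k_alpha):
    """Two-sided reliability bounds at time t for a lognormal distribution.
    mu and sigma are the log-scale parameter estimates; var_mu, var_sigma and
    cov are (hypothetical) Fisher-matrix variance/covariance estimates;
    k_alpha is the standard normal quantile for the chosen confidence level."""
    z_hat = (math.log(t) - mu) / sigma
    var_z = (var_mu + z_hat ** 2 * var_sigma + 2.0 * z_hat * cov) / sigma ** 2
    z_l = z_hat - k_alpha * math.sqrt(var_z)
    z_u = z_hat + k_alpha * math.sqrt(var_z)
    # R is decreasing in z, so the upper bound on R uses z_L and vice versa
    return std_normal_sf(z_u), std_normal_sf(z_l)   # (R_L, R_U)
```

For example, with the hypothetical values var_mu = 0.01, var_sigma = 0.005 and cov = 0.001, the returned interval brackets the point estimate R̂(t) = 1 − Φ(ẑ).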
&lt;br /&gt;
===Likelihood Ratio Confidence Bounds===&lt;br /&gt;
====Bounds on Parameters====&lt;br /&gt;
As covered in [[Parameter Estimation]], the likelihood confidence bounds are calculated by finding values for &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\theta }_{2}}\,\!&amp;lt;/math&amp;gt; that satisfy:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;-2\cdot \text{ln}\left( \frac{L({{\theta }_{1}},{{\theta }_{2}})}{L({{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}})} \right)=\chi _{\alpha ;1}^{2}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This equation can be rewritten as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}})=L({{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}})\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For complete data, the likelihood formula for the normal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({\mu }&#039;,{{\sigma&#039; }})=\underset{i=1}{\overset{N}{\mathop \prod }}\,f({{x}_{i}};{\mu }&#039;,{{\sigma&#039; }})=\underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot {{\sigma&#039; }}\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-{\mu }&#039;}{{{\sigma&#039;}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;lt;math&amp;gt;{{x}_{i}}\,\!&amp;lt;/math&amp;gt; values represent the original time-to-failure data.  For a given value of &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;, values for &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma&#039; }}\,\!&amp;lt;/math&amp;gt; can be found which represent the maximum and minimum values that satisfy the likelihood ratio equation. These represent the confidence bounds for the parameters at a confidence level &amp;lt;math&amp;gt;\delta ,\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\alpha =\delta \,\!&amp;lt;/math&amp;gt; for two-sided bounds and &amp;lt;math&amp;gt;\alpha =2\delta -1\,\!&amp;lt;/math&amp;gt; for one-sided bounds.&lt;br /&gt;
=====Example: LR Bounds on Parameters=====&lt;br /&gt;
&#039;&#039;&#039;Lognormal Distribution Likelihood Ratio Bound Example (Parameters)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Five units are put on a reliability test and experience failures at 45, 60, 75, 90, and 115 hours. Assuming a lognormal distribution, the MLE parameter estimates are calculated to be &amp;lt;math&amp;gt;{{\widehat{\mu }}^{\prime }}=4.2926\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\widehat{\sigma&#039;}}}=0.32361.\,\!&amp;lt;/math&amp;gt; Calculate the two-sided 75% confidence bounds on these parameters using the likelihood ratio method.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The first step is to calculate the likelihood function for the parameter estimates: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma&#039; }}})= &amp;amp; \underset{i=1}{\overset{N}{\mathop \prod }}\,f({{x}_{i}};{{\widehat{\mu }}^{\prime }},{{\widehat{\sigma&#039; }}}), \\ &lt;br /&gt;
  = &amp;amp; \underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot {{\widehat{\sigma&#039; }}}\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-{{\widehat{\mu }}^{\prime }}}{{{\widehat{\sigma&#039; }}}} \right)}^{2}}}} \\ &lt;br /&gt;
  L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma&#039;}}})= &amp;amp; \underset{i=1}{\overset{5}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot 0.32361\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-4.2926}{0.32361} \right)}^{2}}}} \\ &lt;br /&gt;
  L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma&#039;}}})= &amp;amp; 1.115256\times {{10}^{-10}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{x}_{i}}\,\!&amp;lt;/math&amp;gt; are the original time-to-failure data points. We can now rearrange the likelihood ratio equation to the form: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({\mu }&#039;,{{\sigma&#039; }})-L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma&#039; }}})\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since our specified confidence level, &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt;, is 75%, we can calculate the value of the chi-squared statistic, &amp;lt;math&amp;gt;\chi _{0.75;1}^{2}=1.323303.\,\!&amp;lt;/math&amp;gt; We can now substitute this information into the equation: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; L({\mu }&#039;,{{\sigma&#039; }})-L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma&#039; }}})\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; L({\mu }&#039;,{{\sigma&#039;}})-1.115256\times {{10}^{-10}}\cdot {{e}^{\tfrac{-1.323303}{2}}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; L({\mu }&#039;,{{\sigma&#039;}})-5.754703\times {{10}^{-11}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It now remains to find the values of &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma&#039;}}\,\!&amp;lt;/math&amp;gt; which satisfy this equation. This is an iterative process that requires setting the value of &amp;lt;math&amp;gt;{{\sigma&#039;}}\,\!&amp;lt;/math&amp;gt; and finding the appropriate values of &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt;, and vice versa.&lt;br /&gt;
&lt;br /&gt;
The following table gives the values of &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt; based on given values of &amp;lt;math&amp;gt;{{\sigma&#039;}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   {{\sigma&#039; }} &amp;amp; \mu _{1}^{\prime } &amp;amp; \mu _{2}^{\prime } &amp;amp; {{\sigma&#039; }} &amp;amp; \mu _{1}^{\prime } &amp;amp; \mu _{2}^{\prime }  \\&lt;br /&gt;
   0.24 &amp;amp; 4.2421 &amp;amp; 4.3432 &amp;amp; 0.37 &amp;amp; 4.1145 &amp;amp; 4.4708  \\&lt;br /&gt;
   0.25 &amp;amp; 4.2115 &amp;amp; 4.3738 &amp;amp; 0.38 &amp;amp; 4.1152 &amp;amp; 4.4701  \\&lt;br /&gt;
   0.26 &amp;amp; 4.1909 &amp;amp; 4.3944 &amp;amp; 0.39 &amp;amp; 4.1170 &amp;amp; 4.4683  \\&lt;br /&gt;
   0.27 &amp;amp; 4.1748 &amp;amp; 4.4105 &amp;amp; 0.40 &amp;amp; 4.1200 &amp;amp; 4.4653  \\&lt;br /&gt;
   0.28 &amp;amp; 4.1618 &amp;amp; 4.4235 &amp;amp; 0.41 &amp;amp; 4.1244 &amp;amp; 4.4609  \\&lt;br /&gt;
   0.29 &amp;amp; 4.1509 &amp;amp; 4.4344 &amp;amp; 0.42 &amp;amp; 4.1302 &amp;amp; 4.4551  \\&lt;br /&gt;
   0.30 &amp;amp; 4.1419 &amp;amp; 4.4434 &amp;amp; 0.43 &amp;amp; 4.1377 &amp;amp; 4.4476  \\&lt;br /&gt;
   0.31 &amp;amp; 4.1343 &amp;amp; 4.4510 &amp;amp; 0.44 &amp;amp; 4.1472 &amp;amp; 4.4381  \\&lt;br /&gt;
   0.32 &amp;amp; 4.1281 &amp;amp; 4.4572 &amp;amp; 0.45 &amp;amp; 4.1591 &amp;amp; 4.4262  \\&lt;br /&gt;
   0.33 &amp;amp; 4.1231 &amp;amp; 4.4622 &amp;amp; 0.46 &amp;amp; 4.1742 &amp;amp; 4.4111  \\&lt;br /&gt;
   0.34 &amp;amp; 4.1193 &amp;amp; 4.4660 &amp;amp; 0.47 &amp;amp; 4.1939 &amp;amp; 4.3914  \\&lt;br /&gt;
   0.35 &amp;amp; 4.1166 &amp;amp; 4.4687 &amp;amp; 0.48 &amp;amp; 4.2221 &amp;amp; 4.3632  \\&lt;br /&gt;
   0.36 &amp;amp; 4.1150 &amp;amp; 4.4703 &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These points are represented graphically in the following contour plot:&lt;br /&gt;
&lt;br /&gt;
[[Image:WB.10 lognormal contour plot.png|center|450px| ]] &lt;br /&gt;
&lt;br /&gt;
(Note that this plot is generated with degrees of freedom &amp;lt;math&amp;gt;k=1\,\!&amp;lt;/math&amp;gt;, as we are only determining bounds on one parameter. The contour plots generated in Weibull++ are done with degrees of freedom &amp;lt;math&amp;gt;k=2\,\!&amp;lt;/math&amp;gt;, for use in comparing both parameters simultaneously.) As can be determined from the table, the lowest calculated value for &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt; is 4.1145, while the highest is 4.4708. These represent the two-sided 75% confidence limits on this parameter. Since solutions for the equation do not exist for values of &amp;lt;math&amp;gt;{{\sigma&#039;}}\,\!&amp;lt;/math&amp;gt; below 0.24 or above 0.48, these can be taken as approximate two-sided 75% confidence limits for this parameter. In order to obtain more accurate values for the confidence limits on &amp;lt;math&amp;gt;{{\sigma&#039;}}\,\!&amp;lt;/math&amp;gt;, we can perform the same procedure as before, but find the two values of &amp;lt;math&amp;gt;{{\sigma&#039;}}\,\!&amp;lt;/math&amp;gt; that correspond to a given value of &amp;lt;math&amp;gt;{\mu }&#039;.\,\!&amp;lt;/math&amp;gt; Using this method, we find that the 75% confidence limits on &amp;lt;math&amp;gt;{{\sigma&#039;}}\,\!&amp;lt;/math&amp;gt; are 0.23405 and 0.48936, which are close to the initial estimates of 0.24 and 0.48.&lt;br /&gt;
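The iterative search described above can be sketched with a simple bisection root finder. This is an illustrative sketch using only the Python standard library (it is not the implementation used by Weibull++); it recovers the two μ&#039; contour values for σ&#039; fixed at its MLE, which agree with the table values interpolated at σ&#039; = 0.32361.

```python
import math

# Failure times from the example: five units failing at these hours
times = [45.0, 60.0, 75.0, 90.0, 115.0]
logs = [math.log(t) for t in times]
N = len(times)
mu_hat = sum(logs) / N                                         # MLE of mu'    (~4.2926)
sig_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in logs) / N)  # MLE of sigma' (~0.3236)

CHI2 = 1.323303                                                # chi-squared(0.75; 1)

def log_lik(mu, sig):
    """Complete-data lognormal log-likelihood."""
    return sum(-math.log(t) - math.log(sig) - 0.5 * math.log(2.0 * math.pi)
               - 0.5 * ((math.log(t) - mu) / sig) ** 2 for t in times)

def contour(mu, sig):
    """Zero where the 75% likelihood-ratio contour is crossed."""
    return log_lik(mu, sig) - log_lik(mu_hat, sig_hat) + CHI2 / 2.0

def bisect(f, a, b, tol=1e-10):
    """Minimal bisection root finder; assumes f changes sign on [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# With sigma' held at its MLE, the two mu' values on the contour:
mu_lo = bisect(lambda m: contour(m, sig_hat), mu_hat - 1.0, mu_hat)
mu_hi = bisect(lambda m: contour(m, sig_hat), mu_hat, mu_hat + 1.0)
```

Repeating the two root searches over a grid of σ&#039; values reproduces the table above; the extremes of the resulting μ&#039; values give the parameter bounds.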
&lt;br /&gt;
====Bounds on Time and Reliability====&lt;br /&gt;
In order to calculate the bounds on a time estimate for a given reliability, or on a reliability estimate for a given time, the likelihood function needs to be rewritten in terms of one parameter and time/reliability, so that the maximum and minimum values of the time can be observed as the parameter is varied. This can be accomplished by substituting a form of the lognormal reliability equation into the likelihood function. The lognormal reliability equation can be written as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R=1-\Phi \left( \frac{\text{ln}(t)-{\mu }&#039;}{{{\sigma&#039;}}} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be rearranged to the form: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{\mu }&#039;=\text{ln}(t)-{{\sigma&#039;}}\cdot {{\Phi }^{-1}}(1-R)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\Phi }^{-1}}\,\!&amp;lt;/math&amp;gt; is the inverse standard normal distribution function. This equation can now be substituted into the likelihood function to produce a likelihood equation in terms of &amp;lt;math&amp;gt;{{\sigma&#039;}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;:  &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\sigma&#039;}},t/R)=\underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot {{\sigma&#039;}}\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-\left( \text{ln}(t)-{{\sigma&#039;}}\cdot {{\Phi }^{-1}}(1-R) \right)}{{{\sigma&#039;}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The unknown variable &amp;lt;math&amp;gt;t/R\,\!&amp;lt;/math&amp;gt; depends on what type of bounds are being determined.  If one is trying to determine the bounds on time for a given reliability, then &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is a known constant and &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; is the unknown variable. Conversely, if one is trying to determine the bounds on reliability for a given time, then &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; is a known constant and &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the unknown variable. Either way, the above equation can be used to solve the likelihood ratio equation for the values of interest.&lt;br /&gt;
&lt;br /&gt;
=====Example: LR Bounds on Time=====&lt;br /&gt;
&#039;&#039;&#039;Lognormal Distribution Likelihood Ratio Bound Example (Time)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For the same data set given for the [[The_Lognormal_Distribution#Example:_LR_Bounds_on_Parameters|parameter bounds example]], determine the two-sided 75% confidence bounds on the time estimate for a reliability of 80%.  The ML estimate for the time at &amp;lt;math&amp;gt;R(t)=80%\,\!&amp;lt;/math&amp;gt; is 55.718.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this example, we are trying to determine the two-sided 75% confidence bounds on the time estimate of 55.718. This is accomplished by substituting &amp;lt;math&amp;gt;R=0.80\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha =0.75\,\!&amp;lt;/math&amp;gt; into the likelihood function, and varying &amp;lt;math&amp;gt;{{\sigma&#039; }}\,\!&amp;lt;/math&amp;gt; until the maximum and minimum values of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; are found. The following table gives the values of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; based on given values of &amp;lt;math&amp;gt;{{\sigma&#039; }}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   {{\sigma&#039; }} &amp;amp; {{t}_{1}} &amp;amp; {{t}_{2}} &amp;amp; {{\sigma&#039; }} &amp;amp; {{t}_{1}} &amp;amp; {{t}_{2}}  \\&lt;br /&gt;
   0.24 &amp;amp; 56.832 &amp;amp; 62.879 &amp;amp; 0.37 &amp;amp; 44.841 &amp;amp; 64.031  \\&lt;br /&gt;
   0.25 &amp;amp; 54.660 &amp;amp; 64.287 &amp;amp; 0.38 &amp;amp; 44.494 &amp;amp; 63.454  \\&lt;br /&gt;
   0.26 &amp;amp; 53.093 &amp;amp; 65.079 &amp;amp; 0.39 &amp;amp; 44.200 &amp;amp; 62.809  \\&lt;br /&gt;
   0.27 &amp;amp; 51.811 &amp;amp; 65.576 &amp;amp; 0.40 &amp;amp; 43.963 &amp;amp; 62.093  \\&lt;br /&gt;
   0.28 &amp;amp; 50.711 &amp;amp; 65.881 &amp;amp; 0.41 &amp;amp; 43.786 &amp;amp; 61.304  \\&lt;br /&gt;
   0.29 &amp;amp; 49.743 &amp;amp; 66.041 &amp;amp; 0.42 &amp;amp; 43.674 &amp;amp; 60.436  \\&lt;br /&gt;
   0.30 &amp;amp; 48.881 &amp;amp; 66.085 &amp;amp; 0.43 &amp;amp; 43.634 &amp;amp; 59.481  \\&lt;br /&gt;
   0.31 &amp;amp; 48.106 &amp;amp; 66.028 &amp;amp; 0.44 &amp;amp; 43.681 &amp;amp; 58.426  \\&lt;br /&gt;
   0.32 &amp;amp; 47.408 &amp;amp; 65.883 &amp;amp; 0.45 &amp;amp; 43.832 &amp;amp; 57.252  \\&lt;br /&gt;
   0.33 &amp;amp; 46.777 &amp;amp; 65.657 &amp;amp; 0.46 &amp;amp; 44.124 &amp;amp; 55.924  \\&lt;br /&gt;
   0.34 &amp;amp; 46.208 &amp;amp; 65.355 &amp;amp; 0.47 &amp;amp; 44.625 &amp;amp; 54.373  \\&lt;br /&gt;
   0.35 &amp;amp; 45.697 &amp;amp; 64.983 &amp;amp; 0.48 &amp;amp; 45.517 &amp;amp; 52.418  \\&lt;br /&gt;
   0.36 &amp;amp; 45.242 &amp;amp; 64.541 &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This data set is represented graphically in the following contour plot:&lt;br /&gt;
&lt;br /&gt;
[[Image:WB.10 time vs sigma.png|center|450px| ]] &lt;br /&gt;
&lt;br /&gt;
As can be determined from the table, the lowest calculated value for &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; is 43.634, while the highest is 66.085. These represent the two-sided 75% confidence limits on the time at which reliability is equal to 80%.&lt;br /&gt;
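The sweep over σ&#039; in the table above can be reproduced programmatically. The sketch below (standard library only, not the Weibull++ implementation) uses the fact that for complete data the two μ&#039; values on the likelihood-ratio contour have a closed form for each fixed σ&#039;; each root is then converted to a time via ln t = μ&#039; + σ&#039;Φ⁻¹(1 − R) with R = 0.80.

```python
import math
from statistics import NormalDist

times = [45.0, 60.0, 75.0, 90.0, 115.0]       # failure times from the example
logs = [math.log(t) for t in times]
N = len(times)
m_bar = sum(logs) / N                          # MLE of mu'
s2 = sum((x - m_bar) ** 2 for x in logs) / N   # MLE of sigma'^2
sig_hat = math.sqrt(s2)
CHI2 = 1.323303                                # chi-squared(0.75; 1)
R = 0.80
z_R = NormalDist().inv_cdf(1.0 - R)            # Phi^{-1}(1 - R), about -0.8416

def mu_roots(sig):
    """The two mu' values on the contour for a fixed sigma' (closed form
    for complete data); returns None when no solution exists."""
    rhs = N * math.log(sig_hat / sig) + N / 2.0 + CHI2 / 2.0
    disc = rhs * 2.0 * sig ** 2 / N - s2
    if disc < 0.0:
        return None
    d = math.sqrt(disc)
    return (m_bar - d, m_bar + d)

# Sweep sigma' over the same grid as the table; ln t = mu' + sigma' * z_R
t_values = []
for k in range(25):                            # sigma' = 0.24, 0.25, ..., 0.48
    sig = 0.24 + 0.01 * k
    roots = mu_roots(sig)
    if roots:
        t_values += [math.exp(mu + sig * z_R) for mu in roots]
t_low, t_high = min(t_values), max(t_values)
```

The minimum and maximum of the collected times match the table extremes of 43.634 and 66.085 hours. The same sweep, solved for R at fixed t, yields the reliability bounds in the next example.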
&lt;br /&gt;
=====Example: LR Bounds on Reliability=====&lt;br /&gt;
&#039;&#039;&#039;Lognormal Distribution Likelihood Ratio Bound Example (Reliability)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For the same data set given above for the [[The_Lognormal_Distribution#Example:_LR_Bounds_on_Parameters|parameter bounds example]], determine the two-sided 75% confidence bounds on the reliability estimate for &amp;lt;math&amp;gt;t=65\,\!&amp;lt;/math&amp;gt;.  The ML estimate for the reliability at &amp;lt;math&amp;gt;t=65\,\!&amp;lt;/math&amp;gt; is 64.261%.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this example, we are trying to determine the two-sided 75% confidence bounds on the reliability estimate of 64.261%. This is accomplished by substituting &amp;lt;math&amp;gt;t=65\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha =0.75\,\!&amp;lt;/math&amp;gt; into the likelihood function, and varying &amp;lt;math&amp;gt;{{\sigma&#039;}}\,\!&amp;lt;/math&amp;gt; until the maximum and minimum values of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; are found. The following table gives the values of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; based on given values of &amp;lt;math&amp;gt;{{\sigma&#039; }}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   {{\sigma&#039;}} &amp;amp; {{R}_{1}} &amp;amp; {{R}_{2}} &amp;amp; {{\sigma&#039;}} &amp;amp; {{R}_{1}} &amp;amp; {{R}_{2}}  \\&lt;br /&gt;
   0.24 &amp;amp; 61.107% &amp;amp; 75.910% &amp;amp; 0.37 &amp;amp; 43.573% &amp;amp; 78.845%  \\&lt;br /&gt;
   0.25 &amp;amp; 55.906% &amp;amp; 78.742% &amp;amp; 0.38 &amp;amp; 43.807% &amp;amp; 78.180%  \\&lt;br /&gt;
   0.26 &amp;amp; 55.528% &amp;amp; 80.131% &amp;amp; 0.39 &amp;amp; 44.147% &amp;amp; 77.448%  \\&lt;br /&gt;
   0.27 &amp;amp; 50.067% &amp;amp; 80.903% &amp;amp; 0.40 &amp;amp; 44.593% &amp;amp; 76.646%  \\&lt;br /&gt;
   0.28 &amp;amp; 48.206% &amp;amp; 81.319% &amp;amp; 0.41 &amp;amp; 45.146% &amp;amp; 75.767%  \\&lt;br /&gt;
   0.29 &amp;amp; 46.779% &amp;amp; 81.499% &amp;amp; 0.42 &amp;amp; 45.813% &amp;amp; 74.802%  \\&lt;br /&gt;
   0.30 &amp;amp; 45.685% &amp;amp; 81.508% &amp;amp; 0.43 &amp;amp; 46.604% &amp;amp; 73.737%  \\&lt;br /&gt;
   0.31 &amp;amp; 44.857% &amp;amp; 81.387% &amp;amp; 0.44 &amp;amp; 47.538% &amp;amp; 72.551%  \\&lt;br /&gt;
   0.32 &amp;amp; 44.250% &amp;amp; 81.159% &amp;amp; 0.45 &amp;amp; 48.645% &amp;amp; 71.212%  \\&lt;br /&gt;
   0.33 &amp;amp; 43.827% &amp;amp; 80.842% &amp;amp; 0.46 &amp;amp; 49.980% &amp;amp; 69.661%  \\&lt;br /&gt;
   0.34 &amp;amp; 43.565% &amp;amp; 80.446% &amp;amp; 0.47 &amp;amp; 51.652% &amp;amp; 67.789%  \\&lt;br /&gt;
   0.35 &amp;amp; 43.444% &amp;amp; 79.979% &amp;amp; 0.48 &amp;amp; 53.956% &amp;amp; 65.299%  \\&lt;br /&gt;
   0.36 &amp;amp; 43.450% &amp;amp; 79.444% &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This data set is represented graphically in the following contour plot:&lt;br /&gt;
&lt;br /&gt;
[[Image:WB.10 reliability v sigma.png|center|450px| ]] &lt;br /&gt;
&lt;br /&gt;
As can be determined from the table, the lowest calculated value for &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is 43.444%, while the highest is 81.508%. These represent the two-sided 75% confidence limits on the reliability at &amp;lt;math&amp;gt;t=65\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Bayesian Confidence Bounds===&lt;br /&gt;
====Bounds on Parameters====&lt;br /&gt;
From [[Parameter Estimation]], we know that the marginal distribution of parameter &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt; is: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   f({\mu }&#039;|Data)= &amp;amp; \int_{0}^{\infty }f({\mu }&#039;,{{\sigma&#039;}}|Data)d{{\sigma&#039;}} \\ &lt;br /&gt;
  = &amp;amp; \frac{\int_{0}^{\infty }L(Data|{\mu }&#039;,{{\sigma&#039;}})\varphi ({\mu }&#039;)\varphi ({{\sigma&#039;}})d{{\sigma&#039;}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(Data|{\mu }&#039;,{{\sigma&#039;}})\varphi ({\mu }&#039;)\varphi ({{\sigma&#039;}})d{\mu }&#039;d{{\sigma&#039;}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
::&amp;lt;math&amp;gt;\varphi ({{\sigma &#039;}})\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\tfrac{1}{{{\sigma &#039;}}}\,\!&amp;lt;/math&amp;gt;, the non-informative prior of &amp;lt;math&amp;gt;{{\sigma &#039;}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi ({\mu }&#039;)\,\!&amp;lt;/math&amp;gt; is a uniform distribution from &amp;lt;math&amp;gt;-\infty \,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+\infty \,\!&amp;lt;/math&amp;gt;, the non-informative prior of &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
With the above prior distributions, &amp;lt;math&amp;gt;f({\mu }&#039;|Data)\,\!&amp;lt;/math&amp;gt; can be rewritten as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f({\mu }&#039;|Data)=\frac{\int_{0}^{\infty }L(Data|{\mu }&#039;,{{\sigma &#039;}})\tfrac{1}{{{\sigma &#039;}}}d{{\sigma &#039;}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(Data|{\mu }&#039;,{{\sigma &#039;}})\tfrac{1}{{{\sigma &#039;}}}d{\mu }&#039;d{{\sigma &#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The one-sided upper bound of  &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=P({\mu }&#039;\le \mu _{U}^{\prime })=\int_{-\infty }^{\mu _{U}^{\prime }}f({\mu }&#039;|Data)d{\mu }&#039;\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The one-sided lower bound of &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;1-CL=P({\mu }&#039;\le \mu _{L}^{\prime })=\int_{-\infty }^{\mu _{L}^{\prime }}f({\mu }&#039;|Data)d{\mu }&#039;\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The two-sided bounds on &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=P(\mu _{L}^{\prime }\le {\mu }&#039;\le \mu _{U}^{\prime })=\int_{\mu _{L}^{\prime }}^{\mu _{U}^{\prime }}f({\mu }&#039;|Data)d{\mu }&#039;\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same method can be used to obtain the bounds on &amp;lt;math&amp;gt;{{\sigma &#039;}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
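The marginal posterior above has no closed form and must be evaluated numerically. The following is a rough sketch using a trapezoid rule over a truncated integration region (standard library only; production code would use adaptive quadrature); the data set and integration limits are illustrative, reusing the five failure times from the earlier likelihood ratio example.

```python
import math

times = [45.0, 60.0, 75.0, 90.0, 115.0]   # illustrative complete data

def lik(mu, sig):
    """Complete-data lognormal likelihood L(Data | mu', sigma')."""
    p = 1.0
    for t in times:
        z = (math.log(t) - mu) / sig
        p *= math.exp(-0.5 * z * z) / (t * sig * math.sqrt(2.0 * math.pi))
    return p

def posterior_prob_mu(mu_u, mu_lo=3.0, mu_hi=6.0, sig_lo=0.05, sig_hi=2.0, n=150):
    """P(mu' <= mu_u | Data) with the non-informative priors 1/sigma' for
    sigma' and uniform for mu', via a trapezoid rule on a truncated region."""
    def integral(mu_top):
        dm = (mu_top - mu_lo) / n
        ds = (sig_hi - sig_lo) / n
        total = 0.0
        for i in range(n + 1):
            mu = mu_lo + i * dm
            wi = 0.5 if i in (0, n) else 1.0
            for j in range(n + 1):
                sig = sig_lo + j * ds
                wj = 0.5 if j in (0, n) else 1.0
                total += wi * wj * lik(mu, sig) / sig
        return total * dm * ds
    return integral(mu_u) / integral(mu_hi)
```

The one-sided upper bound at confidence level CL is then the root of posterior_prob_mu(μ&#039;_U) = CL, found with any one-dimensional root finder.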
&lt;br /&gt;
====Bounds on Time (Type 1)====&lt;br /&gt;
The reliable life of the lognormal distribution is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\ln T={\mu }&#039;+{{\sigma &#039;}}{{\Phi }^{-1}}(1-R)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The one-sided upper bound on time is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(\ln t\le \ln {{t}_{U}})=\underset{}{\overset{}{\mathop{\Pr }}}\,({\mu }&#039;+{{\sigma &#039;}}{{\Phi }^{-1}}(1-R)\le \ln {{t}_{U}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above equation can be rewritten in terms of &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt; as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\underset{}{\overset{}{\mathop{\Pr }}}\,({\mu }&#039;\le \ln {{t}_{U}}-{{\sigma &#039;}}{{\Phi }^{-1}}(1-R))\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the posterior distribution of &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt;, we get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\frac{\int_{0}^{\infty }\int_{-\infty }^{\ln {{t}_{U}}-{{\sigma &#039;}}{{\Phi }^{-1}}(1-R)}L({{\sigma &#039;}},{\mu }&#039;)\tfrac{1}{{{\sigma &#039;}}}d{\mu }&#039;d{{\sigma &#039;}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L({{\sigma &#039;}},{\mu }&#039;)\tfrac{1}{{{\sigma &#039;}}}d{\mu }&#039;d{{\sigma &#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above equation is solved for &amp;lt;math&amp;gt;{{t}_{U}}.\,\!&amp;lt;/math&amp;gt; The same method can be applied to obtain the one-sided lower bounds and two-sided bounds on time.&lt;br /&gt;
&lt;br /&gt;
====Bounds on Reliability (Type 2)====&lt;br /&gt;
&lt;br /&gt;
The one-sided upper bound on reliability is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(R\le {{R}_{U}})=\underset{}{\overset{}{\mathop{\Pr }}}\,({\mu }&#039;\le \ln t-{{\sigma &#039;}}{{\Phi }^{-1}}(1-{{R}_{U}}))\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the posterior distribution of &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt;, this is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\frac{\int_{0}^{\infty }\int_{-\infty }^{\ln t-{{\sigma &#039;}}{{\Phi }^{-1}}(1-{{R}_{U}})}L({{\sigma&#039;}},{\mu }&#039;)\tfrac{1}{{{\sigma&#039;}}}d{\mu }&#039;d{{\sigma &#039;}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L({{\sigma &#039;}},{\mu }&#039;)\tfrac{1}{{{\sigma &#039;}}}d{\mu }&#039;d{{\sigma &#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above equation is solved for &amp;lt;math&amp;gt;{{R}_{U}}.\,\!&amp;lt;/math&amp;gt; The same method is used to calculate the one-sided lower bounds and two-sided bounds on reliability.&lt;br /&gt;
&lt;br /&gt;
====Example: Bayesian Bounds====&lt;br /&gt;
&#039;&#039;&#039;Lognormal Distribution Bayesian Bound Example (Parameters)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Determine the two-sided 90% Bayesian confidence bounds on the lognormal parameter estimates for the data given next:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \text{Data Point Index} &amp;amp; \text{State End Time}  \\&lt;br /&gt;
   \text{1} &amp;amp; \text{2}  \\&lt;br /&gt;
   \text{2} &amp;amp; \text{5}  \\&lt;br /&gt;
   \text{3} &amp;amp; \text{11}  \\&lt;br /&gt;
   \text{4} &amp;amp; \text{23}  \\&lt;br /&gt;
   \text{5} &amp;amp; \text{29}  \\&lt;br /&gt;
   \text{6} &amp;amp; \text{37}  \\&lt;br /&gt;
   \text{7} &amp;amp; \text{43}  \\&lt;br /&gt;
   \text{8} &amp;amp; \text{59}  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The data points are entered into a times-to-failure data sheet. The lognormal distribution is selected under Distributions. The Bayesian confidence bounds method applies only to the MLE analysis method; therefore, Maximum Likelihood (MLE) is selected under Analysis Method and Use Bayesian is selected under the Confidence Bounds Method in the Analysis tab.&lt;br /&gt;
&lt;br /&gt;
The two-sided 90% Bayesian confidence bounds on the lognormal parameter are obtained using the QCP and clicking on the Calculate Bounds button in the Parameter Bounds tab as follows: &lt;br /&gt;
&lt;br /&gt;
[[Image:Lognormal Distribution Example 8 QCP.png|center|650px| ]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Lognormal Distribution Example 8 Parameter Bounds.png|center|500px| ]]&lt;br /&gt;
&lt;br /&gt;
==Lognormal Distribution Examples==&lt;br /&gt;
{{:Lognormal Distribution Examples}}&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=The_Normal_Distribution&amp;diff=65091</id>
		<title>The Normal Distribution</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=The_Normal_Distribution&amp;diff=65091"/>
		<updated>2017-06-29T19:46:43Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Probability Plotting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|9|The Normal (Gaussian) Distribution}}&lt;br /&gt;
The normal distribution, also known as the Gaussian distribution, is the most widely used general-purpose distribution. It is for this reason that it is included among the lifetime distributions commonly used for reliability and life data analysis. There are some who argue that the normal distribution is inappropriate for modeling lifetime data because the left-hand limit of the distribution extends to negative infinity. This could conceivably result in modeling negative times-to-failure. However, provided that the distribution in question has a relatively high mean and a relatively small standard deviation, the issue of negative failure times should not present itself as a problem. Nevertheless, the normal distribution has been shown to be useful for modeling the lifetimes of consumable items, such as printer toner cartridges.  &lt;br /&gt;
&lt;br /&gt;
==Normal Probability Density Function==&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the normal distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\frac{1}{\sigma \sqrt{2\pi }}{{e}^{-\frac{1}{2}{{\left( \frac{t-\mu }{\sigma } \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\mu\,\!&amp;lt;/math&amp;gt; = mean of the normal times-to-failure, also denoted as &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\sigma\,\!&amp;lt;/math&amp;gt; = standard deviation of the times-to-failure&lt;br /&gt;
&lt;br /&gt;
It is a 2-parameter distribution with parameters &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; (or &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; ) and &amp;lt;math&amp;gt;{{\sigma }}\,\!&amp;lt;/math&amp;gt; (i.e., the mean and the standard deviation, respectively).&lt;br /&gt;
&lt;br /&gt;
==Normal Statistical Properties==&lt;br /&gt;
===The Normal Mean, Median and Mode===&lt;br /&gt;
The normal mean or MTTF is actually one of the parameters of the distribution, usually denoted as &amp;lt;math&amp;gt;\mu .\,\!&amp;lt;/math&amp;gt; Because the normal distribution is symmetrical, the median and the mode are always equal to the mean:  &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\mu =\tilde{T}=\breve{T}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===The Normal Standard Deviation===&lt;br /&gt;
As with the mean, the standard deviation for the normal distribution is actually one of the parameters, usually denoted as &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===The Normal Reliability Function===&lt;br /&gt;
The reliability for a mission of time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; for the normal distribution is determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t)=\int_{t}^{\infty }f(x)dx=\int_{t}^{\infty }\frac{1}{{{\sigma }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{x-\mu }{{{\sigma }}} \right)}^{2}}}}dx\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no closed-form solution for the normal reliability function. Solutions can be obtained via the use of standard normal tables. Since the application automatically solves for the reliability, we will not discuss manual solution methods. For interested readers, full explanations can be found in the references.&lt;br /&gt;
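Although there is no closed-form solution in terms of elementary functions, the integral is easily evaluated numerically through the error function, since Φ(z) = ½[1 + erf(z/√2)]. A minimal sketch using only the Python standard library:

```python
import math

def normal_reliability(t, mu, sigma):
    """R(t) = 1 - Phi((t - mu)/sigma), computed with the complementary
    error function rather than standard normal tables."""
    return 0.5 * math.erfc((t - mu) / (sigma * math.sqrt(2.0)))
```

For example, a population with mean life 100 hours and standard deviation 10 hours has R(100) = 0.5 and R(110) ≈ 0.1587, matching the one-sigma table value.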
&lt;br /&gt;
===The Normal Conditional Reliability Function===&lt;br /&gt;
The normal conditional reliability function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t|T)=\frac{R(T+t)}{R(T)}=\frac{\int_{T+t}^{\infty }\tfrac{1}{{{\sigma }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{x-\mu }{{{\sigma }}} \right)}^{2}}}}dx}{\int_{T}^{\infty }\tfrac{1}{{{\sigma }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{x-\mu }{{{\sigma }}} \right)}^{2}}}}dx}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once again, the use of standard normal tables for the calculation of the normal conditional reliability is necessary, as there is no closed form solution.&lt;br /&gt;
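Because the conditional reliability is just a ratio of two survival probabilities, it can reuse the same erfc-based evaluation. A self-contained sketch (the helper is repeated here so the example stands alone):

```python
import math

def normal_reliability(t, mu, sigma):
    """R(t) = 1 - Phi((t - mu)/sigma) via the complementary error function."""
    return 0.5 * math.erfc((t - mu) / (sigma * math.sqrt(2.0)))

def conditional_reliability(t, T, mu, sigma):
    """R(t | T) = R(T + t) / R(T): reliability for an additional t hours,
    given that the unit has already survived to age T."""
    return normal_reliability(T + t, mu, sigma) / normal_reliability(T, mu, sigma)
```

For instance, with mu = 100 and sigma = 10, a unit that has survived 90 hours has conditional reliability R(10 | 90) = R(100)/R(90) ≈ 0.594 for the next 10 hours.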
&lt;br /&gt;
===The Normal Reliable Life===&lt;br /&gt;
Since there is no closed-form solution for the normal reliability function, there will also be no closed-form solution for the normal reliable life. To determine the normal reliable life, one must solve: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T)=\int_{T}^{\infty }\frac{1}{{{\sigma }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\mu }{{{\sigma }}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
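Although no closed form exists in terms of elementary functions, the equation can be inverted with the standard normal quantile function: since &amp;lt;math&amp;gt;R(T)=1-\Phi \left( \tfrac{T-\mu }{\sigma } \right)\,\!&amp;lt;/math&amp;gt;, the reliable life is &amp;lt;math&amp;gt;T=\mu +\sigma {{\Phi }^{-1}}(1-R)\,\!&amp;lt;/math&amp;gt;. A sketch with assumed parameter values:&lt;br /&gt;

```python
# Reliable life: solve R(T) = r_target for T using the inverse cdf
# supplied by statistics.NormalDist (Python 3.8+).
from statistics import NormalDist

def reliable_life(r_target, mu, sigma):
    """Mission time T such that the reliability equals r_target."""
    return NormalDist(mu, sigma).inv_cdf(1.0 - r_target)

# For r_target = 0.5 the reliable life equals the mean, by symmetry.
print(reliable_life(0.5, 45.0, 30.0))  # 45.0
```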
&lt;br /&gt;
===The Normal Failure Rate Function===&lt;br /&gt;
The instantaneous normal failure rate is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (t)=\frac{f(t)}{R(t)}=\frac{\tfrac{1}{{{\sigma }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\mu }{{{\sigma }}} \right)}^{2}}}}}{\int_{t}^{\infty }\tfrac{1}{{{\sigma }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{x-\mu }{{{\sigma }}} \right)}^{2}}}}dx}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
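The ratio of the &#039;&#039;pdf&#039;&#039; to the reliability function is again easy to evaluate numerically. A minimal sketch with assumed parameter values (the normal failure rate is strictly increasing in &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;):&lt;br /&gt;

```python
# Instantaneous failure rate lambda(t) = f(t) / R(t) for the normal
# distribution, using standard-library functions only.
import math

def hazard(t, mu, sigma):
    pdf = math.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    rel = 0.5 * math.erfc((t - mu) / (sigma * math.sqrt(2)))
    return pdf / rel

# With assumed values mu=45, sigma=30, the failure rate grows with time.
print(hazard(40.0, 45.0, 30.0), hazard(50.0, 45.0, 30.0))
```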
&lt;br /&gt;
&lt;br /&gt;
==Characteristics of the Normal Distribution==&lt;br /&gt;
Some of the specific characteristics of the normal distribution are the following:&lt;br /&gt;
:*The normal &#039;&#039;pdf&#039;&#039; has a mean, &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt;, which is equal to the median, &amp;lt;math&amp;gt;\breve{T}\,\!&amp;lt;/math&amp;gt;, and also equal to the mode, &amp;lt;math&amp;gt;\tilde{T}\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;\bar{T}=\breve{T}=\tilde{T}\,\!&amp;lt;/math&amp;gt;. This is because the normal distribution is symmetrical about its mean.&lt;br /&gt;
&lt;br /&gt;
[[Image:WB.9 normalpdf.png|center|400px| ]]&lt;br /&gt;
&lt;br /&gt;
:*The mean, &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;, or the mean life or the &amp;lt;math&amp;gt;MTTF\,\!&amp;lt;/math&amp;gt;, is also the location parameter of the normal &#039;&#039;pdf&#039;&#039;, as it locates the &#039;&#039;pdf&#039;&#039; along the abscissa. It can assume values of &amp;lt;math&amp;gt;-\infty &amp;lt;\bar{T}&amp;lt;\infty \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:*The normal &#039;&#039;pdf&#039;&#039; has no shape parameter. This means that the normal &#039;&#039;pdf&#039;&#039; has only one shape, the bell shape, and this shape does not change.&lt;br /&gt;
&lt;br /&gt;
[[Image:WB.9 effect of sigma.png|center|400px| ]]&lt;br /&gt;
&lt;br /&gt;
:*The standard deviation, &amp;lt;math&amp;gt;{{\sigma }}\,\!&amp;lt;/math&amp;gt;, is the scale parameter of the normal &#039;&#039;pdf&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
:*As &amp;lt;math&amp;gt;{{\sigma }}\,\!&amp;lt;/math&amp;gt; decreases, the &#039;&#039;pdf&#039;&#039; gets pushed toward the mean, or it becomes narrower and taller.&lt;br /&gt;
&lt;br /&gt;
:*As &amp;lt;math&amp;gt;{{\sigma }}\,\!&amp;lt;/math&amp;gt; increases, the &#039;&#039;pdf&#039;&#039; spreads out away from the mean, or it becomes broader and shallower.&lt;br /&gt;
&lt;br /&gt;
:*The standard deviation can assume values of &amp;lt;math&amp;gt;0&amp;lt;{{\sigma }}&amp;lt;\infty \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
:*The greater the variability, the larger the value of &amp;lt;math&amp;gt;{{\sigma }}\,\!&amp;lt;/math&amp;gt;, and vice versa.&lt;br /&gt;
&lt;br /&gt;
:*The standard deviation is also the distance between the mean and the point of inflection of the &#039;&#039;pdf&#039;&#039;, on each side of the mean. The point of inflection is the point of the &#039;&#039;pdf&#039;&#039; where the slope changes from increasing to decreasing (or vice versa), or where the second derivative of the &#039;&#039;pdf&#039;&#039; has a value of zero.&lt;br /&gt;
&lt;br /&gt;
:*The normal &#039;&#039;pdf&#039;&#039; starts at &amp;lt;math&amp;gt;t=-\infty \,\!&amp;lt;/math&amp;gt; with an &amp;lt;math&amp;gt;f(t)=0\,\!&amp;lt;/math&amp;gt;. As &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; increases, &amp;lt;math&amp;gt;f(t)\,\!&amp;lt;/math&amp;gt; also increases, goes through its point of inflection and reaches its maximum value at &amp;lt;math&amp;gt;t=\bar{T}\,\!&amp;lt;/math&amp;gt;. Thereafter, &amp;lt;math&amp;gt;f(t)\,\!&amp;lt;/math&amp;gt; decreases, goes through its point of inflection, and assumes a value of &amp;lt;math&amp;gt;f(t)=0\,\!&amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt;t=+\infty \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Weibull++ Notes on Negative Time Values&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
One of the disadvantages of using the normal distribution for reliability calculations is that it starts at negative infinity, which can result in negative values for some results. Most components of Weibull++ neither accept nor return negative time values. Certain components of the application reserve negative values for suspensions, or will not return negative results. For example, the Quick Calculation Pad will return a null value (zero) if the result is negative. Only the Free-Form (Probit) data sheet can accept negative values for the random variable (x-axis values).&lt;br /&gt;
&lt;br /&gt;
==Estimation of the Parameters==&lt;br /&gt;
===Probability Plotting===&lt;br /&gt;
As described before, probability plotting involves plotting the failure times and associated unreliability estimates on specially constructed probability plotting paper. The form of this paper is based on a linearization of the &#039;&#039;cdf&#039;&#039; of the specific distribution. For the normal distribution, the cumulative distribution function can be written as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=\Phi \left( \frac{t-\mu }{{{\sigma }}} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\Phi }^{-1}}\left[ F(t) \right]=-\frac{\mu}{\sigma}+\frac{1}{\sigma}t\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y={{\Phi }^{-1}}\left[ F(t) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;a=-\frac{\mu }{\sigma }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=\frac{1}{\sigma }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which results in the linear equation of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
y=a+bt &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The normal probability paper resulting from this linearized &#039;&#039;cdf&#039;&#039; function is shown next.&lt;br /&gt;
&lt;br /&gt;
[[Image:WB.9normalPP.png|center|400px| ]]&lt;br /&gt;
&lt;br /&gt;
Since the normal distribution is symmetrical, the area under the &#039;&#039;pdf&#039;&#039; curve from &amp;lt;math&amp;gt;-\infty \,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;0.5\,\!&amp;lt;/math&amp;gt;, as is the area from &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+\infty \,\!&amp;lt;/math&amp;gt;. Consequently, the value of &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; is said to be the point where &amp;lt;math&amp;gt;R(t)=Q(t)=50%\,\!&amp;lt;/math&amp;gt;.  This means that the estimate of &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; can be read from the point where the plotted line crosses the 50% unreliability line.&lt;br /&gt;
&lt;br /&gt;
To determine the value of &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt; from the probability plot, it is first necessary to understand that the area under the &#039;&#039;pdf&#039;&#039; curve within one standard deviation of the mean (an interval two standard deviations wide) represents 68.3% of the total area under the curve. This is represented graphically in the following figure.&lt;br /&gt;
&lt;br /&gt;
[[Image:WB.9 68.3.png|center|400px| ]]&lt;br /&gt;
&lt;br /&gt;
Consequently,  the interval between &amp;lt;math&amp;gt;Q(t)=84.15%\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Q(t)=15.85%\,\!&amp;lt;/math&amp;gt; represents two standard deviations, since this is an interval of 68.3% ( &amp;lt;math&amp;gt;84.15-15.85=68.3\,\!&amp;lt;/math&amp;gt; ), and is centered on the mean at 50%.  As a result, the standard deviation can be estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{\sigma }=\frac{t(Q=84.15%)-t(Q=15.85%)}{2}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That is: the value of &amp;lt;math&amp;gt;\widehat{\sigma }\,\!&amp;lt;/math&amp;gt; is obtained by subtracting the time value where the plotted line crosses the 15.85% unreliability line from the time value where it crosses the 84.15% unreliability line, and dividing the result by two.  This process is illustrated in the following example.&lt;br /&gt;
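The graphical estimate can be checked numerically. In this sketch the two times are read off an ideal straight-line fit (represented by a normal distribution with assumed parameters) rather than a hand-drawn plot:&lt;br /&gt;

```python
# Graphical estimate of sigma: half the time interval between the
# 84.15% and 15.85% unreliability crossings of the fitted line.
from statistics import NormalDist

mu, sigma = 45.0, 30.0            # assumed "fitted line" parameters
line = NormalDist(mu, sigma)
t_hi = line.inv_cdf(0.8415)       # time at 84.15% unreliability
t_lo = line.inv_cdf(0.1585)       # time at 15.85% unreliability
sigma_hat = (t_hi - t_lo) / 2
print(round(sigma_hat, 2))        # close to the true sigma = 30
```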
&lt;br /&gt;
&lt;br /&gt;
====Normal Distribution Probability Plotting Example====&lt;br /&gt;
{{:Normal_Distribution_Probability_Plotting_Example}}&lt;br /&gt;
&lt;br /&gt;
===Rank Regression on Y===&lt;br /&gt;
Performing rank regression on Y requires that a straight line be fitted to a set of data points such that the sum of the squares of the vertical deviations from the points to the line is minimized.&lt;br /&gt;
&lt;br /&gt;
The least squares parameter estimation method (regression analysis) was discussed in [[Parameter Estimation]], and the following equations for regression on Y were derived:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}\hat{a}= &amp;amp; \bar{y}-\hat{b}\bar{x}  \\&lt;br /&gt;
                    =&amp;amp; \frac{\sum_{i=1}^N y_{i}}{N}-\hat{b}\frac{\sum_{i=1}^{N}x_{i}}{N}\\&lt;br /&gt;
     \end{align} &lt;br /&gt;
   \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the case of the normal distribution, the equations for &amp;lt;math&amp;gt;{{y}_{i}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{i}}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{y}_{i}}={{\Phi }^{-1}}\left[ F({{t}_{i}}) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{x}_{i}}={{t}_{i}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the values for &amp;lt;math&amp;gt;F({{t}_{i}})\,\!&amp;lt;/math&amp;gt; are estimated from the median ranks. Once &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; are obtained, &amp;lt;math&amp;gt;\widehat{\sigma }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{\mu }\,\!&amp;lt;/math&amp;gt; can easily be obtained from the above equations.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Correlation Coefficient&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The estimator of the sample correlation coefficient, &amp;lt;math&amp;gt;\hat{\rho }\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\rho }=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,({{x}_{i}}-\overline{x})({{y}_{i}}-\overline{y})}{\sqrt{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{({{x}_{i}}-\overline{x})}^{2}}\cdot \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{({{y}_{i}}-\overline{y})}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====RRY Example====&lt;br /&gt;
&#039;&#039;&#039;Normal Distribution RRY Example&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
14 units were reliability tested and the following life test data were obtained. Assuming the data follow a normal distribution, estimate the parameters and determine the correlation coefficient, &amp;lt;math&amp;gt;\rho\,\!&amp;lt;/math&amp;gt;, using rank regression on Y.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; style=&amp;quot;text-align:center&amp;quot;|The test data&lt;br /&gt;
|- &lt;br /&gt;
!Data point index&lt;br /&gt;
!Time-to-failure&lt;br /&gt;
|- &lt;br /&gt;
|1 ||5&lt;br /&gt;
|- &lt;br /&gt;
|2 ||10&lt;br /&gt;
|- &lt;br /&gt;
|3 ||15&lt;br /&gt;
|- &lt;br /&gt;
|4 ||20&lt;br /&gt;
|- &lt;br /&gt;
|5 ||25&lt;br /&gt;
|- &lt;br /&gt;
|6 ||30&lt;br /&gt;
|-&lt;br /&gt;
|7||35&lt;br /&gt;
|-&lt;br /&gt;
|8||40&lt;br /&gt;
|-&lt;br /&gt;
|9||50&lt;br /&gt;
|-&lt;br /&gt;
|10||60&lt;br /&gt;
|-&lt;br /&gt;
|11||70&lt;br /&gt;
|-&lt;br /&gt;
|12||80&lt;br /&gt;
|-&lt;br /&gt;
|13||90&lt;br /&gt;
|-&lt;br /&gt;
|14||100&lt;br /&gt;
|} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Construct a table like the one shown next.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\overset{{}}{\mathop{\text{Least Squares Analysis}}}\,\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \text{N} &amp;amp; \text{T}_{i} &amp;amp; \text{F(T}_{i}\text{)} &amp;amp; \text{y}_{i} &amp;amp; \text{T}_{i}^{2} &amp;amp; \text{y}_{i}^{2} &amp;amp; \text{T}_{i}\text{ y}_{i}  \\&lt;br /&gt;
   \text{1} &amp;amp; \text{5} &amp;amp; \text{0}\text{.0483} &amp;amp; \text{-1}\text{.6619} &amp;amp; \text{25} &amp;amp; \text{2}\text{.7619} &amp;amp; \text{-8}\text{.3095}  \\&lt;br /&gt;
   \text{2} &amp;amp; \text{10} &amp;amp; \text{0}\text{.1170} &amp;amp; \text{-1}\text{.1901} &amp;amp; \text{100} &amp;amp; \text{1}\text{.4163} &amp;amp; \text{-11}\text{.9010}  \\&lt;br /&gt;
   \text{3} &amp;amp; \text{15} &amp;amp; \text{0}\text{.1865} &amp;amp; \text{-0}\text{.8908} &amp;amp; \text{225} &amp;amp; \text{0}\text{.7935} &amp;amp; \text{-13}\text{.3620}  \\&lt;br /&gt;
   \text{4} &amp;amp; \text{20} &amp;amp; \text{0}\text{.2561} &amp;amp; \text{-0}\text{.6552} &amp;amp; \text{400} &amp;amp; \text{0}\text{.4292} &amp;amp; \text{-13}\text{.1030}  \\&lt;br /&gt;
   \text{5} &amp;amp; \text{25} &amp;amp; \text{0}\text{.3258} &amp;amp; \text{-0}\text{.4512} &amp;amp; \text{625} &amp;amp; \text{0}\text{.2036} &amp;amp; \text{-11}\text{.2800}  \\&lt;br /&gt;
   \text{6} &amp;amp; \text{30} &amp;amp; \text{0}\text{.3954} &amp;amp; \text{-0}\text{.2647} &amp;amp; \text{900} &amp;amp; \text{0}\text{.0701} &amp;amp; \text{-7}\text{.9422}  \\&lt;br /&gt;
   \text{7} &amp;amp; \text{35} &amp;amp; \text{0}\text{.4651} &amp;amp; \text{-0}\text{.0873} &amp;amp; \text{1225} &amp;amp; \text{0}\text{.0076} &amp;amp; \text{-3}\text{.0542}  \\&lt;br /&gt;
   \text{8} &amp;amp; \text{40} &amp;amp; \text{0}\text{.5349} &amp;amp; \text{0}\text{.0873} &amp;amp; \text{1600} &amp;amp; \text{0}\text{.0076} &amp;amp; \text{3}\text{.4905}  \\&lt;br /&gt;
   \text{9} &amp;amp; \text{50} &amp;amp; \text{0}\text{.6046} &amp;amp; \text{0}\text{.2647} &amp;amp; \text{2500} &amp;amp; \text{0}\text{.0701} &amp;amp; \text{13}\text{.2370}  \\&lt;br /&gt;
   \text{10} &amp;amp; \text{60} &amp;amp; \text{0}\text{.6742} &amp;amp; \text{0}\text{.4512} &amp;amp; \text{3600} &amp;amp; \text{0}\text{.2036} &amp;amp; \text{27}\text{.0720}  \\&lt;br /&gt;
   \text{11} &amp;amp; \text{70} &amp;amp; \text{0}\text{.7439} &amp;amp; \text{0}\text{.6552} &amp;amp; \text{4900} &amp;amp; \text{0}\text{.4292} &amp;amp; \text{45}\text{.8605}  \\&lt;br /&gt;
   \text{12} &amp;amp; \text{80} &amp;amp; \text{0}\text{.8135} &amp;amp; \text{0}\text{.8908} &amp;amp; \text{6400} &amp;amp; \text{0}\text{.7935} &amp;amp; \text{71}\text{.2640}  \\&lt;br /&gt;
   \text{13} &amp;amp; \text{90} &amp;amp; \text{0}\text{.8830} &amp;amp; \text{1}\text{.1901} &amp;amp; \text{8100} &amp;amp; \text{1}\text{.4163} &amp;amp; \text{107}\text{.1090}  \\&lt;br /&gt;
   \text{14} &amp;amp; \text{100} &amp;amp; \text{0}\text{.9517} &amp;amp; \text{1}\text{.6619} &amp;amp; \text{10000} &amp;amp; \text{2}\text{.7619} &amp;amp; \text{166}\text{.1900}  \\&lt;br /&gt;
   \mathop{}_{}^{} &amp;amp; \text{630} &amp;amp; {} &amp;amp; \text{0} &amp;amp; \text{40600} &amp;amp; \text{11}\text{.3646} &amp;amp; \text{365}\text{.2711}  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The median rank values ( &amp;lt;math&amp;gt;F({{t}_{i}})\,\!&amp;lt;/math&amp;gt; ) can be found in rank tables, available in many statistical texts, or they can be estimated by using the Quick Statistical Reference in Weibull++.&lt;br /&gt;
*The &amp;lt;math&amp;gt;{{y}_{i}}\,\!&amp;lt;/math&amp;gt; values were obtained from standard normal distribution area tables by looking up each &amp;lt;math&amp;gt;F(z)\,\!&amp;lt;/math&amp;gt; value and reading the corresponding &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; value ( &amp;lt;math&amp;gt;{{y}_{i}}\,\!&amp;lt;/math&amp;gt; ).  As with the median rank values, these standard normal values can be obtained with the Quick Statistical Reference.&lt;br /&gt;
&lt;br /&gt;
Given the values in the table above, calculate &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; using:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \widehat{b}= &amp;amp; \frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{T}_{i}}{{y}_{i}}-(\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{T}_{i}})(\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}})/14}{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,T_{i}^{2}-{{(\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{T}_{i}})}^{2}}/14} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \widehat{b}= &amp;amp; \frac{365.2711-(630)(0)/14}{40,600-{{(630)}^{2}}/14}=0.02982  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{a}=\overline{y}-\widehat{b}\overline{T}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\widehat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{t}_{i}}}{N}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{a}=\frac{0}{14}-(0.02982)\frac{630}{14}=-1.3419\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{\sigma}=\frac{1}{\hat{b}}=\frac{1}{0.02982}=33.5367\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{\mu }=-\widehat{a}\cdot \widehat{\sigma }=-(-1.3419)\cdot 33.5367\simeq 45\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or &amp;lt;math&amp;gt;\widehat{\mu }=45\,\!&amp;lt;/math&amp;gt; hours.&lt;br /&gt;
&lt;br /&gt;
The correlation coefficient can be estimated using:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{\rho }=0.979\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
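The RRY calculation above can be sketched in a few lines of code. Note that the median ranks here use Benard&#039;s approximation, &amp;lt;math&amp;gt;(i-0.3)/(N+0.4)\,\!&amp;lt;/math&amp;gt;, rather than the exact median ranks in the table, so the estimates agree closely but not to every digit:&lt;br /&gt;

```python
# Rank regression on Y for the normal distribution: fit
# y = a + b*t where y_i = Phi^{-1}(F(t_i)), then recover
# sigma_hat = 1/b_hat and mu_hat = -a_hat * sigma_hat.
from statistics import NormalDist

times = [5, 10, 15, 20, 25, 30, 35, 40, 50, 60, 70, 80, 90, 100]
N = len(times)
F = [(i - 0.3) / (N + 0.4) for i in range(1, N + 1)]  # Benard's approximation
y = [NormalDist().inv_cdf(f) for f in F]              # y_i = Phi^{-1}(F(t_i))

Sx, Sy = sum(times), sum(y)
Sxy = sum(t * yi for t, yi in zip(times, y))
Sxx = sum(t * t for t in times)

b_hat = (Sxy - Sx * Sy / N) / (Sxx - Sx ** 2 / N)
a_hat = Sy / N - b_hat * Sx / N
sigma_hat = 1 / b_hat
mu_hat = -a_hat * sigma_hat
print(round(mu_hat, 1), round(sigma_hat, 2))  # roughly 45 and 33.5
```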
&lt;br /&gt;
The preceding example can be repeated using Weibull++.&lt;br /&gt;
&lt;br /&gt;
*Create a new folio for Times-to-Failure data, and enter the data given in this example.&lt;br /&gt;
*Choose Normal from the Distributions list.&lt;br /&gt;
*Go to the Analysis page and select Rank Regression on Y (RRY).&lt;br /&gt;
*Click the Calculate icon located on the Main page.&lt;br /&gt;
&lt;br /&gt;
[[Image:Normal RRY Setting.png|center|650px| ]]&lt;br /&gt;
&lt;br /&gt;
The probability plot is shown next.&lt;br /&gt;
&lt;br /&gt;
[[Image:Normal RRY Plot.png|center|650px| ]]&lt;br /&gt;
&lt;br /&gt;
===Rank Regression on X===&lt;br /&gt;
As was mentioned previously, performing a rank regression on X requires that a straight line be fitted to a set of data points such that the sum of the squares of the horizontal deviations from the points to the fitted line is minimized.&lt;br /&gt;
&lt;br /&gt;
Again, the first task is to bring our function, the probability of failure function for the normal distribution, into a linear form. This step is exactly the same as in the regression on Y analysis, and all other equations apply as they did for regression on Y. The deviation from the previous analysis begins at the least squares fit step, where in this case we treat &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; as the dependent variable and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; as the independent variable. The best-fitting straight line for the data, for regression on X, is the straight line:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x=\widehat{a}+\widehat{b}y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding equations for &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{y}_{i}}={{\Phi }^{-1}}\left[ F({{t}_{i}}) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{x}_{i}}={{t}_{i}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the &amp;lt;math&amp;gt;F({{t}_{i}})\,\!&amp;lt;/math&amp;gt; values are estimated from the median ranks. Once &amp;lt;math&amp;gt;\widehat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; are obtained, solve the above linear equation for the unknown value of &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; which corresponds to:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;y=-\frac{\widehat{a}}{\widehat{b}}+\frac{1}{\widehat{b}}x\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for the parameters, we get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;a=-\frac{\widehat{a}}{\widehat{b}}=-\frac{\mu }{\sigma }\Rightarrow \mu =\widehat{a}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;b=\frac{1}{\widehat{b}}=\frac{1}{\sigma }\Rightarrow \sigma =\widehat{b}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The correlation coefficient is evaluated as before.&lt;br /&gt;
====RRX Example====&lt;br /&gt;
&#039;&#039;&#039;Normal Distribution RRX Example&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Using the same data set from the [[The_Normal_Distribution#RRY_Example|RRY example given above]], and assuming a normal distribution, estimate the parameters and determine the correlation coefficient, &amp;lt;math&amp;gt;\rho \,\!&amp;lt;/math&amp;gt;, using rank regression on X.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The table constructed for the RRY analysis applies to this example also. Using the values on this table, we get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   \hat{b}= &amp;amp; \frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{t}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{t}_{i}}\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}}{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{14}} \\ &lt;br /&gt;
  \widehat{b}= &amp;amp; \frac{365.2711-(630)(0)/14}{11.3646-{{(0)}^{2}}/14}=32.1411  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{t}_{i}}}{14}-\widehat{b}\frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{a}=\frac{630}{14}-(32.1411)\frac{(0)}{14}=45\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{\sigma }=\widehat{b}=32.1411\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{\mu }=\widehat{a}=45\text{ hours}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The correlation coefficient is obtained as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{\rho }=0.979\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the results for regression on X are not necessarily the same as the results for regression on Y. The only time when the two regressions are the same (i.e., will yield the same equation for a line) is when the data lie perfectly on a straight line.&lt;br /&gt;
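The RRX variant is identical in setup to RRY except that the roles of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; are swapped in the least squares sums, which gives &amp;lt;math&amp;gt;\widehat{\sigma }=\widehat{b}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{\mu }=\widehat{a}\,\!&amp;lt;/math&amp;gt; directly. A sketch (median ranks again via Benard&#039;s approximation, so results match the worked example closely but not exactly):&lt;br /&gt;

```python
# Rank regression on X for the normal distribution: fit x = a + b*y,
# minimizing horizontal deviations; sigma_hat = b_hat, mu_hat = a_hat.
from statistics import NormalDist

times = [5, 10, 15, 20, 25, 30, 35, 40, 50, 60, 70, 80, 90, 100]
N = len(times)
y = [NormalDist().inv_cdf((i - 0.3) / (N + 0.4)) for i in range(1, N + 1)]

Sx, Sy = sum(times), sum(y)
Sxy = sum(t * yi for t, yi in zip(times, y))
Syy = sum(yi * yi for yi in y)

b_hat = (Sxy - Sx * Sy / N) / (Syy - Sy ** 2 / N)  # denominator uses y sums
sigma_hat = b_hat
mu_hat = Sx / N - b_hat * Sy / N                   # a_hat
print(round(mu_hat, 1), round(sigma_hat, 2))       # roughly 45 and 32.1
```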
&lt;br /&gt;
The plot of the Weibull++ solution for this example is shown next.&lt;br /&gt;
&lt;br /&gt;
[[Image:Normal RRX Plot.png|center|650px| ]]&lt;br /&gt;
&lt;br /&gt;
===Maximum Likelihood Estimation===&lt;br /&gt;
As outlined in [[Parameter Estimation]], maximum likelihood estimation works by developing a likelihood function based on the available data and finding the values of the parameter estimates that maximize it. This is typically achieved with iterative methods, which can be rather difficult and time-consuming, particularly when dealing with the three-parameter distribution. Another method of finding the parameter estimates involves taking the partial derivatives of the likelihood function with respect to the parameters, setting the resulting equations equal to zero, and solving them simultaneously. The log-likelihood functions and associated partial derivatives used to determine maximum likelihood estimates for the normal distribution are covered in the [[Appendix:_Log-Likelihood_Equations|Appendix]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Special Note About Bias&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Estimators (i.e., parameter estimates) have properties such as unbiasedness, minimum variance, sufficiency, consistency, squared error consistency, efficiency and completeness, as discussed in Dudewicz and Mishra [[Appendix:_Life_Data_Analysis_References|[7]]] and in Dietrich [[Appendix:_Life_Data_Analysis_References|[5]]]. Numerous books and papers deal with these properties, and a full treatment is beyond the scope of this reference.&lt;br /&gt;
&lt;br /&gt;
However, we would like to briefly address one of these properties, unbiasedness. An estimator is said to be unbiased if the estimator &amp;lt;math&amp;gt;\widehat{\theta }=d({{X}_{1}},{{X}_{2}},...,{{X}_{n}})\,\!&amp;lt;/math&amp;gt; satisfies the condition &amp;lt;math&amp;gt;E\left[ \widehat{\theta } \right]=\theta \,\!&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;\theta \in \Omega .\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;math&amp;gt;E\left[ X \right]\,\!&amp;lt;/math&amp;gt; denotes the expected value of X and is defined (for continuous distributions) by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   E\left[ X \right]= &amp;amp; \int_{\varpi }x\cdot f(x)dx \\ &lt;br /&gt;
  X\in  &amp;amp; \varpi .  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be shown in Dudewicz and Mishra [[Appendix:_Life_Data_Analysis_References|[7]]] and in Dietrich [[Appendix:_Life_Data_Analysis_References|[5]]] that the MLE estimator for the mean of the normal (and lognormal) distribution does satisfy the unbiasedness criteria, or &amp;lt;math&amp;gt;E\left[ \widehat{\mu } \right]\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;=\mu .\,\!&amp;lt;/math&amp;gt; The same is not true for the estimate of the variance &amp;lt;math&amp;gt;\hat{\sigma }^{2}\,\!&amp;lt;/math&amp;gt;. The maximum likelihood estimate for the variance for the normal distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{\sigma }^{2}=\frac{1}{N}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{t}_{i}}-\bar{T})}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
with a standard deviation of: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\hat{\sigma }}}=\sqrt{\frac{1}{N}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{t}_{i}}-\bar{T})}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These estimates, however, have been shown to be biased. It can be shown in Dudewicz and Mishra [[Appendix:_Life_Data_Analysis_References|[7]]] and in Dietrich [[Appendix:_Life_Data_Analysis_References|[5]]] that the unbiased estimate of the variance and standard deviation for complete data is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \hat{\sigma }^{2}= &amp;amp; \left[ \frac{N}{N-1} \right]\cdot \left[ \frac{1}{N}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{t}_{i}}-\bar{T})}^{2}} \right]=\frac{1}{N-1}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{t}_{i}}-\bar{T})}^{2}} \\ &lt;br /&gt;
  {{{\hat{\sigma }}}}= &amp;amp; \sqrt{\left[ \frac{N}{N-1} \right]\cdot \left[ \frac{1}{N}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{t}_{i}}-\bar{T})}^{2}} \right]} \\ &lt;br /&gt;
  = &amp;amp; \sqrt{\frac{1}{N-1}\underset{i=1}{\overset{N}{\mathop \sum }}\,{{({{t}_{i}}-\bar{T})}^{2}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that for larger values of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sqrt{\left[ N/(N-1) \right]}\,\!&amp;lt;/math&amp;gt; tends to 1.&lt;br /&gt;
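The bias correction can be sketched with the example&#039;s times-to-failure (complete data), contrasting the two estimates:&lt;br /&gt;

```python
# Biased (MLE) vs. unbiased standard deviation estimates for complete
# data: divide the sum of squared deviations by N or by N - 1.
import math

times = [5, 10, 15, 20, 25, 30, 35, 40, 50, 60, 70, 80, 90, 100]
N = len(times)
mean = sum(times) / N
ss = sum((t - mean) ** 2 for t in times)

sigma_biased = math.sqrt(ss / N)          # MLE: divide by N
sigma_unbiased = math.sqrt(ss / (N - 1))  # corrected: divide by N - 1
print(round(sigma_biased, 2), round(sigma_unbiased, 2))  # 29.58 and 30.7
```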
&lt;br /&gt;
The Use Unbiased Std on Normal Data option on the Calculations page of the User Setup controls whether this bias correction is applied when estimating the parameters.&lt;br /&gt;
&lt;br /&gt;
When this option is selected, Weibull++ returns the unbiased standard deviation as defined above. Note that this applies only to complete data sets; for all other data types, Weibull++ returns the biased standard deviation, regardless of the selection status of this option. The next figure shows this setting in Weibull++.&lt;br /&gt;
&lt;br /&gt;
[[Image:Weibull Calculation User Setting.png|center|550px| ]]&lt;br /&gt;
&lt;br /&gt;
====MLE Example====&lt;br /&gt;
&#039;&#039;&#039;Normal Distribution MLE Example&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Using the same data set from the [[The_Normal_Distribution#RRY_Example|RRY and RRX examples given above]] and assuming a normal distribution, estimate the parameters using the MLE method.&lt;br /&gt;
  &lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this example we have non-grouped data without suspensions and without interval data. The partial derivatives of the normal log-likelihood function, &amp;lt;math&amp;gt;\Lambda ,\,\!&amp;lt;/math&amp;gt; are given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   \frac{\partial \Lambda }{\partial \mu }= &amp;amp; \frac{1}{{{\sigma }^{2}}}\underset{i=1}{\overset{14}{\mathop \sum }}\,({{t}_{i}}-\mu )=0 \\ &lt;br /&gt;
  \frac{\partial \Lambda }{\partial \sigma }= &amp;amp; \underset{i=1}{\overset{14}{\mathop \sum }}\,\left( \frac{{{\left( {{t}_{i}}-\mu  \right)}^{2}}}{{{\sigma }^{3}}}-\frac{1}{\sigma } \right)=0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(The derivations of these equations are presented in the [[Appendix:_Log-Likelihood_Equations|appendix]].) Substituting the values of &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; and solving the above system simultaneously, we get &amp;lt;math&amp;gt;\widehat{\sigma }=29.58\,\!&amp;lt;/math&amp;gt; hours and &amp;lt;math&amp;gt;\widehat{\mu }=45\,\!&amp;lt;/math&amp;gt; hours.&lt;br /&gt;
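For complete (non-censored) data such as this example, the system above has a closed-form solution: &amp;lt;math&amp;gt;\widehat{\mu }\,\!&amp;lt;/math&amp;gt; is the sample mean and &amp;lt;math&amp;gt;\widehat{\sigma }\,\!&amp;lt;/math&amp;gt; is the root mean squared deviation. A sketch with the example data:&lt;br /&gt;

```python
# Closed-form MLE for the normal distribution with complete data:
# mu_hat = mean(t_i), sigma_hat = sqrt((1/N) * sum((t_i - mu_hat)^2)).
import math

times = [5, 10, 15, 20, 25, 30, 35, 40, 50, 60, 70, 80, 90, 100]
N = len(times)
mu_hat = sum(times) / N
sigma_hat = math.sqrt(sum((t - mu_hat) ** 2 for t in times) / N)
print(mu_hat, round(sigma_hat, 2))  # 45.0 and 29.58
```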
&lt;br /&gt;
The Fisher matrix is: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   \widehat{Var}\left( \widehat{\mu } \right)=62.5000 &amp;amp; {} &amp;amp; \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right)=0.0000  \\&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   \widehat{Cov}\left( \widehat{\mu },\widehat{\sigma } \right)=0.0000 &amp;amp; {} &amp;amp; \widehat{Var}\left( \widehat{\sigma } \right)=31.2500  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The plot of the Weibull++ solution for this example is shown next.&lt;br /&gt;
&lt;br /&gt;
[[Image:Normal MLE Plot.png|center|650px| ]]&lt;br /&gt;
&lt;br /&gt;
==Confidence Bounds==&lt;br /&gt;
The method used by the application in estimating the different types of confidence bounds for normally distributed data is presented in this section. The complete derivations were presented in detail (for a general function) in [[Confidence Bounds]].&lt;br /&gt;
&lt;br /&gt;
===Exact Confidence Bounds===&lt;br /&gt;
There are closed-form solutions for exact confidence bounds for both the normal and lognormal distributions. However, these closed-form solutions only apply to complete data. To achieve consistent application across all possible data types, Weibull++ always uses the Fisher matrix method or likelihood ratio method in computing confidence intervals.&lt;br /&gt;
&lt;br /&gt;
===Fisher Matrix Confidence Bounds===&lt;br /&gt;
====Bounds on the Parameters====&lt;br /&gt;
The lower and upper bounds on the mean, &amp;lt;math&amp;gt;\widehat{\mu }\,\!&amp;lt;/math&amp;gt;, are estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\mu }_{U}}= &amp;amp; \widehat{\mu }+{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{\mu }_{L}}= &amp;amp; \widehat{\mu }-{{K}_{\alpha }}\sqrt{Var(\widehat{\mu })}\text{ (lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since the standard deviation, &amp;lt;math&amp;gt;{{\widehat{\sigma }}}\,\!&amp;lt;/math&amp;gt;, must be positive, &amp;lt;math&amp;gt;\ln ({{\widehat{\sigma }}})\,\!&amp;lt;/math&amp;gt; is treated as normally distributed, and the bounds are estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\sigma }_{U}}= &amp;amp; {{\widehat{\sigma }}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}})}}{{{\widehat{\sigma }}}}}}\text{ (upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{\sigma }_{L}}= &amp;amp; \frac{{{\widehat{\sigma }}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}})}}{{{\widehat{\sigma }}}}}}}\text{ (lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is defined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt; is the confidence level, then &amp;lt;math&amp;gt;\alpha =\tfrac{1-\delta }{2}\,\!&amp;lt;/math&amp;gt; for the two-sided bounds and &amp;lt;math&amp;gt;\alpha =1-\delta \,\!&amp;lt;/math&amp;gt; for the one-sided bounds.&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;\widehat{\mu }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\widehat{\sigma }}}\,\!&amp;lt;/math&amp;gt; are estimated from the Fisher matrix, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left( \begin{matrix}&lt;br /&gt;
   \widehat{Var}\left( \widehat{\mu } \right) &amp;amp; \widehat{Cov}\left( \widehat{\mu },{{\widehat{\sigma }}} \right)  \\&lt;br /&gt;
   \widehat{Cov}\left( \widehat{\mu },{{\widehat{\sigma }}} \right) &amp;amp; \widehat{Var}\left( {{\widehat{\sigma }}} \right)  \\&lt;br /&gt;
\end{matrix} \right)=\left( \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\mu }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial {{\sigma }}}  \\&lt;br /&gt;
   {} &amp;amp; {}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \mu \partial {{\sigma }}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma^{2}}  \\&lt;br /&gt;
\end{matrix} \right)_{\mu =\widehat{\mu },\sigma =\widehat{\sigma }}^{-1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Lambda \,\!&amp;lt;/math&amp;gt; is the log-likelihood function of the normal distribution, described in &lt;br /&gt;
[[Parameter Estimation]] and [[Appendix:_Log-Likelihood_Equations|Appendix D]].&lt;br /&gt;
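The parameter-bound formulas above can be sketched numerically. The snippet below, a minimal illustration rather than the Weibull++ implementation, uses the variance estimates from the MLE example (&amp;lt;math&amp;gt;\widehat{\mu }=45\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{\sigma }=29.58\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Var(\widehat{\mu })=62.5\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Var(\widehat{\sigma })=31.25\,\!&amp;lt;/math&amp;gt;); `fisher_parameter_bounds` is a hypothetical helper name:

```python
import math
from statistics import NormalDist

def fisher_parameter_bounds(mu_hat, sigma_hat, var_mu, var_sigma, delta=0.90):
    """Two-sided Fisher matrix bounds on mu and sigma at confidence delta.
    K_alpha is the standard normal quantile with upper-tail area alpha;
    sigma is bounded on the log scale so the bounds stay positive."""
    alpha = (1.0 - delta) / 2.0                # two-sided bounds
    k = NormalDist().inv_cdf(1.0 - alpha)      # K_alpha
    mu_l = mu_hat - k * math.sqrt(var_mu)
    mu_u = mu_hat + k * math.sqrt(var_mu)
    factor = math.exp(k * math.sqrt(var_sigma) / sigma_hat)
    return (mu_l, mu_u), (sigma_hat / factor, sigma_hat * factor)

# Values from the MLE example above:
(mu_l, mu_u), (sig_l, sig_u) = fisher_parameter_bounds(45.0, 29.58, 62.5, 31.25)
```

Note that the bounds on &amp;lt;math&amp;gt;\widehat{\mu }\,\!&amp;lt;/math&amp;gt; are symmetric about the estimate, while the bounds on &amp;lt;math&amp;gt;\widehat{\sigma }\,\!&amp;lt;/math&amp;gt; are symmetric on the log scale.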
&lt;br /&gt;
====Bounds on Reliability====&lt;br /&gt;
The reliability of the normal distribution is: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(t;\hat{\mu },{{\hat{\sigma }}})=\int_{t}^{\infty }\frac{1}{{{\widehat{\sigma }}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\widehat{\mu }}{{{\widehat{\sigma }}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Letting &amp;lt;math&amp;gt;\widehat{z}=\tfrac{t-\widehat{\mu }}{{{\widehat{\sigma }}}}\,\!&amp;lt;/math&amp;gt;, the above equation becomes: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{R}(\widehat{z})=\int_{\widehat{z}(t)}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The bounds on &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; are estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{z}_{U}}= &amp;amp; \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ &lt;br /&gt;
 &amp;amp; {{z}_{L}}= &amp;amp; \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Var(\widehat{z})={{\left( \frac{\partial \hat{z}}{\partial \mu } \right)}^{2}}Var(\widehat{\mu })+{{\left( \frac{\partial \hat{z}}{\partial {{\sigma }}} \right)}^{2}}Var({{\widehat{\sigma }}})+2\left( \frac{\partial \hat{z}}{\partial \mu } \right)\left( \frac{\partial \hat{z}}{\partial {{\sigma }}} \right)Cov\left( \widehat{\mu },{{\widehat{\sigma }}} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Var(\widehat{z})=\frac{1}{\widehat{\sigma }^{2}}\left[ Var(\widehat{\mu })+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}})+2\cdot \widehat{z}\cdot Cov\left( \widehat{\mu },{{\widehat{\sigma }}} \right) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
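The chain above (compute &amp;lt;math&amp;gt;\widehat{z}\,\!&amp;lt;/math&amp;gt;, propagate its variance, bound &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt;, then map back through the standard normal survival function) can be sketched as follows, again using the variance estimates from the example above; `reliability_bounds` is a hypothetical helper name:

```python
import math
from statistics import NormalDist

def reliability_bounds(t, mu, sigma, var_mu, var_sigma, cov, delta=0.90):
    """Fisher matrix bounds on R(t) for a normal model."""
    nd = NormalDist()
    alpha = (1.0 - delta) / 2.0
    k = nd.inv_cdf(1.0 - alpha)
    z = (t - mu) / sigma
    var_z = (var_mu + z * z * var_sigma + 2.0 * z * cov) / sigma ** 2
    z_l = z - k * math.sqrt(var_z)
    z_u = z + k * math.sqrt(var_z)
    # R = 1 - Phi(z): the upper reliability bound comes from the lower z bound
    return 1.0 - nd.cdf(z_u), 1.0 - nd.cdf(z_l)

r_l, r_u = reliability_bounds(60.0, 45.0, 29.58, 62.5, 31.25, 0.0)
```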
&lt;br /&gt;
====Bounds on Time====&lt;br /&gt;
The bounds around time for a given normal percentile (unreliability) are estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\hat{T}(\widehat{\mu },{{\widehat{\sigma }}})=\widehat{\mu }+z\cdot {{\widehat{\sigma }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z={{\Phi }^{-1}}\left[ F(T) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to calculate the variance of &amp;lt;math&amp;gt;\hat{T}(\widehat{\mu },{{\widehat{\sigma }}})\,\!&amp;lt;/math&amp;gt; or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\hat{T})= &amp;amp; {{\left( \frac{\partial \hat{T}}{\partial \mu } \right)}^{2}}Var(\widehat{\mu })+{{\left( \frac{\partial \hat{T}}{\partial {{\sigma }}} \right)}^{2}}Var({{\widehat{\sigma }}}) \\ &lt;br /&gt;
   &amp;amp; +2\left( \frac{\partial \hat{T}}{\partial \mu } \right)\left( \frac{\partial \hat{T}}{\partial {{\sigma }}} \right)Cov\left( \widehat{\mu },{{\widehat{\sigma }}} \right) \\ &lt;br /&gt;
  Var(\hat{T})= &amp;amp; Var(\widehat{\mu })+{{z}^{2}}Var({{\widehat{\sigma }}})+2\cdot z\cdot Cov\left( \widehat{\mu },{{\widehat{\sigma }}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; \hat{T}+{{K}_{\alpha }}\sqrt{Var(\hat{T})}\text{ (upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; \hat{T}-{{K}_{\alpha }}\sqrt{Var(\hat{T})}\text{ (lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
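These steps can be sketched in the same style as the reliability bounds; `time_bounds` is a hypothetical helper name, and the variance estimates are again taken from the MLE example above:

```python
import math
from statistics import NormalDist

def time_bounds(R, mu, sigma, var_mu, var_sigma, cov, delta=0.90):
    """Fisher matrix bounds on the time at a given reliability R,
    via T = mu + z*sigma with z = Phi^{-1}(1 - R)."""
    nd = NormalDist()
    alpha = (1.0 - delta) / 2.0
    k = nd.inv_cdf(1.0 - alpha)
    z = nd.inv_cdf(1.0 - R)               # z for unreliability F = 1 - R
    t_hat = mu + z * sigma
    var_t = var_mu + z * z * var_sigma + 2.0 * z * cov
    half = k * math.sqrt(var_t)
    return t_hat - half, t_hat, t_hat + half

t_l, t_hat, t_u = time_bounds(0.40, 45.0, 29.58, 62.5, 31.25, 0.0)
```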
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Likelihood Ratio Confidence Bounds===&lt;br /&gt;
====Bounds on Parameters====&lt;br /&gt;
As covered in [[Confidence Bounds]], the likelihood confidence bounds are calculated by finding values for &amp;lt;math&amp;gt;{{\theta }_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\theta }_{2}}\,\!&amp;lt;/math&amp;gt; that satisfy: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;-2\cdot \text{ln}\left( \frac{L({{\theta }_{1}},{{\theta }_{2}})}{L({{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}})} \right)=\chi _{\alpha ;1}^{2}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This equation can be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L({{\theta }_{1}},{{\theta }_{2}})=L({{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}})\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For complete data, the likelihood formula for the normal distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(\mu ,\sigma )=\underset{i=1}{\overset{N}{\mathop \prod }}\,f({{t}_{i}};\mu ,\sigma )=\underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{\sigma \cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{{{t}_{i}}-\mu }{\sigma } \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; values represent the original time to failure data.  For a given value of &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;, values for &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt; can be found which represent the maximum and minimum values that satisfy the above likelihood ratio equation. These represent the confidence bounds for the parameters at a confidence level &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt;,  where &amp;lt;math&amp;gt;\alpha =\delta \,\!&amp;lt;/math&amp;gt; for two-sided bounds and &amp;lt;math&amp;gt;\alpha =2\delta -1\,\!&amp;lt;/math&amp;gt; for one-sided.&lt;br /&gt;
&lt;br /&gt;
=====Example: LR Bounds on Parameters=====&lt;br /&gt;
Five units are put on a reliability test and experience failures at 12, 24, 28, 34, and 46 hours. Assuming a normal distribution, the MLE parameter estimates are calculated to be &amp;lt;math&amp;gt;\widehat{\mu }=28.8\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{\sigma }=11.2143.\,\!&amp;lt;/math&amp;gt; Calculate the two-sided 80% confidence bounds on these parameters using the likelihood ratio method.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The first step is to calculate the likelihood function for the parameter estimates: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L(\widehat{\mu },\widehat{\sigma })= &amp;amp; \underset{i=1}{\overset{N}{\mathop \prod }}\,f({{t}_{i}};\widehat{\mu },\widehat{\sigma })=\underset{i=1}{\overset{5}{\mathop \prod }}\,\frac{1}{\widehat{\sigma }\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{{{t}_{i}}-\widehat{\mu }}{\widehat{\sigma }} \right)}^{2}}}} \\ &lt;br /&gt;
  L(\widehat{\mu },\widehat{\sigma })= &amp;amp; \underset{i=1}{\overset{5}{\mathop \prod }}\,\frac{1}{11.2143\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{{{t}_{i}}-28.8}{11.2143} \right)}^{2}}}} \\ &lt;br /&gt;
  L(\widehat{\mu },\widehat{\sigma })= &amp;amp; 4.676897\times {{10}^{-9}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; are the original time-to-failure data points. We can now rearrange the likelihood ratio equation to the form: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(\mu ,\sigma )-L(\widehat{\mu },\widehat{\sigma })\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since our specified confidence level, &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt;, is 80%, we can calculate the value of the chi-squared statistic, &amp;lt;math&amp;gt;\chi _{0.8;1}^{2}=1.642374.\,\!&amp;lt;/math&amp;gt; We can now substitute this information into the equation: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L(\mu ,\sigma )-L(\widehat{\mu },\widehat{\sigma })\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}= &amp;amp; 0, \\ &lt;br /&gt;
 \\ &lt;br /&gt;
  L(\mu ,\sigma )-4.676897\times {{10}^{-9}}\cdot {{e}^{\tfrac{-1.642374}{2}}}= &amp;amp; 0, \\ &lt;br /&gt;
  \\ &lt;br /&gt;
  L(\mu ,\sigma )-2.057410\times {{10}^{-9}}= &amp;amp; 0.  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It now remains to find the values of &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt; which satisfy this equation. This is an iterative process that requires setting the value of &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; and finding the appropriate values of &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt;, and vice versa.&lt;br /&gt;
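For complete data this search is straightforward to sketch numerically. The following minimal illustration (not the Weibull++ implementation) fixes &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; and brackets and bisects the two &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt; values on the 80% likelihood contour, using the failure times and estimates from this example; `sigma_roots` is a hypothetical helper name:

```python
import math

TIMES = [12.0, 24.0, 28.0, 34.0, 46.0]
MU_HAT, SIGMA_HAT = 28.8, 11.2143
CHI2_80_1 = 1.642374              # chi-squared(0.80; 1 degree of freedom)

def log_lik(mu, sigma):
    """Complete-data normal log-likelihood."""
    n = len(TIMES)
    return (-n * math.log(sigma * math.sqrt(2.0 * math.pi))
            - sum((t - mu) ** 2 for t in TIMES) / (2.0 * sigma ** 2))

def sigma_roots(mu, lo=0.5, hi=60.0, steps=2000):
    """For a fixed mu, find the sigma values where the log-likelihood
    crosses the target contour, by scanning for sign changes and bisecting."""
    target = log_lik(MU_HAT, SIGMA_HAT) - CHI2_80_1 / 2.0
    g = lambda s: log_lik(mu, s) - target
    roots = []
    prev = lo
    for i in range(1, steps + 1):
        cur = lo + (hi - lo) * i / steps
        if g(prev) * g(cur) < 0:          # bracketed a crossing
            a, b = prev, cur
            for _ in range(60):           # bisection refinement
                m = 0.5 * (a + b)
                if g(a) * g(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
        prev = cur
    return roots

roots = sigma_roots(26.0)   # the table row for mu = 26.0 lists 8.340 and 17.534
```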
&lt;br /&gt;
The following table gives the values of &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt; based on given values of &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \text{ }\!\!\mu\!\!\text{ } &amp;amp; {{\text{ }\!\!\sigma\!\!\text{ }}_{\text{1}}} &amp;amp; {{\text{ }\!\!\sigma\!\!\text{ }}_{\text{2}}} &amp;amp; \text{ }\!\!\mu\!\!\text{ } &amp;amp; {{\text{ }\!\!\sigma\!\!\text{ }}_{\text{1}}} &amp;amp; {{\text{ }\!\!\sigma\!\!\text{ }}_{\text{2}}}  \\&lt;br /&gt;
   \text{22}\text{.0} &amp;amp; \text{12}\text{.045} &amp;amp; \text{14}\text{.354} &amp;amp; \text{29}\text{.0} &amp;amp; \text{7}\text{.849} &amp;amp; \text{17}\text{.909}  \\&lt;br /&gt;
   \text{22}\text{.5} &amp;amp; \text{11}\text{.004} &amp;amp; \text{15}\text{.310} &amp;amp; \text{29}\text{.5} &amp;amp; \text{7}\text{.876} &amp;amp; \text{17}\text{.889}  \\&lt;br /&gt;
   \text{23}\text{.0} &amp;amp; \text{10}\text{.341} &amp;amp; \text{15}\text{.894} &amp;amp; \text{30}\text{.0} &amp;amp; \text{7}\text{.935} &amp;amp; \text{17}\text{.844}  \\&lt;br /&gt;
   \text{23}\text{.5} &amp;amp; \text{9}\text{.832} &amp;amp; \text{16}\text{.328} &amp;amp; \text{30}\text{.5} &amp;amp; \text{8}\text{.025} &amp;amp; \text{17}\text{.776}  \\&lt;br /&gt;
   \text{24}\text{.0} &amp;amp; \text{9}\text{.418} &amp;amp; \text{16}\text{.673} &amp;amp; \text{31}\text{.0} &amp;amp; \text{8}\text{.147} &amp;amp; \text{17}\text{.683}  \\&lt;br /&gt;
   \text{24}\text{.5} &amp;amp; \text{9}\text{.074} &amp;amp; \text{16}\text{.954} &amp;amp; \text{31}\text{.5} &amp;amp; \text{8}\text{.304} &amp;amp; \text{17}\text{.562}  \\&lt;br /&gt;
   \text{25}\text{.0} &amp;amp; \text{8}\text{.784} &amp;amp; \text{17}\text{.186} &amp;amp; \text{32}\text{.0} &amp;amp; \text{8}\text{.498} &amp;amp; \text{17}\text{.411}  \\&lt;br /&gt;
   \text{25}\text{.5} &amp;amp; \text{8}\text{.542} &amp;amp; \text{17}\text{.377} &amp;amp; \text{32}\text{.5} &amp;amp; \text{8}\text{.732} &amp;amp; \text{17}\text{.227}  \\&lt;br /&gt;
   \text{26}\text{.0} &amp;amp; \text{8}\text{.340} &amp;amp; \text{17}\text{.534} &amp;amp; \text{33}\text{.0} &amp;amp; \text{9}\text{.012} &amp;amp; \text{17}\text{.004}  \\&lt;br /&gt;
   \text{26}\text{.5} &amp;amp; \text{8}\text{.176} &amp;amp; \text{17}\text{.661} &amp;amp; \text{33}\text{.5} &amp;amp; \text{9}\text{.344} &amp;amp; \text{16}\text{.734}  \\&lt;br /&gt;
   \text{27}\text{.0} &amp;amp; \text{8}\text{.047} &amp;amp; \text{17}\text{.760} &amp;amp; \text{34}\text{.0} &amp;amp; \text{9}\text{.742} &amp;amp; \text{16}\text{.403}  \\&lt;br /&gt;
   \text{27}\text{.5} &amp;amp; \text{7}\text{.950} &amp;amp; \text{17}\text{.833} &amp;amp; \text{34}\text{.5} &amp;amp; \text{10}\text{.229} &amp;amp; \text{15}\text{.990}  \\&lt;br /&gt;
   \text{28}\text{.0} &amp;amp; \text{7}\text{.885} &amp;amp; \text{17}\text{.882} &amp;amp; \text{35}\text{.0} &amp;amp; \text{10}\text{.854} &amp;amp; \text{15}\text{.444}  \\&lt;br /&gt;
   \text{28}\text{.5} &amp;amp; \text{7}\text{.852} &amp;amp; \text{17}\text{.907} &amp;amp; \text{35}\text{.5} &amp;amp; \text{11}\text{.772} &amp;amp; \text{14}\text{.609}  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This data set is represented graphically in the following contour plot:&lt;br /&gt;
&lt;br /&gt;
[[Image:WB.9 normal parameter contour plot.png|center|450px| ]]&lt;br /&gt;
&lt;br /&gt;
(Note that this plot is generated with degrees of freedom &amp;lt;math&amp;gt;k=1\,\!&amp;lt;/math&amp;gt;, as we are only determining bounds on one parameter. The contour plots generated in Weibull++ are done with degrees of freedom &amp;lt;math&amp;gt;k=2\,\!&amp;lt;/math&amp;gt;, for use in comparing both parameters simultaneously.) As can be determined from the table, the lowest calculated value for &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt; is 7.849, while the highest is 17.909. These represent the two-sided 80% confidence limits on this parameter. Since the tabulated solutions do not extend below &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; values of 22 or above 35.5, these values serve as initial estimates of the two-sided 80% confidence limits on this parameter. In order to obtain more accurate values for the confidence limits on &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt;, we can perform the same procedure as before, this time finding the two values of &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; that correspond with a given value of &amp;lt;math&amp;gt;\sigma .\,\!&amp;lt;/math&amp;gt; Using this method, we find that the two-sided 80% confidence limits on &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; are 21.807 and 35.793, which are close to the initial estimates of 22 and 35.5.&lt;br /&gt;
&lt;br /&gt;
====Bounds on Time and Reliability====&lt;br /&gt;
In order to calculate the bounds on a time estimate for a given reliability, or on a reliability estimate for a given time, the likelihood function needs to be rewritten in terms of one parameter and time/reliability, so that the maximum and minimum values of the time can be observed as the parameter is varied. This can be accomplished by substituting a form of the normal reliability equation into the likelihood function. The normal reliability equation can be written as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R=1-\Phi \left( \frac{t-\mu }{\sigma } \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be rearranged to the form: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\mu =t-\sigma \cdot {{\Phi }^{-1}}(1-R)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\Phi }^{-1}}\,\!&amp;lt;/math&amp;gt; is the inverse standard normal. This equation can now be substituted into the likelihood ratio equation to produce an equation in terms of &amp;lt;math&amp;gt;\sigma ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt;:  &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(\sigma ,t/R)=\underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{\sigma \cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{{{t}_{i}}-\left[ t-\sigma \cdot {{\Phi }^{-1}}(1-R) \right]}{\sigma } \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The unknown parameter &amp;lt;math&amp;gt;t/R\,\!&amp;lt;/math&amp;gt; depends on what type of bounds are being determined.  If one is trying to determine the bounds on time for a given reliability, then &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is a known constant and &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; is the unknown parameter. Conversely, if one is trying to determine the bounds on reliability for a given time, then &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; is a known constant and &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the unknown parameter. The likelihood ratio equation can be used to solve the values of interest.&lt;br /&gt;
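The bounds-on-time case can be sketched numerically. For a fixed &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt;, the complete-data log-likelihood is quadratic in &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; (since &amp;lt;math&amp;gt;\sum{({{t}_{i}}-\mu )^{2}}\,\!&amp;lt;/math&amp;gt; decomposes around the sample mean), so the two admissible &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; values on the contour can be solved directly and mapped back to time. The sketch below uses the data from the parameter bounds example; `lr_time_bounds` is a hypothetical helper name:

```python
import math
from statistics import NormalDist

TIMES = [12.0, 24.0, 28.0, 34.0, 46.0]
MU_HAT, SIGMA_HAT = 28.8, 11.2143
CHI2_80_1 = 1.642374

def log_lik(mu, sigma):
    n = len(TIMES)
    return (-n * math.log(sigma * math.sqrt(2.0 * math.pi))
            - sum((t - mu) ** 2 for t in TIMES) / (2.0 * sigma ** 2))

def lr_time_bounds(R, sig_lo=7.0, sig_hi=19.0, steps=400):
    """Scan sigma; at each value solve L(sigma, t | R) = target for the two
    admissible mu values (a quadratic, since sum((t_i - mu)^2) =
    ss + n*(tbar - mu)^2), map back to t = mu + sigma*z, track extremes."""
    n = len(TIMES)
    tbar = sum(TIMES) / n
    ss = sum((t - tbar) ** 2 for t in TIMES)
    z = NormalDist().inv_cdf(1.0 - R)
    target = log_lik(MU_HAT, SIGMA_HAT) - CHI2_80_1 / 2.0
    t_min, t_max = float("inf"), float("-inf")
    for i in range(steps + 1):
        s = sig_lo + (sig_hi - sig_lo) * i / steps
        q = 2.0 * s * s * (-target - n * math.log(s * math.sqrt(2.0 * math.pi)))
        disc = (q - ss) / n
        if disc < 0.0:
            continue                      # this sigma lies outside the contour
        half = math.sqrt(disc)
        for mu in (tbar - half, tbar + half):
            t = mu + s * z                # back to time via mu = t - sigma*z
            t_min, t_max = min(t_min, t), max(t_max, t)
    return t_min, t_max

t_lo, t_hi = lr_time_bounds(0.40)
```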
&lt;br /&gt;
=====Example: LR Bounds on Time=====&lt;br /&gt;
For the same data set given above in [[The_Normal_Distribution#Example:_LR_Bounds_on_Parameters|the parameter bounds example]], determine the two-sided 80% confidence bounds on the time estimate for a reliability of 40%.  The ML estimate for the time at &amp;lt;math&amp;gt;R(t)=40%\,\!&amp;lt;/math&amp;gt; is 31.637.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this example, we are trying to determine the two-sided 80% confidence bounds on the time estimate of 31.637. This is accomplished by substituting &amp;lt;math&amp;gt;R=0.40\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha =0.8\,\!&amp;lt;/math&amp;gt; into the likelihood ratio equation for the normal distribution, and varying &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt; until the maximum and minimum values of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; are found. The following table gives the values of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; based on given values of &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[Image:tabletbasedonsigma.png|center|350px| ]]&lt;br /&gt;
&lt;br /&gt;
This data set is represented graphically in the following contour plot:&lt;br /&gt;
&lt;br /&gt;
[[Image:WB.9 time v sigma contour.png|center|500px| ]]&lt;br /&gt;
&lt;br /&gt;
As can be determined from the table, the lowest calculated value for &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; is 25.046, while the highest is 39.250. These represent the 80% confidence limits on the time at which reliability is equal to 40%.&lt;br /&gt;
&lt;br /&gt;
=====Example: LR Bounds on Reliability=====&lt;br /&gt;
For the same data set given above in [[The_Normal_Distribution#Example:_LR_Bounds_on_Parameters|the parameter bounds and time bounds examples]], determine the two-sided 80% confidence bounds on the reliability estimate for &amp;lt;math&amp;gt;t=30\,\!&amp;lt;/math&amp;gt;.  The ML estimate for the reliability at &amp;lt;math&amp;gt;t=30\,\!&amp;lt;/math&amp;gt; is 45.739%.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this example, we are trying to determine the two-sided 80% confidence bounds on the reliability estimate of 45.739%. This is accomplished by substituting &amp;lt;math&amp;gt;t=30\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha =0.8\,\!&amp;lt;/math&amp;gt; into the likelihood ratio equation for the normal distribution, and varying &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt; until the maximum and minimum values of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; are found. The following table gives the values of &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; based on given values of &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[Image:tablerbasedonsigma.png|center|350px| ]]&lt;br /&gt;
&lt;br /&gt;
This data set is represented graphically in the following contour plot:&lt;br /&gt;
&lt;br /&gt;
[[Image:WB.9 reliability v sigma.png|center|500px| ]] &lt;br /&gt;
&lt;br /&gt;
As can be determined from the table, the lowest calculated value for &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is 24.776%, while the highest is 68.000%. These represent the 80% two-sided confidence limits on the reliability at &amp;lt;math&amp;gt;t=30\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Bayesian Confidence Bounds===&lt;br /&gt;
====Bounds on Parameters====&lt;br /&gt;
From [[Confidence Bounds]], we know that the marginal posterior distribution of &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; can be written as:  &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   f(\mu |Data)= &amp;amp; \int_{0}^{\infty }f(\mu ,\sigma |Data)d\sigma  \\ &lt;br /&gt;
  = &amp;amp; \frac{\int_{0}^{\infty }L(Data|\mu ,\sigma )\varphi (\mu )\varphi (\sigma )d\sigma }{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(Data|\mu ,\sigma )\varphi (\mu )\varphi (\sigma )d\mu d\sigma }  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\varphi (\sigma )\,\!&amp;lt;/math&amp;gt; = &amp;lt;math&amp;gt;\tfrac{1}{\sigma }\,\!&amp;lt;/math&amp;gt; is the non-informative prior of &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\varphi (\mu )\,\!&amp;lt;/math&amp;gt; is a uniform distribution from - &amp;lt;math&amp;gt;\infty \,\!&amp;lt;/math&amp;gt; to + &amp;lt;math&amp;gt;\infty \,\!&amp;lt;/math&amp;gt;, the non-informative prior of &amp;lt;math&amp;gt;\mu .\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Using the above prior distributions, &amp;lt;math&amp;gt;f(\mu |Data)\,\!&amp;lt;/math&amp;gt; can be rewritten as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(\mu |Data)=\frac{\int_{0}^{\infty }L(Data|\mu ,\sigma )\tfrac{1}{\sigma }d\sigma }{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(Data|\mu ,\sigma )\tfrac{1}{\sigma }d\mu d\sigma }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The one-sided upper bound of  &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=P(\mu \le {{\mu }_{U}})=\int_{-\infty }^{{{\mu }_{U}}}f(\mu |Data)d\mu \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The one-sided lower bound of &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;1-CL=P(\mu \le {{\mu }_{L}})=\int_{-\infty }^{{{\mu }_{L}}}f(\mu |Data)d\mu \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The two-sided bounds of &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=P({{\mu }_{L}}\le \mu \le {{\mu }_{U}})=\int_{{{\mu }_{L}}}^{{{\mu }_{U}}}f(\mu |Data)d\mu \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same method can be used to obtain the bounds of &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Bounds on Time (Type 1)====&lt;br /&gt;
The reliable life for the normal distribution is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
T=\mu +\sigma {{\Phi }^{-1}}(1-R) &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The one-sided upper bound on time is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(T\le {{T}_{U}})=\underset{}{\overset{}{\mathop{\Pr }}}\,(\mu +\sigma {{\Phi }^{-1}}(1-R)\le {{T}_{U}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above equation can be rewritten in terms of &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(\mu \le {{T}_{U}}-\sigma {{\Phi }^{-1}}(1-R))\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the posterior distribution of &amp;lt;math&amp;gt;\mu\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\frac{\int_{0}^{\infty }\int_{-\infty }^{{{T}_{U}}-\sigma {{\Phi }^{-1}}(1-R)}L(\sigma ,\mu )\tfrac{1}{\sigma }d\mu d\sigma }{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(\sigma ,\mu )\tfrac{1}{\sigma }d\mu d\sigma }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same method can be applied for one-sided lower bounds and two-sided bounds on time. &lt;br /&gt;
&lt;br /&gt;
====Bounds on Reliability (Type 2)====&lt;br /&gt;
The one-sided upper bound on reliability is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(R\le {{R}_{U}})=\underset{}{\overset{}{\mathop{\Pr }}}\,(\mu \le T-\sigma {{\Phi }^{-1}}(1-{{R}_{U}}))\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the posterior distribution of &amp;lt;math&amp;gt;\mu\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;CL=\frac{\int_{0}^{\infty }\int_{-\infty }^{T-\sigma {{\Phi }^{-1}}(1-{{R}_{U}})}L(\sigma ,\mu )\tfrac{1}{\sigma }d\mu d\sigma }{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(\sigma ,\mu )\tfrac{1}{\sigma }d\mu d\sigma }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same method can be used to calculate the one-sided lower bounds and the two-sided bounds on reliability.&lt;br /&gt;
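The posterior ratios above can be approximated by brute-force numerical integration. The sketch below is an illustration only, not how Weibull++ evaluates these integrals; it reuses the five failure times from the likelihood ratio example as a stand-in data set, and `posterior_cl` is a hypothetical name. It computes &amp;lt;math&amp;gt;CL=P(\mu \le {{\mu }_{U}})\,\!&amp;lt;/math&amp;gt; under the non-informative priors:

```python
import math

TIMES = [12.0, 24.0, 28.0, 34.0, 46.0]

def lik(mu, sigma):
    """Complete-data normal likelihood."""
    return math.prod(
        math.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        for t in TIMES)

def posterior_cl(mu_u, mu_rng=(-41.2, 98.8), sig_rng=(0.5, 80.0), n=120):
    """CL = P(mu <= mu_u | Data) under the priors phi(sigma) = 1/sigma and
    flat phi(mu), via midpoint integration on a rectangular grid that is
    wide enough to capture essentially all of the posterior mass."""
    dm = (mu_rng[1] - mu_rng[0]) / n
    ds = (sig_rng[1] - sig_rng[0]) / n
    num = den = 0.0
    for i in range(n):
        mu = mu_rng[0] + (i + 0.5) * dm
        for j in range(n):
            s = sig_rng[0] + (j + 0.5) * ds
            w = lik(mu, s) / s            # likelihood times the 1/sigma prior
            den += w
            if mu <= mu_u:
                num += w
    return num / den

cl = posterior_cl(28.8)   # the posterior of mu is symmetric about the sample mean
```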
&lt;br /&gt;
&lt;br /&gt;
==Normal Distribution Examples==&lt;br /&gt;
{{:Normal Distribution Examples}}&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Appendix_A:_Generating_Random_Numbers_from_a_Distribution&amp;diff=64936</id>
		<title>Appendix A: Generating Random Numbers from a Distribution</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Appendix_A:_Generating_Random_Numbers_from_a_Distribution&amp;diff=64936"/>
		<updated>2017-02-08T21:41:32Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Conditional */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:bsbook SUB|Appendix A|Generating Random Numbers from a Distribution}}&lt;br /&gt;
Simulation involves generating random numbers that belong to a specific distribution. We will illustrate this methodology using the Weibull distribution. &lt;br /&gt;
&lt;br /&gt;
=Generating Random Times from a Weibull Distribution=&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;cdf&#039;&#039; of the 2-parameter Weibull distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(T)=1-{{e}^{-{{\left( \tfrac{T}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Weibull reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 R(T)= &amp;amp; 1-F(T) \\ &lt;br /&gt;
 = &amp;amp; {{e}^{-{{\left( \tfrac{T}{\eta } \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To generate a random time from a Weibull distribution with given &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, a uniform random number from 0 to 1, &amp;lt;math&amp;gt;{{U}_{R}}[0,1]\,\!&amp;lt;/math&amp;gt;, is first obtained. The random time from a Weibull distribution is then given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{T}_{R}}=\eta \cdot {{\left\{ -\ln \left[ {{U}_{R}}[0,1] \right] \right\}}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
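This inverse-transform step can be sketched in a few lines (`weibull_random_time` is a hypothetical function name):

```python
import math
import random

def weibull_random_time(eta, beta, rng=random):
    """Inverse-transform draw: with U ~ Uniform(0, 1),
    T = eta * (-ln U)**(1/beta) follows a Weibull(beta, eta)."""
    u = rng.random()
    return eta * (-math.log(u)) ** (1.0 / beta)

# With beta = 1, the Weibull reduces to an exponential with mean eta:
random.seed(42)
mean_t = sum(weibull_random_time(10.0, 1.0) for _ in range(20000)) / 20000
```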
&lt;br /&gt;
==Conditional==&lt;br /&gt;
The Weibull conditional reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t|T)=\frac{R(T+t)}{R(T)}=\frac{{{e}^{-{{\left( \tfrac{T+t}{\eta } \right)}^{\beta }}}}}{{{e}^{-{{\left( \tfrac{T}{\eta } \right)}^{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The random time would be the solution for &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;R(t|T)={{U}_{R}}[0,1]\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
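For the Weibull case, solving &amp;lt;math&amp;gt;R(t|T)={{U}_{R}}[0,1]\,\!&amp;lt;/math&amp;gt; for the random time has a closed form; a sketch with illustrative parameter values is:&lt;br /&gt;

```python
import math

def weibull_conditional_random_time(eta, beta, T, u):
    """Random additional time t for a unit that has survived to age T,
    obtained by solving R(t|T) = U in closed form for the Weibull case:
    t = eta * ((T/eta)^beta - ln U)^(1/beta) - T.
    The parameter values used below are illustrative."""
    return eta * ((T / eta) ** beta - math.log(u)) ** (1.0 / beta) - T

eta, beta, T, u = 100.0, 2.0, 50.0, 0.7
t = weibull_conditional_random_time(eta, beta, T, u)
# Round trip: R(t|T) evaluated at this t recovers u
R_cond = math.exp(-(((T + t) / eta) ** beta) + (T / eta) ** beta)
```
&lt;br /&gt;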
=BlockSim&#039;s Random Number Generator (RNG)=&lt;br /&gt;
&lt;br /&gt;
Internally, ReliaSoft&#039;s BlockSim uses an algorithm based on L&#039;Ecuyer&#039;s [RefX] random number generator with a post Bays-Durham shuffle. The RNG&#039;s period is approximately 10^18. The RNG passes all currently known statistical tests, within the limits of the machine&#039;s precision and for a number of calls (simulation runs) less than the period. If no seed is provided, the algorithm uses the machine&#039;s clock to initialize the RNG.&lt;br /&gt;
&lt;br /&gt;
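BlockSim&#039;s exact implementation is not published here; as a rough illustration of the general technique, a combined L&#039;Ecuyer-style generator with a Bays-Durham shuffle (in the spirit of ran2 from Numerical Recipes) can be sketched as follows. The constants are the published ran2 moduli and multipliers, but everything else is a simplified assumption:&lt;br /&gt;

```python
class CombinedLCG:
    """Sketch of a combined L'Ecuyer generator with a Bays-Durham
    shuffle.  NOT BlockSim's actual code; a simplified illustration."""
    M1, A1 = 2147483563, 40014   # first linear congruential generator
    M2, A2 = 2147483399, 40692   # second, with a different modulus
    NTAB = 32                    # shuffle-table size

    def __init__(self, seed=1):
        self.s1 = max(1, seed % self.M1)
        self.s2 = self.s1
        self.table = []
        for _ in range(8):                       # warm up the first LCG
            self.s1 = (self.A1 * self.s1) % self.M1
        for _ in range(self.NTAB):               # fill the shuffle table
            self.s1 = (self.A1 * self.s1) % self.M1
            self.table.append(self.s1)
        self.y = self.table[0]

    def random(self):
        """Return a uniform float in (0, 1)."""
        self.s1 = (self.A1 * self.s1) % self.M1
        self.s2 = (self.A2 * self.s2) % self.M2
        j = self.y % self.NTAB                   # slot chosen by last output
        self.y = (self.table[j] - self.s2) % (self.M1 - 1) + 1  # combine
        self.table[j] = self.s1                  # refill slot (the shuffle)
        return self.y / self.M1

rng = CombinedLCG(seed=42)
u = rng.random()   # uniform in (0, 1)
```
&lt;br /&gt;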
=References=&lt;br /&gt;
#L&#039;Ecuyer, P., 1988, Communications of the ACM, vol. 31, pp. 724-774&lt;br /&gt;
#L&#039;Ecuyer, P., 2001, Proceedings of the 2001 Winter Simulation Conference, pp. 95-105&lt;br /&gt;
#Press, William H., Teukolsky, Saul A., Vetterling, William T., Flannery, Brian P., Numerical Recipes in C: The Art of Scientific Computing, Second Edition, Cambridge University Press, 1992.&lt;br /&gt;
#Peters, Edgar E., Fractal Market Analysis: Applying Chaos Theory to Investment &amp;amp; Economics, John Wiley &amp;amp; Sons, 1994.&lt;br /&gt;
#Knuth, Donald E., The Art of Computer Programming: Volume 2 - Seminumerical Algorithms, Third Edition, Addison-Wesley, 1998.&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Preventive_Maintenance&amp;diff=64935</id>
		<title>Preventive Maintenance</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Preventive_Maintenance&amp;diff=64935"/>
		<updated>2017-02-08T21:17:37Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* The Fallacy of &amp;quot;Constant Failure Rate&amp;quot; and &amp;quot;Preventive Replacement&amp;quot; */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner BlockSim Articles}}{{Navigation box}}&lt;br /&gt;
&#039;&#039;This article, which discusses preventive maintenance in BlockSim, also appears in the [[Introduction_to_Repairable_Systems#Preventive_Maintenance_2|System Analysis Reference]] book.&#039;&#039;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
Preventive maintenance (PM) is a schedule of planned maintenance actions aimed at the prevention of breakdowns and failures.  The primary goal of preventive maintenance is to prevent the failure of equipment before it actually occurs.  It is designed to preserve and enhance equipment reliability by replacing worn components before they actually fail.  Preventive maintenance activities include equipment checks, partial or complete overhauls at specified periods, oil changes, lubrication and so on.  In addition, workers can record equipment deterioration so they know to replace or repair worn parts before they cause system failure.  Recent technological advances in tools for inspection and diagnosis have enabled even more accurate and effective equipment maintenance.  The ideal preventive maintenance program would prevent all equipment failure before it occurs.&lt;br /&gt;
&lt;br /&gt;
===Value of Preventive Maintenance===&lt;br /&gt;
&lt;br /&gt;
There are multiple misconceptions about preventive maintenance.  One such misconception is that PM is unduly costly.  This logic dictates that it would cost more for regularly scheduled downtime and maintenance than it would normally cost to operate equipment until repair is absolutely necessary.  This may be true for some components; however, one should compare not only the costs but the long-term benefits and savings associated with preventive maintenance.  Without preventive maintenance, for example, costs for lost production time from unscheduled equipment breakdown will be incurred.  Also, preventive maintenance will result in savings due to an increase of effective system service life.&lt;br /&gt;
&lt;br /&gt;
Long-term benefits of preventive maintenance include:&lt;br /&gt;
&lt;br /&gt;
:•	Improved system reliability.&amp;lt;br&amp;gt;&lt;br /&gt;
:•	Decreased cost of replacement.&amp;lt;br&amp;gt;&lt;br /&gt;
:•	Decreased system downtime.&amp;lt;br&amp;gt;&lt;br /&gt;
:•	Better spares inventory management.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Long-term effects and cost comparisons usually favor preventive maintenance over performing maintenance actions only when the system fails.&lt;br /&gt;
&lt;br /&gt;
====When Does Preventive Maintenance Make Sense?====&lt;br /&gt;
&lt;br /&gt;
Preventive maintenance is a logical choice if, and only if, the following two conditions are met:&lt;br /&gt;
&lt;br /&gt;
*Condition #1: The component in question has an increasing failure rate.  In other words, the failure rate of the component increases with time, implying wear-out.  Preventive maintenance of a component that is assumed to have an exponential distribution (which implies a constant failure rate) does not make sense!&amp;lt;br&amp;gt;&lt;br /&gt;
*Condition #2: The overall cost of the preventive maintenance action must be less than the overall cost of a corrective action. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If both of these conditions are met, then preventive maintenance makes sense.  Additionally, based on the costs ratios, an optimum time for such action can be easily computed for a single component.  This is detailed in later sections.&lt;br /&gt;
&lt;br /&gt;
====The Fallacy of &amp;quot;Constant Failure Rate&amp;quot; and &amp;quot;Preventive Replacement&amp;quot;====&lt;br /&gt;
&lt;br /&gt;
Even though we alluded to the fact in the last section, it is important to make it explicitly clear that if a component has a constant failure rate (i.e., defined by an exponential distribution), then preventive maintenance of the component will have no effect on the component&#039;s failure occurrences.  To illustrate this, consider a component with an &amp;lt;math&amp;gt;MTTF\,\!&amp;lt;/math&amp;gt; = &amp;lt;math&amp;gt;100\,\!&amp;lt;/math&amp;gt; hours, or &amp;lt;math&amp;gt;\lambda =0.01\,\!&amp;lt;/math&amp;gt;, and with preventive replacement every 50 hours.  The reliability vs. time graph for this case is illustrated in the following figure, where the component is replaced every 50 hours, thereby resetting the component&#039;s reliability to one.  At first glance, it may seem that the preventive maintenance action is actually maintaining the component at a higher reliability.  &lt;br /&gt;
&lt;br /&gt;
[[Image:7i2.png|center|500px|Reliability vs. time for a single component with an &amp;lt;math&amp;gt;MTTF =100\,\!&amp;lt;/math&amp;gt; hours, or &amp;lt;math&amp;gt;\lambda =0.01\,\!&amp;lt;/math&amp;gt;, and with preventive replacement every 50 hours.|link=]]&lt;br /&gt;
&lt;br /&gt;
However, consider the following cases for a single component: &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Case 1&#039;&#039;&#039;: The component&#039;s reliability from 0 to 60 hours:&lt;br /&gt;
&lt;br /&gt;
*With preventive maintenance, the component was replaced with a new one at 50 hours so the overall reliability is based on the reliability of the new component for 10 hours, &amp;lt;math&amp;gt;R(t=10)=90.48%\,\!&amp;lt;/math&amp;gt;, times the reliability of the previous component, &amp;lt;math&amp;gt;R(t=50)=60.65%\,\!&amp;lt;/math&amp;gt;. The result is &amp;lt;math&amp;gt;R(t=60)=54.88%.\,\!&amp;lt;/math&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
*Without preventive maintenance, the reliability would be the reliability of the same component operating to 60 hours, or &amp;lt;math&amp;gt;R(t=60)=54.88%\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Case 2&#039;&#039;&#039;: The component&#039;s reliability from 50 to 60 hours:&lt;br /&gt;
*With preventive maintenance, the component was replaced at 50 hours, so this is solely based on the reliability of the new component for a mission of 10 hours, or &amp;lt;math&amp;gt;R(t=10)=90.48%\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
*Without preventive maintenance, the reliability would be the conditional reliability of the same component operating to 60 hours, having already survived to 50 hours, or &amp;lt;math&amp;gt;{{R}_{C}}(t=10|T=50)=R(60)/R(50)=90.48%\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As can be seen, both cases — with and without preventive maintenance — yield the same results.&lt;br /&gt;
&lt;br /&gt;
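The two cases can be verified numerically; a short sketch using the values above:&lt;br /&gt;

```python
import math

# Exponential component with MTTF = 100 hours (lambda = 0.01), preventively
# replaced every 50 hours, reproducing Cases 1 and 2 above.
lam = 0.01
R = lambda t: math.exp(-lam * t)

# Case 1: reliability from 0 to 60 hours
with_pm_1 = R(50) * R(10)      # old unit to 50 h, then a new unit for 10 h
without_pm_1 = R(60)           # one unit operating to 60 h
# both are 54.88%

# Case 2: reliability from 50 to 60 hours
with_pm_2 = R(10)              # new unit, 10-hour mission
without_pm_2 = R(60) / R(50)   # conditional reliability of the old unit
# both are 90.48%
```
&lt;br /&gt;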
===Determining Preventive Replacement Time===&lt;br /&gt;
&lt;br /&gt;
As mentioned earlier, if the component has an increasing failure rate, then a carefully designed preventive maintenance program is beneficial to system availability.  Otherwise, the costs of preventive maintenance might actually outweigh the benefits.  The objective of a good preventive maintenance program is to either minimize the overall costs (or downtime, etc.) or meet a reliability objective.  In order to achieve this, an appropriate interval (time) for scheduled maintenance must be determined.  One way to do that is to use the optimum age replacement model, as presented next.  The model adheres to the conditions discussed previously:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
:•	The component is exhibiting behavior associated with a wear-out mode.  That is, the failure rate of the component is increasing with time.&amp;lt;br&amp;gt;&lt;br /&gt;
:•	The cost for planned replacements is significantly less than the cost for unplanned replacements.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following figure shows the Cost Per Unit Time vs. Time plot and it can be seen that the corrective replacement costs increase as the replacement interval increases.  In other words, the less often you perform a PM action, the higher your corrective costs will be.  Obviously, as we let a component operate for longer times, its failure rate increases to a point that it is more likely to fail, thus requiring more corrective actions.  The opposite is true for the preventive replacement costs.  The longer you wait to perform a PM, the less the costs; if you do PM too often, the costs increase.  If we combine both costs, we can see that there is an optimum point that minimizes the costs.  In other words, one must strike a balance between the risk (costs) associated with a failure while maximizing the time between PM actions.  &lt;br /&gt;
&lt;br /&gt;
[[Image:costpertime.png|center|500px|Cost curves for preventive and corrective replacement.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Optimum Age Replacement Policy===&lt;br /&gt;
&lt;br /&gt;
To determine the optimum time for such a preventive maintenance action (replacement), we need to mathematically formulate a model that describes the associated costs and risks.  In developing the model, it is assumed that if the unit fails before time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, a corrective action will occur and if it does not fail by time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, a preventive action will occur.  In other words, the unit is replaced upon failure or after a time of operation, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, whichever occurs first.  &lt;br /&gt;
Thus, the optimum replacement time can be found by minimizing the cost per unit time, &amp;lt;math&amp;gt;CPUT\left( t \right).\,\!&amp;lt;/math&amp;gt;  &amp;lt;math&amp;gt;CPUT\left( t \right)\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
CPUT\left( t \right)= &amp;amp; \frac{\text{Total Expected Replacement Cost per Cycle}}{\text{Expected Cycle Length}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{C}_{P}}\cdot R\left( t \right)+{{C}_{U}}\cdot \left[ 1-R\left( t \right) \right]}{\int_{0}^{t}R\left( s \right)ds}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
:*	 &amp;lt;math&amp;gt;R(t)\,\!&amp;lt;/math&amp;gt; = reliability at time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
:*	 &amp;lt;math&amp;gt;{{C}_{P}}\,\!&amp;lt;/math&amp;gt; = cost of planned replacement.&amp;lt;br&amp;gt;&lt;br /&gt;
:*	 &amp;lt;math&amp;gt;{{C}_{U}}\,\!&amp;lt;/math&amp;gt; = cost of unplanned replacement.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The optimum replacement time interval, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, is the time that minimizes &amp;lt;math&amp;gt;CPUT\left( t \right).\,\!&amp;lt;/math&amp;gt;  This can be found by solving for &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; such that: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial \left[ CPUT(t) \right]}{\partial t}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or by solving for a &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; that satisfies the following equation:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{\partial \left[ \tfrac{{{C}_{P}}\cdot R\left( t \right)+{{C}_{U}}\cdot \left[ 1-R\left( t \right) \right]}{\int_{0}^{t}R\left( s \right)ds} \right]}{\partial t}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Interested readers can refer to Barlow and Hunter [[Appendix_B:_References |[2]]] for more details on this model.&lt;br /&gt;
&lt;br /&gt;
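As an illustration (not BlockSim&#039;s internal algorithm), the CPUT minimization can be sketched numerically for an assumed Weibull wear-out component:&lt;br /&gt;

```python
import math

def cput(t, beta, eta, cp, cu, steps=2000):
    """Cost per unit time for age replacement at time t:
    CPUT(t) = (Cp*R(t) + Cu*(1 - R(t))) / integral_0^t R(s) ds,
    with a Weibull reliability function.  A numerical sketch only;
    beta, eta, cp and cu below are assumed values."""
    R = lambda s: math.exp(-((s / eta) ** beta))
    h = t / steps
    # Trapezoidal approximation of the expected cycle length
    cycle = h * (0.5 * R(0.0) + sum(R(i * h) for i in range(1, steps)) + 0.5 * R(t))
    return (cp * R(t) + cu * (1.0 - R(t))) / cycle

# Wear-out component (beta above 1) where a planned replacement is much
# cheaper than an unplanned one
beta, eta, cp, cu = 2.5, 1000.0, 100.0, 1000.0
grid = [50.0 * k for k in range(1, 40)]          # candidate replacement times
t_opt = min(grid, key=lambda t: cput(t, beta, eta, cp, cu))
```
&lt;br /&gt;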
In BlockSim (Version 8 and above), you can use the Optimum Replacement window to determine the optimum replacement time either for an individual block or for multiple blocks in a diagram simultaneously. When working with multiple blocks, the calculations can be for individual blocks or for one or more groups of blocks. For each item that is included in the optimization calculations, you will need to specify the cost for a planned replacement and the cost for an unplanned replacement. This is done by calculating the costs for replacement based on the item settings using equations or simulation and then, if desired, manually entering any additional costs for either type of replacement in the corresponding columns of the table.&lt;br /&gt;
&lt;br /&gt;
The equations used to calculate the costs of planned and unplanned tasks for each item based on its associated URD are as follows:&lt;br /&gt;
&lt;br /&gt;
*For the cost of planned tasks, here denoted as PM cost:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{PM Cost}= &amp;amp; \left(\text{PM Down Time Rate}+ \text{Block Level Down Time Rate} \right) \cdot \left( \text{MTTPM}+\text{Pool Delay} +\text{Crew Delay} \right) \\&lt;br /&gt;
 &amp;amp; + \text{Crew Labor Rate} \cdot \text{MTTPM} + \text{Cost per PM} + \text{Cost per Pool} +\text{Cost per Crew}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Only PM tasks based on item age or system age (fixed or dynamic intervals) are considered. If there is more than one PM task based on item age, only the first one is considered. &lt;br /&gt;
*For the cost of the unplanned task, here denoted as CM cost:&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\text{CM Cost}= &amp;amp; \left(\text{CM Down Time Rate}+ \text{Block Level Down Time Rate} \right) \cdot \left( \text{MTTR}+\text{Pool Delay} +\text{Crew Delay} \right) \\     &lt;br /&gt;
 &amp;amp;  + \text{Crew Labor Rate} \cdot \text{MTTR} + \text{Cost per CM} + \text{Cost per Pool} +\text{Cost per Crew} +\text{Block Level Cost per Failure}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using simulation, for costs associated with planned replacements, all preventive tasks based on item age or system age (fixed or dynamic intervals) are considered. Because each item is simulated as a system (i.e., in isolation from any other item), tasks triggered in other ways are not considered.&lt;br /&gt;
&lt;br /&gt;
===Example: Optimum Replacement Time===&lt;br /&gt;
{{:Optimum_Replacement_Time_Example}}&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Temperature-NonThermal_Relationship&amp;diff=64934</id>
		<title>Temperature-NonThermal Relationship</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Temperature-NonThermal_Relationship&amp;diff=64934"/>
		<updated>2017-02-08T21:15:02Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Conditional Reliability Function */ changed R(T,t,U,V) to R((t|T),U,V)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK|8}}&lt;br /&gt;
When temperature and a second non-thermal stress (e.g., voltage) are the accelerated stresses of a test, then the Arrhenius and the inverse power law relationships can be combined to yield the Temperature-NonThermal (T-NT) relationship. This relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(U,V)=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; is the non-thermal stress (i.e., voltage, vibration, etc.)&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; is the temperature (&#039;&#039;&#039;in absolute units&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are the parameters to be determined.&lt;br /&gt;
&lt;br /&gt;
The T-NT relationship can be linearized and plotted on a Life vs. Stress plot. The relationship is linearized by taking the natural logarithm of both sides in the T-NT relationship or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln (L(U,V))=\ln (C)-n\ln (U)+\frac{B}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since life is now a function of two stresses, a Life vs. Stress plot can only be obtained by keeping one of the two stresses constant and varying the other one. Doing so will yield the straight line described by the above equation, where the term for the stress which is kept at a fixed value becomes another constant (in addition to the &amp;lt;math&amp;gt;\ln (C)\,\!&amp;lt;/math&amp;gt; constant).&lt;br /&gt;
When the non-thermal stress is kept constant, then the linearized T-NT relationship becomes: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln (L(V))=const.+\frac{B}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the Arrhenius equation and it is plotted on a log-reciprocal scale.&lt;br /&gt;
When the thermal stress is kept constant, then the linearized T-NT relationship becomes: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\ln (L(U))=const.-n\ln (U)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the inverse power law equation and it is plotted on a log-log scale.&lt;br /&gt;
In the next two figures, data obtained from a temperature and voltage test were analyzed and plotted. In the first figure, life is plotted versus temperature on a log-reciprocal scale, with voltage held at a fixed value. In the second figure, life is plotted versus voltage on a log-log scale, with temperature held at a fixed value.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA10.1.png|400px|center|Life vs. Temperature (Arrhenius plot) at a fixed voltage level.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA10.2.png|400px|center|Life vs. Voltage plot at a fixed temperature level.]]&lt;br /&gt;
&lt;br /&gt;
===A look at the Parameters &#039;&#039;B&#039;&#039; and &#039;&#039;n&#039;&#039;===&lt;br /&gt;
Depending on which stress type is kept constant, it can be seen from the linearized T-NT relationship that either the parameter &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; or the parameter &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the slope of the resulting line. If, for example, the non-thermal stress is kept constant then &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the slope of the life line in a Life vs. Temperature plot. The steeper the slope, the greater the dependency of the product&#039;s life to the temperature. In other words, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is a measure of the effect that temperature has on the life and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is a measure of the effect that the non-thermal stress has on the life. The larger the value of &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; the higher the dependency of the life on the temperature. Similarly, the larger the value of &amp;lt;math&amp;gt;n,\,\!&amp;lt;/math&amp;gt; the higher the dependency of the life on the non-thermal stress.&lt;br /&gt;
&lt;br /&gt;
===Acceleration Factor===&lt;br /&gt;
The acceleration factor for the T-NT relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{A}_{F}}=\frac{{{L}_{USE}}}{{{L}_{Accelerated}}}=\frac{\tfrac{C}{U_{u}^{n}}{{e}^{\tfrac{B}{{{V}_{u}}}}}}{\tfrac{C}{U_{A}^{n}}{{e}^{\tfrac{B}{{{V}_{A}}}}}}={{\left( \frac{{{U}_{A}}}{{{U}_{u}}} \right)}^{n}}{{e}^{B\left( \tfrac{1}{{{V}_{u}}}-\tfrac{1}{{{V}_{A}}} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{L}_{USE}}\,\!&amp;lt;/math&amp;gt; is the life at use stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{L}_{Accelerated}}\,\!&amp;lt;/math&amp;gt; is the life at the accelerated stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{u}}\,\!&amp;lt;/math&amp;gt; is the use temperature level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{A}}\,\!&amp;lt;/math&amp;gt; is the accelerated temperature level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{A}}\,\!&amp;lt;/math&amp;gt; is the accelerated non-thermal level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{u}}\,\!&amp;lt;/math&amp;gt; is the use non-thermal level.&lt;br /&gt;
&lt;br /&gt;
The acceleration factor is plotted versus stress in the same manner used to create the Life vs. Stress plots. That is, one stress type is kept constant and the other is varied.&lt;br /&gt;
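As a numerical illustration of the acceleration factor equation (all parameter values below are assumed, not fitted):&lt;br /&gt;

```python
import math

def tnt_acceleration_factor(B, n, U_use, V_use, U_acc, V_acc):
    """T-NT acceleration factor:
    A_F = (U_acc / U_use)^n * exp(B * (1/V_use - 1/V_acc)).
    B and n would normally come from a fitted model; every value
    below is an illustrative assumption."""
    return (U_acc / U_use) ** n * math.exp(B * (1.0 / V_use - 1.0 / V_acc))

# Use: 12 V at 328 K; accelerated: 24 V at 378 K (temperatures absolute)
AF = tnt_acceleration_factor(B=4000.0, n=1.5, U_use=12.0, V_use=328.0,
                             U_acc=24.0, V_acc=378.0)
```
&lt;br /&gt;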
&lt;br /&gt;
[[Image:ALTA10.3.png|center|450px|Acceleration Factor vs. Temperature at a fixed voltage level.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA10.4.png|center|450px|Acceleration Factor vs. Voltage at a fixed temperature level.]]&lt;br /&gt;
&lt;br /&gt;
=T-NT Exponential=&lt;br /&gt;
By setting &amp;lt;math&amp;gt;m=L(U,V)\,\!&amp;lt;/math&amp;gt;, the exponential &#039;&#039;pdf&#039;&#039; becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,U,V)=\frac{{{U}^{n}}}{C}{{e}^{-\tfrac{B}{V}}}\cdot {{e}^{-\tfrac{{{U}^{n}}}{C}\left( {{e}^{-\tfrac{B}{V}}} \right)t}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==T-NT Exponential Statistical Properties Summary==&lt;br /&gt;
===Mean or MTTF===&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T},\,\!&amp;lt;/math&amp;gt; or Mean Time To Failure (MTTF) for the T-NT exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=\int\limits_{0}^{\infty }t\cdot f(t,U,V)dt=\int\limits_{0}^{\infty }t\cdot \frac{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}{{e}^{-\tfrac{t\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}}}dt=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Median===&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; for the T-NT exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=\frac{1}{\lambda }0.693=0.693\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode===&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; for the T-NT exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Standard Deviation===&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, for the T-NT exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=\frac{1}{\lambda }=m=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===T-NT Exponential Reliability Function===&lt;br /&gt;
The T-NT exponential reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,U,V)={{e}^{-\tfrac{T\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function is the complement of the T-NT exponential cumulative distribution function or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,U,V)=1-Q(T,U,V)=1-\int_{0}^{T}f(T)dT\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and,&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,U,V)=1-\int_{0}^{T}\frac{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}{{e}^{-\tfrac{T\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}}}dT={{e}^{-\tfrac{T\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conditional Reliability===&lt;br /&gt;
The conditional reliability function for the T-NT exponential model is given by,&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),U,V)=\frac{R(T+t,U,V)}{R(T,U,V)}=\frac{{{e}^{-\lambda (T+t)}}}{{{e}^{-\lambda T}}}={{e}^{-\tfrac{t\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
&lt;br /&gt;
For the T-NT exponential model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({{t}_{R}},U,V)={{e}^{-\tfrac{{{t}_{R}}\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln [R({{t}_{R}},U,V)]=-\tfrac{{{t}_{R}}\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=-\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}\ln [R({{t}_{R}},U,V)]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
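A numerical sketch of the reliable life calculation, with illustrative (not fitted) parameter values:&lt;br /&gt;

```python
import math

def tnt_exp_reliable_life(R_goal, B, C, n, U, V):
    """Mission duration t_R meeting reliability goal R_goal for the
    T-NT exponential model: t_R = -(C / (U^n * e^(-B/V))) * ln(R_goal).
    B, C, n and the stress levels below are illustrative assumptions."""
    m = C / (U ** n * math.exp(-B / V))  # mean life L(U, V) at this stress
    return -m * math.log(R_goal)

B, C, n = 4000.0, 100.0, 1.5
U, V = 12.0, 328.0              # use-level non-thermal stress; temperature in K
t90 = tnt_exp_reliable_life(0.90, B, C, n, U, V)
# Round trip: plugging t90 back into R(T, U, V) should recover 0.90
R_back = math.exp(-(t90 * U ** n * math.exp(-B / V)) / C)
```
&lt;br /&gt;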
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
Substituting the T-NT relationship into the exponential log-likelihood equation yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln (L)=\Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{U_{i}^{n}}{C}{{e}^{-\tfrac{B}{{{V}_{i}}}}}\cdot {{e}^{-\tfrac{U_{i}^{n}}{C}\left( {{e}^{-\tfrac{B}{{{V}_{i}}}}} \right){{T}_{i}}}} \right]-\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{U_{i}^{n}}{C}\left( {{e}^{-\tfrac{B}{{{V}_{i}}}}} \right)T_{i}^{\prime }+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-\tfrac{T_{Li}^{\prime \prime }}{C}U_{i}^{\prime \prime n}{{e}^{-\tfrac{B}{{{V}_{i}}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-\tfrac{T_{Ri}^{\prime \prime }}{C}U_{i}^{\prime \prime n}{{e}^{-\tfrac{B}{{{V}_{i}}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the T-NT parameter (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is the second T-NT parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the third T-NT parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the temperature level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{i}}\,\!&amp;lt;/math&amp;gt; is the non-thermal stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial C}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=T-NT Weibull=&lt;br /&gt;
By setting &amp;lt;math&amp;gt;\eta =L(U,V)\,\!&amp;lt;/math&amp;gt;, the T-NT Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,U,V)=\frac{\beta {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}{{\left( \frac{t\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{t\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==T-NT Weibull Statistical Properties Summary==&lt;br /&gt;
===Mean or MTTF===&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T}\,\!&amp;lt;/math&amp;gt;, for the T-NT Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}\cdot \Gamma \left( \frac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Gamma \left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt; is the gamma function evaluated at the value of &amp;lt;math&amp;gt;\left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
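For illustration, the MTTF formula above can be evaluated numerically. This is a minimal sketch (the function name and the sample parameter values are ours, not part of the original text):

```python
import math

def tnt_weibull_mttf(beta, B, C, n, U, V):
    """MTTF of the T-NT Weibull model: eta * Gamma(1/beta + 1),
    where eta = C / (U**n * exp(-B/V)) is the characteristic life."""
    eta = C / (U**n * math.exp(-B / V))
    return eta * math.gamma(1.0 / beta + 1.0)
```

For beta = 1 the gamma factor is Gamma(2) = 1 and the MTTF reduces to the characteristic life itself.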
===Median===&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; for the T-NT Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{{\left( \ln 2 \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode===&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; for the T-NT Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{{\left( 1-\frac{1}{\beta } \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Standard Deviation===&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}},\,\!&amp;lt;/math&amp;gt; for the T-NT Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}\cdot \sqrt{\Gamma \left( \frac{2}{\beta }+1 \right)-{{\left( \Gamma \left( \frac{1}{\beta }+1 \right) \right)}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
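The median, mode and standard deviation formulas above can be collected into one helper. A minimal sketch (names and the beta &gt; 1 guard for the mode are our assumptions; for beta ≤ 1 the Weibull mode is at zero):

```python
import math

def tnt_weibull_summary(beta, B, C, n, U, V):
    """Median, mode and standard deviation of the T-NT Weibull model."""
    eta = C / (U**n * math.exp(-B / V))          # characteristic life
    median = eta * math.log(2.0)**(1.0 / beta)
    # Mode formula applies for beta > 1; otherwise the mode is at 0.
    mode = eta * (1.0 - 1.0 / beta)**(1.0 / beta) if beta > 1.0 else 0.0
    g1 = math.gamma(1.0 / beta + 1.0)
    g2 = math.gamma(2.0 / beta + 1.0)
    sd = eta * math.sqrt(g2 - g1**2)
    return median, mode, sd
```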
===T-NT Weibull Reliability Function===&lt;br /&gt;
The T-NT Weibull reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,U,V)={{e}^{-{{\left( \tfrac{T{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
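The reliability function above is straightforward to evaluate; a minimal sketch (function name is ours):

```python
import math

def tnt_weibull_reliability(T, beta, B, C, n, U, V):
    """R(T, U, V) for the T-NT Weibull model."""
    a = U**n * math.exp(-B / V) / C   # reciprocal of the characteristic life
    return math.exp(-(a * T)**beta)
```

At T equal to the characteristic life, the reliability is exp(-1) regardless of beta.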
===Conditional Reliability Function===&lt;br /&gt;
The T-NT Weibull conditional reliability function at a specified stress level is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),U,V)=\frac{R(T+t,U,V)}{R(T,U,V)}=\frac{{{e}^{-{{\left( \tfrac{\left( T+t \right){{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta }}}}}{{{e}^{-{{\left( \tfrac{T{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),U,V)={{e}^{-\left[ {{\left( \tfrac{\left( T+t \right){{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta }}-{{\left( \tfrac{T{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta }} \right]}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
For the T-NT Weibull model, the reliable life, &amp;lt;math&amp;gt;{{T}_{R}}\,\!&amp;lt;/math&amp;gt;, of a unit for a specified reliability and starting the mission at age zero is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{T}_{R}}=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{{\left\{ -\ln \left[ R\left( {{T}_{R}},U,V \right) \right] \right\}}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
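The reliable-life equation above inverts the reliability function directly; a minimal sketch (names ours):

```python
import math

def tnt_weibull_reliable_life(R, beta, B, C, n, U, V):
    """Mission time T_R at which reliability equals R, starting at age zero."""
    eta = C / (U**n * math.exp(-B / V))
    return eta * (-math.log(R))**(1.0 / beta)
```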
===T-NT Weibull Failure Rate Function===&lt;br /&gt;
The T-NT Weibull failure rate function, &amp;lt;math&amp;gt;\lambda (T)\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda \left( T,U,V \right)=\frac{f\left( T,U,V \right)}{R\left( T,U,V \right)}=\frac{\beta {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}{{\left( \frac{T{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
Substituting the T-NT relationship into the Weibull log-likelihood function yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{\beta U_{i}^{n}{{e}^{-\tfrac{B}{{{V}_{i}}}}}}{C}{{\left( \frac{U_{i}^{n}{{e}^{-\tfrac{B}{{{V}_{i}}}}}}{C}{{T}_{i}} \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{U_{i}^{n}{{e}^{-\tfrac{B}{{{V}_{i}}}}}}{C}{{T}_{i}} \right)}^{\beta }}}} \right] -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( \frac{U_{i}^{n}{{e}^{-\tfrac{B}{{{V}_{i}}}}}}{C}T_{i}^{\prime } \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-{{\left( \tfrac{T_{Li}^{\prime \prime }}{C}U_{i}^{\prime \prime n}{{e}^{-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-{{\left( \tfrac{T_{Ri}^{\prime \prime }}{C}U_{i}^{\prime \prime n}{{e}^{-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter (unknown, the first of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the first T-NT parameter (unknown, the second of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is the second T-NT parameter (unknown, the third of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the third T-NT parameter (unknown, the fourth of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the temperature level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{i}}\,\!&amp;lt;/math&amp;gt; is the non-thermal stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;C,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial C}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \beta }=0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
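As a sketch of how the log-likelihood above is evaluated before being handed to a numerical optimizer, the code below implements only the first sum (exact failure times); the suspension and interval sums would be added in the same way. Function names, the data layout and the assumption of exact-failure data only are ours:

```python
import math

def tnt_weibull_logpdf(t, beta, B, C, n, U, V):
    """Log of the T-NT Weibull pdf; a = U^n * exp(-B/V) / C = 1/eta."""
    a = U**n * math.exp(-B / V) / C
    return math.log(beta * a) + (beta - 1.0) * math.log(a * t) - (a * t)**beta

def log_likelihood(params, data):
    """Exact-failure portion of Lambda.
    params = (beta, B, C, n); data = iterable of (t, U, V, N) groups."""
    beta, B, C, n = params
    return sum(N * tnt_weibull_logpdf(t, beta, B, C, n, U, V)
               for t, U, V, N in data)
```

Maximizing this function over (beta, B, C, n) with any general-purpose optimizer yields the MLE solution described above.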
=T-NT Lognormal=&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Temperature-Nonthermal_Relationship_Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the lognormal distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\overline{{{T}&#039;}}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{T}&#039;=\ln (T)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T=\,\!&amp;lt;/math&amp;gt; times-to-failure.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\overline{{{T}&#039;}}=\,\!&amp;lt;/math&amp;gt; mean of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\,\!&amp;lt;/math&amp;gt; standard deviation of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
The median of the lognormal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}={{e}^{{{\overline{T}}^{\prime }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The T-NT lognormal model &#039;&#039;pdf&#039;&#039; can be obtained by setting &amp;lt;math&amp;gt;\breve{T}=L(U,V)\,\!&amp;lt;/math&amp;gt;. Therefore: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=L(U,V)=\frac{C}{{{U}^{n}}}{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{e}^{{{\overline{T}}^{\prime }}}}=\frac{C}{{{U}^{n}}}{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\overline{T}}^{\prime }}=\ln (C)-n\ln (U)+\frac{B}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting the above equation into the lognormal &#039;&#039;pdf&#039;&#039; yields the T-NT lognormal model &#039;&#039;pdf&#039;&#039; or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T,U,V)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (C)+n\ln (U)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
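Using the substitution derived above, the T-NT lognormal pdf can be evaluated directly; a minimal sketch (function name ours):

```python
import math

def tnt_lognormal_pdf(t, sigma, B, C, n, U, V):
    """T-NT lognormal pdf, with log-mean ln(C) - n*ln(U) + B/V."""
    mu = math.log(C) - n * math.log(U) + B / V
    z = (math.log(t) - mu) / sigma
    return math.exp(-0.5 * z * z) / (t * sigma * math.sqrt(2.0 * math.pi))
```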
==T-NT Lognormal Statistical Properties Summary==&lt;br /&gt;
===The Mean===&lt;br /&gt;
The mean life of the T-NT lognormal model (mean of the times-to-failure), &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \bar{T}= &amp;amp; {{e}^{\bar{{T}&#039;}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}} = &amp;amp; {{e}^{\ln (C)-n\ln (U)+\tfrac{B}{V}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The mean of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\bar{T}}^{^{\prime }}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{T}}^{\prime }}=\ln \left( {\bar{T}} \right)-\frac{1}{2}\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===The Standard Deviation===&lt;br /&gt;
The standard deviation of the T-NT lognormal model (standard deviation of the times-to-failure), &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\sigma }_{T}}= &amp;amp; \sqrt{\left( {{e}^{2\bar{{T}&#039;}+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)} = &amp;amp; \sqrt{\left( {{e}^{2\left( \ln (C)-n\ln (U)+\tfrac{B}{V} \right)+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The standard deviation of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\sqrt{\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
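The two conversion formulas above (log-mean and log-standard-deviation in terms of the mean and standard deviation of the times-to-failure) can be sketched as one function (name ours):

```python
import math

def log_moments_from_linear(mean_T, sd_T):
    """Return (mean, sd) of ln(T) given the mean and sd of T."""
    var_ratio = (sd_T / mean_T)**2 + 1.0
    sigma_log = math.sqrt(math.log(var_ratio))
    mean_log = math.log(mean_T) - 0.5 * math.log(var_ratio)
    return mean_log, sigma_log
```

As a check, a lognormal with log-mean 0 and log-sd 1 has mean exp(0.5) and variance (e - 1)e; the function recovers (0, 1) from those values.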
===The Mode===&lt;br /&gt;
The mode of the T-NT lognormal model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \tilde{T}= &amp;amp; {{e}^{{{\overline{T}}^{\prime }}-\sigma _{{{T}&#039;}}^{2}}} = &amp;amp; {{e}^{\ln (C)-n\ln (U)+\tfrac{B}{V}-\sigma _{{{T}&#039;}}^{2}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===T-NT Lognormal Reliability===&lt;br /&gt;
For the T-NT lognormal model, the reliability for a mission of time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, starting at age 0, is determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,U,V)=\int_{T}^{\infty }f(t,U,V)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,U,V)=\int_{{{T}^{^{\prime }}}}^{\infty }\frac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (C)+n\ln (U)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
For the T-NT lognormal model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is estimated by first solving the reliability equation with respect to time, as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;T_{R}^{\prime }=\ln (C)-n\ln (U)+\frac{B}{V}+z\cdot {{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z={{\Phi }^{-1}}\left[ F\left( T_{R}^{\prime },U,V \right) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;,U,V)}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;{T}&#039;=\ln (T)\,\!&amp;lt;/math&amp;gt; the reliable life, &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}={{e}^{T_{R}^{\prime }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
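The reliable-life calculation above can be sketched with the standard normal inverse CDF from the Python standard library (function name ours):

```python
import math
from statistics import NormalDist

def tnt_lognormal_reliable_life(R, sigma, B, C, n, U, V):
    """Mission time t_R with reliability R for the T-NT lognormal model."""
    mu = math.log(C) - n * math.log(U) + B / V
    z = NormalDist().inv_cdf(1.0 - R)   # z for unreliability F = 1 - R
    return math.exp(mu + z * sigma)     # t_R = e^(T_R')
```

At R = 0.5, z = 0 and the reliable life equals the median of the distribution.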
===Lognormal Failure Rate===&lt;br /&gt;
The T-NT lognormal failure rate is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (T,U,V)=\frac{f(T,U,V)}{R(T,U,V)}=\frac{\tfrac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (C)+n\ln (U)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}}{\int_{{{T}&#039;}}^{\infty }\tfrac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (C)+n\ln (U)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
The complete T-NT lognormal log-likelihood function is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{1}{{{\sigma }_{{{T}&#039;}}}{{T}_{i}}}{{\phi }_{pdf}}\left( \frac{\ln \left( {{T}_{i}} \right)-\ln (C)+n\ln ({{U}_{i}})-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right] \text{ }+\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\ln \left[ 1-\Phi \left( \frac{\ln \left( T_{i}^{\prime } \right)-\ln (C)+n\ln ({{U}_{i}})-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right] +\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Ri}^{\prime \prime }=\frac{\ln T_{Ri}^{\prime \prime }-\ln C+n\ln U_{i}^{\prime \prime }-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Li}^{\prime \prime }=\frac{\ln T_{Li}^{\prime \prime }-\ln C+n\ln U_{i}^{\prime \prime }-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\phi \left( x \right)=\frac{1}{\sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( x \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; is the standard deviation of the natural logarithm of the times-to-failure (unknown, the first of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the first T-NT parameter (unknown, the second of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is the second T-NT parameter (unknown, the third of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the third T-NT parameter (unknown, the fourth of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level for the first stress type (i.e., temperature) of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level for the second stress type (i.e., non-thermal) of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C},\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{n}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial C}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===T-NT Lognormal Example===&lt;br /&gt;
{{:Temperature-Nonthermal_Relationship_Example}}&lt;br /&gt;
&lt;br /&gt;
= T-NT Confidence Bounds =&lt;br /&gt;
&lt;br /&gt;
==Approximate Confidence Bounds for the T-NT Exponential==&lt;br /&gt;
===Confidence Bounds on the Mean Life===&lt;br /&gt;
The mean life for the T-NT model is given by setting &amp;lt;math&amp;gt;m=L(U,V)\,\!&amp;lt;/math&amp;gt;. The upper &amp;lt;math&amp;gt;({{m}_{U}})\,\!&amp;lt;/math&amp;gt; and lower &amp;lt;math&amp;gt;({{m}_{L}})\,\!&amp;lt;/math&amp;gt; bounds on the mean life (based on the ML estimate of the mean life) are estimated by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{U}}=\widehat{m}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{L}}=\widehat{m}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is defined by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt; is the confidence level, then &amp;lt;math&amp;gt;\alpha =\tfrac{1-\delta }{2}\,\!&amp;lt;/math&amp;gt; for the two-sided bounds, and &amp;lt;math&amp;gt;\alpha =1-\delta \,\!&amp;lt;/math&amp;gt; for the one-sided bounds. The variance of &amp;lt;math&amp;gt;\widehat{m}\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
&lt;br /&gt;
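The definition of the critical value above can be sketched with the Python standard library (function name and defaults are ours):

```python
from statistics import NormalDist

def k_alpha(delta, two_sided=True):
    """Standard normal quantile K_alpha for confidence level delta:
    alpha = (1 - delta)/2 for two-sided bounds, 1 - delta for one-sided."""
    alpha = (1.0 - delta) / 2.0 if two_sided else 1.0 - delta
    return NormalDist().inv_cdf(1.0 - alpha)
```

For example, a two-sided 90% confidence level and a one-sided 95% confidence level both give K_alpha ≈ 1.645.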
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{m})= &amp;amp; {{\left( \frac{\partial m}{\partial B} \right)}^{2}}Var(\widehat{B})+{{\left( \frac{\partial m}{\partial C} \right)}^{2}}Var(\widehat{C}) +{{\left( \frac{\partial m}{\partial n} \right)}^{2}}Var(\widehat{n}) +2\left( \frac{\partial m}{\partial B} \right)\left( \frac{\partial m}{\partial C} \right)Cov(\widehat{B},\widehat{C}) \\ &lt;br /&gt;
 &amp;amp;  +2\left( \frac{\partial m}{\partial B} \right)\left( \frac{\partial m}{\partial n} \right)Cov(\widehat{B},\widehat{n}) +2\left( \frac{\partial m}{\partial C} \right)\left( \frac{\partial m}{\partial n} \right)Cov(\widehat{C},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{m})= &amp;amp; \frac{1}{{{U}^{2\widehat{n}}}}{{e}^{2\tfrac{\widehat{B}}{V}}}[\frac{{{\widehat{C}}^{2}}}{{{V}^{2}}}Var(\widehat{B})+Var(\widehat{C}) +{{\widehat{C}}^{2}}{{\left( \ln (U) \right)}^{2}}Var(\widehat{n}) +\frac{2\widehat{C}}{V}Cov(\widehat{B},\widehat{C}) \\ &lt;br /&gt;
 &amp;amp;  -\frac{2{{\widehat{C}}^{2}}\ln (U)}{V}Cov(\widehat{B},\widehat{n}) -2\widehat{C}\ln (U)Cov(\widehat{C},\widehat{n})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariance of &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{n})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{B}) &amp;amp; Cov(\widehat{B},\widehat{C}) &amp;amp; Cov(\widehat{B},\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{C},\widehat{B}) &amp;amp; Var(\widehat{C}) &amp;amp; Cov(\widehat{C},\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{n},\widehat{B}) &amp;amp; Cov(\widehat{n},\widehat{C}) &amp;amp; Var(\widehat{n})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ F \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F=\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{C}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{n}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right].\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Reliability===&lt;br /&gt;
The bounds on reliability at a given time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, are estimated by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{U}}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{L}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time for a given reliability (ML estimate of time) are estimated by first solving the reliability function with respect to time:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{T}=-\widehat{m}\cdot \ln (R)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding confidence bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; -{{m}_{U}}\cdot \ln (R) \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; -{{m}_{L}}\cdot \ln (R)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
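Given the mean-life bounds, the reliability and time bounds above follow directly; a minimal sketch (function names ours):

```python
import math

def exp_reliability_bounds(T, m_L, m_U):
    """(R_L, R_U) at time T from the bounds on the mean life."""
    return math.exp(-T / m_L), math.exp(-T / m_U)

def exp_time_bounds(R, m_L, m_U):
    """(T_L, T_U) for a given reliability R."""
    return -m_L * math.log(R), -m_U * math.log(R)
```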
==Approximate Confidence Bounds for the T-NT Weibull==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
Using the same approach as previously discussed, and noting that &amp;lt;math&amp;gt;\widehat{\beta }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; are positive parameters: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\beta }_{U}}= &amp;amp; \widehat{\beta }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{\beta }_{L}}= &amp;amp; \widehat{\beta }\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{B}_{U}}= &amp;amp; \widehat{B}+{{K}_{\alpha }}\sqrt{Var(\widehat{B})} \\ &lt;br /&gt;
 &amp;amp; {{B}_{L}}= &amp;amp; \widehat{B}-{{K}_{\alpha }}\sqrt{Var(\widehat{B})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{C}_{U}}= &amp;amp; \widehat{C}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}} \\ &lt;br /&gt;
 &amp;amp; {{C}_{L}}= &amp;amp; \widehat{C}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{n}_{U}}= &amp;amp; \widehat{n}+{{K}_{\alpha }}\sqrt{Var(\widehat{n})} \\ &lt;br /&gt;
 &amp;amp; {{n}_{L}}= &amp;amp; \widehat{n}-{{K}_{\alpha }}\sqrt{Var(\widehat{n})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;C,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are estimated from the Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{\beta },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{n})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{\beta }) &amp;amp; Cov(\widehat{\beta },\widehat{B}) &amp;amp; Cov(\widehat{\beta },\widehat{C}) &amp;amp; Cov(\widehat{\beta },\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{B},\widehat{\beta }) &amp;amp; Var(\widehat{B}) &amp;amp; Cov(\widehat{B},\widehat{C}) &amp;amp; Cov(\widehat{B},\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{C},\widehat{\beta }) &amp;amp; Cov(\widehat{C},\widehat{B}) &amp;amp; Var(\widehat{C}) &amp;amp; Cov(\widehat{C},\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{n},\widehat{\beta }) &amp;amp; Cov(\widehat{n},\widehat{B}) &amp;amp; Cov(\widehat{n},\widehat{C}) &amp;amp; Var(\widehat{n})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ F \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F=\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\beta }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{C}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{n}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Reliability===&lt;br /&gt;
The reliability function (ML estimate) for the T-NT Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,U,V)={{e}^{-{{\left( \tfrac{{{U}^{\widehat{n}}}{{e}^{-\tfrac{\widehat{B}}{V}}}}{\widehat{C}}T \right)}^{\widehat{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,U,V)={{e}^{-{{e}^{\ln \left[ {{\left( \tfrac{{{U}^{\widehat{n}}}{{e}^{-\tfrac{\widehat{B}}{V}}}}{\widehat{C}}T \right)}^{\widehat{\beta }}} \right]}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\ln \left[ {{\left( \frac{{{U}^{\widehat{n}}}{{e}^{-\tfrac{\widehat{B}}{V}}}}{\widehat{C}}T \right)}^{\widehat{\beta }}} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\widehat{\beta }\left[ \ln (T)-\frac{\widehat{B}}{V}-\ln (\widehat{C})+\widehat{n}\ln (U) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,U,V)={{e}^{-{{e}^{\widehat{u}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to find the upper and lower bounds on &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial B} \right)}^{2}}Var(\widehat{B}) +{{\left( \frac{\partial \widehat{u}}{\partial C} \right)}^{2}}Var(\widehat{C})+{{\left( \frac{\partial \widehat{u}}{\partial n} \right)}^{2}}Var(\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{\beta },\widehat{B}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{\beta },\widehat{C}) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{\beta },\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial B} \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{B},\widehat{C}) +2\left( \frac{\partial \widehat{u}}{\partial B} \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{B},\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial C} \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{C},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; {{\left( \frac{\widehat{u}}{\widehat{\beta }} \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\widehat{\beta }}{V} \right)}^{2}}Var(\widehat{B}) +{{\left( \frac{\widehat{\beta }}{\widehat{C}} \right)}^{2}}Var(\widehat{C})+{{\left( \widehat{\beta }\ln (U) \right)}^{2}}Var(\widehat{n}) -\frac{2\widehat{u}}{V}Cov(\widehat{\beta },\widehat{B})-\frac{2\widehat{u}}{\widehat{C}}Cov(\widehat{\beta },\widehat{C}) \\ &lt;br /&gt;
 &amp;amp; +2\widehat{u}\ln (U)Cov(\widehat{\beta },\widehat{n}) +\frac{2{{\widehat{\beta }}^{2}}}{\widehat{C}V}Cov(\widehat{B},\widehat{C})-\frac{2{{\widehat{\beta }}^{2}}\ln (U)}{V}Cov(\widehat{B},\widehat{n}) -\frac{2{{\widehat{\beta }}^{2}}\ln (U)}{\widehat{C}}Cov(\widehat{C},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{L}} \right)}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{U}} \right)}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
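As a quick numerical sketch (the parameter values below are hypothetical and are not taken from any example in this chapter), the reliability bounds follow directly from &amp;lt;math&amp;gt;\widehat{u}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Var(\widehat{u})\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def tnt_weibull_u(t, U, V, beta, B, C, n):
    """u = beta * [ln(t) - B/V - ln(C) + n*ln(U)] for the T-NT Weibull model."""
    return beta * (math.log(t) - B / V - math.log(C) + n * math.log(U))

def reliability_bounds(u_hat, var_u, K_alpha):
    """R = exp(-exp(u)); the upper bound on R uses the lower bound on u."""
    u_upper = u_hat + K_alpha * math.sqrt(var_u)
    u_lower = u_hat - K_alpha * math.sqrt(var_u)
    return math.exp(-math.exp(u_upper)), math.exp(-math.exp(u_lower))

# Hypothetical estimates, for illustration only
u = tnt_weibull_u(t=500.0, U=12.0, V=353.0, beta=1.5, B=2000.0, C=300.0, n=1.2)
R_lo, R_hi = reliability_bounds(u, var_u=0.04, K_alpha=1.96)  # 95% two-sided
```
&lt;br /&gt;
The upper bound on reliability uses the lower bound on &amp;lt;math&amp;gt;\widehat{u}\,\!&amp;lt;/math&amp;gt; because &amp;lt;math&amp;gt;{{e}^{-{{e}^{u}}}}\,\!&amp;lt;/math&amp;gt; is decreasing in &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;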
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time (ML estimate of time) for a given reliability are estimated by first solving the reliability function with respect to time as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \ln (R)=\ &amp;amp; -{{\left( \frac{{{U}^{\widehat{n}}}{{e}^{-\tfrac{\widehat{B}}{V}}}}{\widehat{C}}\widehat{T} \right)}^{\widehat{\beta }}} \\ &lt;br /&gt;
 \ln (-\ln (R))=\ &amp;amp; \widehat{\beta }\left( \ln (\widehat{T})-\frac{\widehat{B}}{V}-\ln (\widehat{C})+\widehat{n}\ln (U) \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\frac{1}{\widehat{\beta }}\ln (-\ln (R))+\frac{\widehat{B}}{V}+\ln (\widehat{C})-\widehat{n}\ln (U)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\widehat{u}=\ln \widehat{T}.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial B} \right)}^{2}}Var(\widehat{B}) +{{\left( \frac{\partial \widehat{u}}{\partial C} \right)}^{2}}Var(\widehat{C})+{{\left( \frac{\partial \widehat{u}}{\partial n} \right)}^{2}}Var(\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{\beta },\widehat{B}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{\beta },\widehat{C}) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{\beta },\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial B} \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{B},\widehat{C}) +2\left( \frac{\partial \widehat{u}}{\partial B} \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{B},\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial C} \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{C},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; \frac{1}{{{\widehat{\beta }}^{4}}}{{\left[ \ln (-\ln (R)) \right]}^{2}}Var(\widehat{\beta }) +\frac{1}{{{V}^{2}}}Var(\widehat{B})+\frac{1}{{{\widehat{C}}^{2}}}Var(\widehat{C})+{{\left[ \ln (U) \right]}^{2}}Var(\widehat{n}) -\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}V}Cov(\widehat{\beta },\widehat{B}) -\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}\widehat{C}}Cov(\widehat{\beta },\widehat{C}) \\ &lt;br /&gt;
 &amp;amp; +\frac{2\ln (-\ln (R))\ln (U)}{{{\widehat{\beta }}^{2}}}Cov(\widehat{\beta },\widehat{n}) +\frac{2}{\widehat{C}V}Cov(\widehat{B},\widehat{C}) -\frac{2\ln (U)}{V}Cov(\widehat{B},\widehat{n})-\frac{2\ln (U)}{\widehat{C}}Cov(\widehat{C},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on time are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{{{u}_{U}}}} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{{{u}_{L}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Approximate Confidence Bounds for the T-NT Lognormal==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
Since the standard deviation, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; are positive parameters, &amp;lt;math&amp;gt;\ln ({{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\ln (\widehat{C})\,\!&amp;lt;/math&amp;gt; are treated as normally distributed and the bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{\sigma }_{U}}=\ &amp;amp; {{\widehat{\sigma }}_{{{T}&#039;}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}} &amp;amp;\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 {{\sigma }_{L}}=\ &amp;amp; \frac{{{\widehat{\sigma }}_{{{T}&#039;}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}} &amp;amp;\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{C}_{U}}= &amp;amp; \widehat{C}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}} &amp;amp;\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 {{C}_{L}}= &amp;amp; \frac{\widehat{C}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}}} &amp;amp;\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The lower and upper bounds on &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{B}_{U}}= &amp;amp; \widehat{B}+{{K}_{\alpha }}\sqrt{Var(\widehat{B})}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; {{B}_{L}}= &amp;amp; \widehat{B}-{{K}_{\alpha }}\sqrt{Var(\widehat{B})}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{n}_{U}}= &amp;amp; \widehat{n}+{{K}_{\alpha }}\sqrt{Var(\widehat{n})}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; {{n}_{L}}= &amp;amp; \widehat{n}-{{K}_{\alpha }}\sqrt{Var(\widehat{n})}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
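Because the bounds on the positive parameters are computed on the log scale, they are multiplicative and geometrically symmetric about the estimate. A minimal sketch, with a hypothetical estimate and variance:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def positive_param_bounds(est, var, K_alpha):
    """Bounds for a positive parameter, treating ln(est) as normally distributed.
    Both bounds stay positive by construction."""
    w = math.exp(K_alpha * math.sqrt(var) / est)
    return est / w, est * w

# Hypothetical estimate of C with a variance taken from the Fisher matrix
C_lo, C_hi = positive_param_bounds(est=300.0, var=900.0, K_alpha=1.96)
```
&lt;br /&gt;
Note that &amp;lt;math&amp;gt;{{C}_{L}}\cdot {{C}_{U}}={{\widehat{C}}^{2}}\,\!&amp;lt;/math&amp;gt;, the geometric symmetry mentioned above.&lt;br /&gt;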
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;C,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{n}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left( \begin{matrix}&lt;br /&gt;
   Var\left( {{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{B} \right) &amp;amp; Var\left( \widehat{B} \right) &amp;amp; Cov\left( \widehat{B},\widehat{C} \right) &amp;amp; Cov\left( \widehat{B},\widehat{n} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{C} \right) &amp;amp; Cov\left( \widehat{C},\widehat{B} \right) &amp;amp; Var\left( \widehat{C} \right) &amp;amp; Cov\left( \widehat{C},\widehat{n} \right)  \\&lt;br /&gt;
   Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{n},\widehat{B} \right) &amp;amp; Cov\left( \widehat{n},\widehat{C} \right) &amp;amp; Var\left( \widehat{n} \right)  \\&lt;br /&gt;
\end{matrix} \right)={{\left[ F \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F=\left( \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma _{{{T}&#039;}}^{2}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{C}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{n}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bounds on Reliability===&lt;br /&gt;
The reliability of the lognormal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({T}&#039;,U,V;B,C,n,{{\sigma }_{{{T}&#039;}}})=\int_{{{T}&#039;}}^{\infty }\frac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (\widehat{C})+\widehat{n}\ln ({{U}_{i}})-\tfrac{\widehat{B}}{{{V}_{i}}}}{{{\widehat{\sigma }}_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\widehat{z}(t,U,V;B,C,n,{{\sigma }_{T}})=\tfrac{t-\ln (\widehat{C})+\widehat{n}\ln (U)-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}},\,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;\tfrac{d\widehat{z}}{dt}=\tfrac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}}.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
For &amp;lt;math&amp;gt;t={T}&#039;\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{z}=\tfrac{{T}&#039;-\ln (\widehat{C})+\widehat{n}\ln (U)-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;, and for &amp;lt;math&amp;gt;t=\infty ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{z}=\infty .\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The above equation then becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(\widehat{z})=\int_{\widehat{z}({T}&#039;,U,V)}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The bounds on &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{z}_{U}}= &amp;amp; \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ &lt;br /&gt;
 &amp;amp; {{z}_{L}}= &amp;amp; \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{z})= &amp;amp; \left( \frac{\partial \widehat{z}}{\partial B} \right)_{\widehat{B}}^{2}Var(\widehat{B})+\left( \frac{\partial \widehat{z}}{\partial C} \right)_{\widehat{C}}^{2}Var(\widehat{C}) +\left( \frac{\partial \widehat{z}}{\partial n} \right)_{\widehat{n}}^{2}Var(\widehat{n})+\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)_{{{\widehat{\sigma }}_{{{T}&#039;}}}}^{2}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2{{\left( \frac{\partial \widehat{z}}{\partial B} \right)}_{\widehat{B}}}{{\left( \frac{\partial \widehat{z}}{\partial C} \right)}_{\widehat{C}}}Cov\left( \widehat{B},\widehat{C} \right) \\ &lt;br /&gt;
 &amp;amp; +2{{\left( \frac{\partial \widehat{z}}{\partial B} \right)}_{\widehat{B}}}{{\left( \frac{\partial \widehat{z}}{\partial n} \right)}_{\widehat{n}}}Cov\left( \widehat{B},\widehat{n} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial C} \right)}_{\widehat{C}}}{{\left( \frac{\partial \widehat{z}}{\partial n} \right)}_{\widehat{n}}}Cov\left( \widehat{C},\widehat{n} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial B} \right)}_{\widehat{B}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) \\&lt;br /&gt;
&amp;amp; +2{{\left( \frac{\partial \widehat{z}}{\partial C} \right)}_{\widehat{C}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial n} \right)}_{\widehat{n}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{z})= &amp;amp; \frac{1}{\widehat{\sigma }_{{{T}&#039;}}^{2}}[\frac{1}{{{V}^{2}}}Var(\widehat{B})+\frac{1}{{{C}^{2}}}Var(\widehat{C})+{{\left[ \ln (U) \right]}^{2}}Var(\widehat{n})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2}{C\cdot V}Cov\left( \widehat{B},\widehat{C} \right)-\frac{2\ln (U)}{V}Cov\left( \widehat{B},\widehat{n} \right) \\ &lt;br /&gt;
 &amp;amp; -\frac{2\ln (U)}{C}Cov\left( \widehat{C},\widehat{n} \right)+\frac{2\widehat{z}}{V}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +\frac{2\widehat{z}}{C}Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)-2\widehat{z}\ln (U)Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
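Numerically, the standard normal integral can be evaluated through the complementary error function. A sketch with a hypothetical &amp;lt;math&amp;gt;\widehat{z}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Var(\widehat{z})\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def std_normal_sf(z):
    """Survival function of the standard normal: the integral of phi from z to infinity."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def lognormal_reliability_bounds(z_hat, var_z, K_alpha):
    """The upper bound on R uses the lower bound on z, and vice versa."""
    z_upper = z_hat + K_alpha * math.sqrt(var_z)
    z_lower = z_hat - K_alpha * math.sqrt(var_z)
    return std_normal_sf(z_upper), std_normal_sf(z_lower)

# Hypothetical z estimate and variance, for illustration only
R_lo, R_hi = lognormal_reliability_bounds(z_hat=-1.0, var_z=0.09, K_alpha=1.96)
```
&lt;br /&gt;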
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds around time for a given lognormal percentile (unreliability) are estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{T}&#039;(U,V;\widehat{B},\widehat{C},\widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}})=\ln (\widehat{C})-\widehat{n}\ln (U)+\frac{\widehat{B}}{V}+z\cdot {{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {T}&#039;(U,V;\widehat{B},\widehat{C},\widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}})=\ &amp;amp; \ln (T) \\ &lt;br /&gt;
 z=\ &amp;amp; {{\Phi }^{-1}}\left[ F({T}&#039;) \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;,U,V)}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to calculate the variance of &amp;lt;math&amp;gt;{T}&#039;(U,V;\widehat{B},\widehat{C},\widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var({T}&#039;)= &amp;amp; {{\left( \frac{\partial {T}&#039;}{\partial B} \right)}^{2}}Var(\widehat{B})+{{\left( \frac{\partial {T}&#039;}{\partial C} \right)}^{2}}Var(\widehat{C}) +{{\left( \frac{\partial {T}&#039;}{\partial n} \right)}^{2}}Var(\widehat{n})+{{\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2\left( \frac{\partial {T}&#039;}{\partial B} \right)\left( \frac{\partial {T}&#039;}{\partial C} \right)Cov\left( \widehat{B},\widehat{C} \right) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial {T}&#039;}{\partial B} \right)\left( \frac{\partial {T}&#039;}{\partial n} \right)Cov\left( \widehat{B},\widehat{n} \right) +2\left( \frac{\partial {T}&#039;}{\partial C} \right)\left( \frac{\partial {T}&#039;}{\partial n} \right)Cov\left( \widehat{C},\widehat{n} \right) +2\left( \frac{\partial {T}&#039;}{\partial B} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial {T}&#039;}{\partial C} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2\left( \frac{\partial {T}&#039;}{\partial n} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var({T}&#039;)= &amp;amp; \frac{1}{{{V}^{2}}}Var(\widehat{B})+\frac{1}{{{\widehat{C}}^{2}}}Var(\widehat{C})+{{\left[ \ln (U) \right]}^{2}}Var(\widehat{n})+{{z}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2}{\widehat{C}V}Cov\left( \widehat{B},\widehat{C} \right)-\frac{2\ln (U)}{V}Cov\left( \widehat{B},\widehat{n} \right) \\ &lt;br /&gt;
 &amp;amp; -\frac{2\ln (U)}{\widehat{C}}Cov\left( \widehat{C},\widehat{n} \right)+\frac{2z}{V}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)+\frac{2z}{\widehat{C}}Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)-2z\ln (U)Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; T_{U}^{\prime }= &amp;amp; \ln {{T}_{U}}={T}&#039;+{{K}_{\alpha }}\sqrt{Var({T}&#039;)} \\ &lt;br /&gt;
 &amp;amp; T_{L}^{\prime }= &amp;amp; \ln {{T}_{L}}={T}&#039;-{{K}_{\alpha }}\sqrt{Var({T}&#039;)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for &amp;lt;math&amp;gt;{{T}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{T}_{L}}\,\!&amp;lt;/math&amp;gt; yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{T_{U}^{\prime }}}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{T_{L}^{\prime }}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Temperature-NonThermal_Relationship&amp;diff=64933</id>
		<title>Temperature-NonThermal Relationship</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Temperature-NonThermal_Relationship&amp;diff=64933"/>
		<updated>2017-02-08T21:14:13Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Conditional Reliability */ changed R(T,t,V,U) to R((t|T),V,U)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK|8}}&lt;br /&gt;
When temperature and a second non-thermal stress (e.g., voltage) are the accelerated stresses of a test, then the Arrhenius and the inverse power law relationships can be combined to yield the Temperature-NonThermal (T-NT) relationship. This relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(U,V)=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; is the non-thermal stress (i.e., voltage, vibration, etc.)&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; is the temperature (&#039;&#039;&#039;in absolute units&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are the parameters to be determined.&lt;br /&gt;
&lt;br /&gt;
The T-NT relationship can be linearized and plotted on a Life vs. Stress plot. The relationship is linearized by taking the natural logarithm of both sides in the T-NT relationship or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln (L(U,V))=\ln (C)-n\ln (U)+\frac{B}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since life is now a function of two stresses, a Life vs. Stress plot can only be obtained by keeping one of the two stresses constant and varying the other one. Doing so will yield the straight line described by the above equation, where the term for the stress which is kept at a fixed value becomes another constant (in addition to the &amp;lt;math&amp;gt;\ln (C)\,\!&amp;lt;/math&amp;gt; constant).&lt;br /&gt;
When the non-thermal stress is kept constant, then the linearized T-NT relationship becomes: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln (L(V))=const.+\frac{B}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the Arrhenius equation and it is plotted on a log-reciprocal scale.&lt;br /&gt;
When the thermal stress is kept constant, then the linearized T-NT relationship becomes: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\ln (L(U))=const.-n\ln (U)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the inverse power law equation and it is plotted on a log-log scale.&lt;br /&gt;
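The two linearized forms can be checked numerically: at fixed non-thermal stress, the slope of &amp;lt;math&amp;gt;\ln (L)\,\!&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;1/V\,\!&amp;lt;/math&amp;gt; recovers &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and at fixed temperature the slope against &amp;lt;math&amp;gt;\ln (U)\,\!&amp;lt;/math&amp;gt; recovers &amp;lt;math&amp;gt;-n\,\!&amp;lt;/math&amp;gt; (a sketch with hypothetical parameter values):&lt;br /&gt;
&lt;br /&gt;
```python
import math

def tnt_life(U, V, B, C, n):
    """L(U, V) = C / (U^n * exp(-B/V))"""
    return C / (U ** n * math.exp(-B / V))

# Hypothetical parameter values, for illustration only
B, C, n = 2000.0, 300.0, 1.2
lnL = lambda U, V: math.log(tnt_life(U, V, B, C, n))

# Fixed non-thermal stress: ln(L) vs 1/V is a line with slope B
slope_T = (lnL(12.0, 400.0) - lnL(12.0, 350.0)) / (1.0 / 400.0 - 1.0 / 350.0)

# Fixed temperature: ln(L) vs ln(U) is a line with slope -n
slope_U = (lnL(24.0, 353.0) - lnL(12.0, 353.0)) / (math.log(24.0) - math.log(12.0))
```
&lt;br /&gt;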
In the next two figures, data obtained from a temperature and voltage test were analyzed and plotted on a log-reciprocal scale. In the first figure, life is plotted versus temperature, with voltage held at a fixed value. In the second figure, life is plotted versus voltage, with temperature held at a fixed value.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA10.1.png|400px|center|Life vs. Temperature (Arrhenius plot) at a fixed voltage level.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA10.2.png|400px|center|Life vs. Voltage plot at a fixed temperature level.]]&lt;br /&gt;
&lt;br /&gt;
===A look at the Parameters &#039;&#039;B&#039;&#039; and &#039;&#039;n&#039;&#039;===&lt;br /&gt;
Depending on which stress type is kept constant, it can be seen from the linearized T-NT relationship that either the parameter &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; or the parameter &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the slope of the resulting line. If, for example, the non-thermal stress is kept constant then &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the slope of the life line in a Life vs. Temperature plot. The steeper the slope, the greater the dependency of the product&#039;s life on temperature. In other words, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is a measure of the effect that temperature has on the life and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is a measure of the effect that the non-thermal stress has on the life. The larger the value of &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; the higher the dependency of the life on the temperature. Similarly, the larger the value of &amp;lt;math&amp;gt;n,\,\!&amp;lt;/math&amp;gt; the higher the dependency of the life on the non-thermal stress.&lt;br /&gt;
&lt;br /&gt;
===Acceleration Factor===&lt;br /&gt;
The acceleration factor for the T-NT relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{A}_{F}}=\frac{{{L}_{USE}}}{{{L}_{Accelerated}}}=\frac{\tfrac{C}{U_{u}^{n}}{{e}^{\tfrac{B}{{{V}_{u}}}}}}{\tfrac{C}{U_{A}^{n}}{{e}^{\tfrac{B}{{{V}_{A}}}}}}={{\left( \frac{{{U}_{A}}}{{{U}_{u}}} \right)}^{n}}{{e}^{B\left( \tfrac{1}{{{V}_{u}}}-\tfrac{1}{{{V}_{A}}} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{L}_{USE}}\,\!&amp;lt;/math&amp;gt; is the life at use stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{L}_{Accelerated}}\,\!&amp;lt;/math&amp;gt; is the life at the accelerated stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{u}}\,\!&amp;lt;/math&amp;gt; is the use temperature level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{A}}\,\!&amp;lt;/math&amp;gt; is the accelerated temperature level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{A}}\,\!&amp;lt;/math&amp;gt; is the accelerated non-thermal level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{u}}\,\!&amp;lt;/math&amp;gt; is the use non-thermal level.&lt;br /&gt;
&lt;br /&gt;
The acceleration factor is plotted versus stress in the same manner used to create the Life vs. Stress plots. That is, one stress type is kept constant and the other is varied.&lt;br /&gt;
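The acceleration factor computation can be sketched numerically. The stress levels and parameter values below are hypothetical; note that &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; cancels out of the ratio:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def tnt_acceleration_factor(U_use, V_use, U_acc, V_acc, B, n):
    """A_F = (U_A / U_u)^n * exp(B * (1/V_u - 1/V_A)); the parameter C cancels."""
    return (U_acc / U_use) ** n * math.exp(B * (1.0 / V_use - 1.0 / V_acc))

# Hypothetical: use at 12 V and 328 K, accelerated at 24 V and 378 K
AF = tnt_acceleration_factor(U_use=12.0, V_use=328.0, U_acc=24.0, V_acc=378.0,
                             B=2000.0, n=1.5)
```
&lt;br /&gt;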
&lt;br /&gt;
[[Image:ALTA10.3.png|center|450px|Acceleration Factor vs. Temperature at a fixed voltage level.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA10.4.png|center|450px|Acceleration Factor vs. Voltage at a fixed temperature level.]]&lt;br /&gt;
&lt;br /&gt;
=T-NT Exponential=&lt;br /&gt;
By setting &amp;lt;math&amp;gt;m=L(U,V)\,\!&amp;lt;/math&amp;gt;, the exponential &#039;&#039;pdf&#039;&#039; becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,U,V)=\frac{{{U}^{n}}}{C}{{e}^{-\tfrac{B}{V}}}\cdot {{e}^{-\tfrac{{{U}^{n}}}{C}\left( {{e}^{-\tfrac{B}{V}}} \right)t}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==T-NT Exponential Statistical Properties Summary==&lt;br /&gt;
===Mean or MTTF===&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T},\,\!&amp;lt;/math&amp;gt; or Mean Time To Failure (MTTF) for the T-NT exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \overline{T}= &amp;amp; \int\limits_{0}^{\infty }t\cdot f(t,U,V)dt = &amp;amp; \int\limits_{0}^{\infty }t\cdot \frac{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}{{e}^{-\tfrac{t\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}}}dt = &amp;amp; \frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Median===&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; for the T-NT exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=\frac{1}{\lambda }0.693=0.693\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode===&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; for the T-NT exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Standard Deviation===&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, for the T-NT exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=\frac{1}{\lambda }=m=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===T-NT Exponential Reliability Function===&lt;br /&gt;
The T-NT exponential reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,U,V)={{e}^{-\tfrac{T\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function is the complement of the T-NT exponential cumulative distribution function or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,U,V)=1-Q(T,U,V)=1-\int_{0}^{T}f(T)dT\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,U,V)=1-\int_{0}^{T}\frac{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}{{e}^{-\tfrac{T\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}}}dT={{e}^{-\tfrac{T\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conditional Reliability===&lt;br /&gt;
The conditional reliability function for the T-NT exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),U,V)=\frac{R(T+t,U,V)}{R(T,U,V)}=\frac{{{e}^{-\lambda (T+t)}}}{{{e}^{-\lambda T}}}={{e}^{-\tfrac{t\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
&lt;br /&gt;
For the T-NT exponential model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({{t}_{R}},U,V)={{e}^{-\tfrac{{{t}_{R}}\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln [R({{t}_{R}},U,V)]=-\tfrac{{{t}_{R}}\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=-\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}\ln [R({{t}_{R}},U,V)]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
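The reliable life expression can be checked numerically: computing &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt; and substituting it back into the reliability function should recover the reliability goal. A minimal Python sketch, with illustrative (made-up) parameter values:&lt;br /&gt;

```python
import math

def tnt_exp_reliability(T, U, V, B, C, n):
    # R(T, U, V) = exp(-T * U^n * e^(-B/V) / C)
    return math.exp(-T * (U ** n) * math.exp(-B / V) / C)

def tnt_exp_reliable_life(R_goal, U, V, B, C, n):
    # t_R = -(C / (U^n * e^(-B/V))) * ln(R_goal)
    return -(C / ((U ** n) * math.exp(-B / V))) * math.log(R_goal)

# Round trip with illustrative values B = 1500, C = 250, n = 1.2:
t_R = tnt_exp_reliable_life(0.90, 2.0, 350.0, 1500.0, 250.0, 1.2)
```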
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
Substituting the T-NT relationship into the exponential log-likelihood equation yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{U_{i}^{n}}{C}{{e}^{-\tfrac{B}{{{V}_{i}}}}}\cdot {{e}^{-\tfrac{U_{i}^{n}}{C}\left( {{e}^{-\tfrac{B}{{{V}_{i}}}}} \right){{T}_{i}}}} \right] -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{U_{i}^{n}}{C}\left( {{e}^{-\tfrac{B}{{{V}_{i}}}}} \right)T_{i}^{\prime }+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-\tfrac{T_{Li}^{\prime \prime }}{C}U_{i}^{\prime \prime n}{{e}^{-\tfrac{B}{{{V}_{i}}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-\tfrac{T_{Ri}^{\prime \prime }}{C}U_{i}^{\prime \prime n}{{e}^{-\tfrac{B}{{{V}_{i}}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the T-NT parameter (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is the second T-NT parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the third T-NT parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the temperature level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{i}}\,\!&amp;lt;/math&amp;gt; is the non-thermal stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial C}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
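Actually maximizing &amp;lt;math&amp;gt;\Lambda \,\!&amp;lt;/math&amp;gt; requires a numerical optimizer, but the log-likelihood itself is straightforward to evaluate. The sketch below covers only the exact-failure and suspension terms (the interval term is omitted); the data tuples and parameter values are toy assumptions, not from the text:&lt;br /&gt;

```python
import math

def tnt_exp_log_likelihood(B, C, n, failures, suspensions):
    """Exact-failure and suspension terms of the T-NT exponential
    log-likelihood. failures / suspensions: lists of (time, U, V)."""
    ll = 0.0
    for t, u, v in failures:
        lam = (u ** n) * math.exp(-B / v) / C
        ll += math.log(lam) - lam * t        # ln f(t)
    for t, u, v in suspensions:
        lam = (u ** n) * math.exp(-B / v) / C
        ll += -lam * t                       # ln R(t)
    return ll
```

An optimizer would then search over &amp;lt;math&amp;gt;(B,C,n)\,\!&amp;lt;/math&amp;gt; for the maximum of this function.&lt;br /&gt;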
&lt;br /&gt;
=T-NT Weibull=&lt;br /&gt;
By setting &amp;lt;math&amp;gt;\eta =L(U,V)\,\!&amp;lt;/math&amp;gt;, the T-NT Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,U,V)=\frac{\beta {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}{{\left( \frac{t\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{t\cdot {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==T-NT Weibull Statistical Properties Summary==&lt;br /&gt;
===Mean or MTTF===&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T}\,\!&amp;lt;/math&amp;gt;, for the T-NT Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}\cdot \Gamma \left( \frac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Gamma \left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt; is the gamma function evaluated at the value of &amp;lt;math&amp;gt;\left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
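The mean can be computed with a standard gamma-function routine; a minimal Python sketch using the stdlib `math.gamma`, with illustrative parameter values:&lt;br /&gt;

```python
import math

def tnt_weibull_mean(U, V, B, C, n, beta):
    """MTTF of the T-NT Weibull model at stress level (U, V)."""
    eta = C / ((U ** n) * math.exp(-B / V))  # Weibull scale at (U, V)
    return eta * math.gamma(1.0 / beta + 1.0)
```

With &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\Gamma (2)=1\,\!&amp;lt;/math&amp;gt; and the mean reduces to the T-NT exponential mean life.&lt;br /&gt;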
&lt;br /&gt;
===Median===&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; for the T-NT Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{{\left( \ln 2 \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode===&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; for the T-NT Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{{\left( 1-\frac{1}{\beta } \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Standard Deviation===&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}},\,\!&amp;lt;/math&amp;gt; for the T-NT Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}\cdot \sqrt{\Gamma \left( \frac{2}{\beta }+1 \right)-{{\left( \Gamma \left( \frac{1}{\beta }+1 \right) \right)}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===T-NT Weibull Reliability Function===&lt;br /&gt;
The T-NT Weibull reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,U,V)={{e}^{-{{\left( \tfrac{T{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conditional Reliability Function===&lt;br /&gt;
The T-NT Weibull conditional reliability function at a specified stress level is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,t,U,V)=\frac{R(T+t,U,V)}{R(T,U,V)}=\frac{{{e}^{-{{\left( \tfrac{\left( T+t \right){{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta }}}}}{{{e}^{-{{\left( \tfrac{T{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,t,U,V)={{e}^{-\left[ {{\left( \tfrac{\left( T+t \right){{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta }}-{{\left( \tfrac{T{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta }} \right]}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
For the T-NT Weibull model, the reliable life, &amp;lt;math&amp;gt;{{T}_{R}}\,\!&amp;lt;/math&amp;gt;, of a unit for a specified reliability and starting the mission at age zero is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{T}_{R}}=\frac{C}{{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{{\left\{ -\ln \left[ R\left( {{T}_{R}},U,V \right) \right] \right\}}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
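As with the exponential case, the Weibull reliable life can be verified by a round trip through the reliability function. A minimal Python sketch with illustrative (made-up) parameter values:&lt;br /&gt;

```python
import math

def tnt_weibull_reliability(T, U, V, B, C, n, beta):
    eta = C / ((U ** n) * math.exp(-B / V))
    return math.exp(-((T / eta) ** beta))

def tnt_weibull_reliable_life(R_goal, U, V, B, C, n, beta):
    # T_R = eta * (-ln R)^(1/beta)
    eta = C / ((U ** n) * math.exp(-B / V))
    return eta * (-math.log(R_goal)) ** (1.0 / beta)

T_R = tnt_weibull_reliable_life(0.90, 2.0, 350.0, 1500.0, 250.0, 1.2, 2.0)
```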
&lt;br /&gt;
===T-NT Weibull Failure Rate Function===&lt;br /&gt;
The T-NT Weibull failure rate function, &amp;lt;math&amp;gt;\lambda (T)\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda \left( T,U,V \right)=\frac{f\left( T,U,V \right)}{R\left( T,U,V \right)}=\frac{\beta {{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C}{{\left( \frac{T{{U}^{n}}{{e}^{-\tfrac{B}{V}}}}{C} \right)}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
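Writing &amp;lt;math&amp;gt;\eta =C/({{U}^{n}}{{e}^{-B/V}})\,\!&amp;lt;/math&amp;gt;, the failure rate is &amp;lt;math&amp;gt;(\beta /\eta ){{(T/\eta )}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;: constant for &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt; and increasing in &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;\beta &amp;gt;1\,\!&amp;lt;/math&amp;gt;. A minimal Python sketch with illustrative parameter values:&lt;br /&gt;

```python
import math

def tnt_weibull_failure_rate(T, U, V, B, C, n, beta):
    eta = C / ((U ** n) * math.exp(-B / V))
    return (beta / eta) * (T / eta) ** (beta - 1.0)
```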
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
Substituting the T-NT relationship into the Weibull log-likelihood function yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{\beta U_{i}^{n}{{e}^{-\tfrac{B}{{{V}_{i}}}}}}{C}{{\left( \frac{U_{i}^{n}{{e}^{-\tfrac{B}{{{V}_{i}}}}}}{C}{{T}_{i}} \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{U_{i}^{n}{{e}^{-\tfrac{B}{{{V}_{i}}}}}}{C}{{T}_{i}} \right)}^{\beta }}}} \right] -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( \frac{U_{i}^{n}{{e}^{-\tfrac{B}{{{V}_{i}}}}}}{C}T_{i}^{\prime } \right)}^{\beta }}+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-{{\left( \tfrac{T_{Li}^{\prime \prime }}{C}U_{i}^{\prime \prime n}{{e}^{-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-{{\left( \tfrac{T_{Ri}^{\prime \prime }}{C}U_{i}^{\prime \prime n}{{e}^{-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter (unknown, the first of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the first T-NT parameter (unknown, the second of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is the second T-NT parameter (unknown, the third of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the third T-NT parameter (unknown, the fourth of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the temperature level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{i}}\,\!&amp;lt;/math&amp;gt; is the non-thermal stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;C,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial C}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \beta }=0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=T-NT Lognormal=&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Temperature-Nonthermal_Relationship_Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the lognormal distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\overline{{{T}&#039;}}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{T}&#039;=\ln (T)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T=\,\!&amp;lt;/math&amp;gt; times-to-failure.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\overline{{{T}&#039;}}=\,\!&amp;lt;/math&amp;gt; mean of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\,\!&amp;lt;/math&amp;gt; standard deviation of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
The median of the lognormal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}={{e}^{{{\overline{T}}^{\prime }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The T-NT lognormal model &#039;&#039;pdf&#039;&#039; can be obtained by setting &amp;lt;math&amp;gt;\breve{T}=L(U,V)\,\!&amp;lt;/math&amp;gt;. Therefore: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=L(U,V)=\frac{C}{{{U}^{n}}}{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{e}^{{{\overline{T}}^{\prime }}}}=\frac{C}{{{U}^{n}}}{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\overline{T}}^{\prime }}=\ln (C)-n\ln (U)+\frac{B}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting the above equation into the lognormal &#039;&#039;pdf&#039;&#039; yields the T-NT lognormal model &#039;&#039;pdf&#039;&#039; or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T,U,V)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (C)+n\ln (U)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
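The derivation above pins the log-mean to the stress level: &amp;lt;math&amp;gt;{{\overline{T}}^{\prime }}=\ln (C)-n\ln (U)+B/V\,\!&amp;lt;/math&amp;gt;, so the model median is &amp;lt;math&amp;gt;{{e}^{{{\overline{T}}^{\prime }}}}\,\!&amp;lt;/math&amp;gt;. This identity is easy to confirm numerically; the values below are illustrative assumptions only:&lt;br /&gt;

```python
import math

def tnt_lognormal_log_median(U, V, B, C, n):
    # T-bar-prime = ln(C) - n*ln(U) + B/V
    return math.log(C) - n * math.log(U) + B / V

# exp(T-bar-prime) should equal (C / U^n) * e^(B/V), the model median
lm = tnt_lognormal_log_median(2.0, 348.0, 1500.0, 250.0, 1.2)
```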
&lt;br /&gt;
==T-NT Lognormal Statistical Properties Summary==&lt;br /&gt;
===The Mean===&lt;br /&gt;
The mean life of the T-NT lognormal model (mean of the times-to-failure), &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \bar{T}= &amp;amp; {{e}^{\bar{{T}&#039;}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}} = &amp;amp; {{e}^{\ln (C)-n\ln (U)+\tfrac{B}{V}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The mean of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\bar{T}}^{^{\prime }}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{T}}^{\prime }}=\ln \left( {\bar{T}} \right)-\frac{1}{2}\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===The Standard Deviation===&lt;br /&gt;
The standard deviation of the T-NT lognormal model (standard deviation of the times-to-failure), &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\sigma }_{T}}= &amp;amp; \sqrt{\left( {{e}^{2\bar{{T}&#039;}+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)} = &amp;amp; \sqrt{\left( {{e}^{2\left( \ln (C)-n\ln (U)+\tfrac{B}{V} \right)+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The standard deviation of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\sqrt{\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
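The two conversion formulas above map the linear-domain pair &amp;lt;math&amp;gt;(\bar{T},{{\sigma }_{T}})\,\!&amp;lt;/math&amp;gt; to the log-domain pair &amp;lt;math&amp;gt;({{\bar{T}}^{\prime }},{{\sigma }_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; and back. A minimal Python sketch of the conversion, with a round-trip check; the function name and numbers are illustrative:&lt;br /&gt;

```python
import math

def lognormal_log_params(T_bar, sigma_T):
    """Mean and std of ln(T) from the mean and std of T (lognormal)."""
    sigma_Tp = math.sqrt(math.log(sigma_T ** 2 / T_bar ** 2 + 1.0))
    T_bar_p = math.log(T_bar) - 0.5 * sigma_Tp ** 2
    return T_bar_p, sigma_Tp

Tp, s = lognormal_log_params(1000.0, 300.0)
```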
&lt;br /&gt;
===The Mode===&lt;br /&gt;
The mode of the T-NT lognormal model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \tilde{T}= &amp;amp; {{e}^{{{\overline{T}}^{\prime }}-\sigma _{{{T}&#039;}}^{2}}} = &amp;amp; {{e}^{\ln (C)-n\ln (U)+\tfrac{B}{V}-\sigma _{{{T}&#039;}}^{2}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===T-NT Lognormal Reliability===&lt;br /&gt;
For the T-NT lognormal model, the reliability for a mission of time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, starting at age 0, is determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,U,V)=\int_{T}^{\infty }f(t,U,V)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,U,V)=\int_{{{T}^{^{\prime }}}}^{\infty }\frac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (C)+n\ln (U)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
For the T-NT lognormal model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is estimated by first solving the reliability equation with respect to time, as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;T_{R}^{\prime }=\ln (C)-n\ln (U)+\frac{B}{V}+z\cdot {{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z={{\Phi }^{-1}}\left[ F\left( T_{R}^{\prime },U,V \right) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;,U,V)}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;{T}&#039;=\ln (T)\,\!&amp;lt;/math&amp;gt; the reliable life, &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}={{e}^{T_{R}^{\prime }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
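The three steps above can be sketched numerically; the inverse standard normal CDF is taken from Python's `statistics.NormalDist` (an implementation choice, not part of the text), and the parameter values are illustrative:&lt;br /&gt;

```python
import math
from statistics import NormalDist

def tnt_lognormal_reliable_life(R_goal, U, V, B, C, n, sigma_Tp):
    z = NormalDist().inv_cdf(1.0 - R_goal)   # z = Phi^-1(F), F = 1 - R
    TR_prime = math.log(C) - n * math.log(U) + B / V + z * sigma_Tp
    return math.exp(TR_prime)                # t_R = e^(T_R')
```

For &amp;lt;math&amp;gt;R=0.5\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;z=0\,\!&amp;lt;/math&amp;gt; and the reliable life reduces to the model median.&lt;br /&gt;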
&lt;br /&gt;
===Lognormal Failure Rate===&lt;br /&gt;
The T-NT lognormal failure rate is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (T,U,V)=\frac{f(T,U,V)}{R(T,U,V)}=\frac{\tfrac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (C)+n\ln (U)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}}{\int_{{{T}&#039;}}^{\infty }\tfrac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (C)+n\ln (U)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
The complete T-NT lognormal log-likelihood function is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{1}{{{\sigma }_{{{T}&#039;}}}{{T}_{i}}}{{\phi }_{pdf}}\left( \frac{\ln \left( {{T}_{i}} \right)-\ln (C)+n\ln ({{U}_{i}})-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right] \text{ }+\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\ln \left[ 1-\Phi \left( \frac{\ln \left( T_{i}^{\prime } \right)-\ln (C)+n\ln ({{U}_{i}})-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right] +\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Ri}^{\prime \prime }=\frac{\ln T_{Ri}^{\prime \prime }-\ln C+n\ln U_{i}^{\prime \prime }-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Li}^{\prime \prime }=\frac{\ln T_{Li}^{\prime \prime }-\ln C+n\ln U_{i}^{\prime \prime }-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\phi \left( x \right)=\frac{1}{\sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( x \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; is the standard deviation of the natural logarithm of the times-to-failure (unknown, the first of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the first T-NT parameter (unknown, the second of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is the second T-NT parameter (unknown, the third of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the third T-NT parameter (unknown, the fourth of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level for the first stress type (i.e., temperature) of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level for the second stress type (i.e., non-thermal) of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{n}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial C}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===T-NT Lognormal Example===&lt;br /&gt;
{{:Temperature-Nonthermal_Relationship_Example}}&lt;br /&gt;
&lt;br /&gt;
= T-NT Confidence Bounds =&lt;br /&gt;
&lt;br /&gt;
==Approximate Confidence Bounds for the T-NT Exponential==&lt;br /&gt;
===Confidence Bounds on the Mean Life===&lt;br /&gt;
The mean life for the T-NT exponential model is given by setting &amp;lt;math&amp;gt;m=L(U,V)\,\!&amp;lt;/math&amp;gt;. The upper &amp;lt;math&amp;gt;({{m}_{U}})\,\!&amp;lt;/math&amp;gt; and lower &amp;lt;math&amp;gt;({{m}_{L}})\,\!&amp;lt;/math&amp;gt; bounds on the mean life (ML estimate of the mean life) are estimated by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{U}}=\widehat{m}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{L}}=\widehat{m}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is defined by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
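In practice &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is obtained by inverting this relation numerically. A minimal Python sketch using the stdlib `statistics.NormalDist` (an implementation choice, not part of the text), with &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt; taken from the confidence level as described next:&lt;br /&gt;

```python
from statistics import NormalDist

def k_alpha(delta, two_sided=True):
    """K_alpha solving alpha = 1 - Phi(K_alpha), where alpha = (1-delta)/2
    for two-sided bounds and alpha = 1-delta for one-sided bounds."""
    alpha = (1.0 - delta) / 2.0 if two_sided else 1.0 - delta
    return NormalDist().inv_cdf(1.0 - alpha)
```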
&lt;br /&gt;
If &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt; is the confidence level, then &amp;lt;math&amp;gt;\alpha =\tfrac{1-\delta }{2}\,\!&amp;lt;/math&amp;gt; for the two-sided bounds, and &amp;lt;math&amp;gt;\alpha =1-\delta \,\!&amp;lt;/math&amp;gt; for the one-sided bounds. The variance of &amp;lt;math&amp;gt;\widehat{m}\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{m})= &amp;amp; {{\left( \frac{\partial m}{\partial B} \right)}^{2}}Var(\widehat{B})+{{\left( \frac{\partial m}{\partial C} \right)}^{2}}Var(\widehat{C}) +{{\left( \frac{\partial m}{\partial n} \right)}^{2}}Var(\widehat{n}) +2\left( \frac{\partial m}{\partial B} \right)\left( \frac{\partial m}{\partial C} \right)Cov(\widehat{B},\widehat{C}) \\ &lt;br /&gt;
 &amp;amp;  +2\left( \frac{\partial m}{\partial B} \right)\left( \frac{\partial m}{\partial n} \right)Cov(\widehat{B},\widehat{n}) +2\left( \frac{\partial m}{\partial C} \right)\left( \frac{\partial m}{\partial n} \right)Cov(\widehat{C},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{m})= &amp;amp; \frac{1}{{{U}^{2\widehat{n}}}}{{e}^{2\tfrac{\widehat{B}}{V}}}[\frac{{{\widehat{C}}^{2}}}{{{V}^{2}}}Var(\widehat{B})+Var(\widehat{C}) +{{\widehat{C}}^{2}}{{\left( \ln (U) \right)}^{2}}Var(\widehat{n}) +\frac{2\widehat{C}}{V}Cov(\widehat{B},\widehat{C}) \\ &lt;br /&gt;
 &amp;amp;  -\frac{2{{\widehat{C}}^{2}}\ln (U)}{V}Cov(\widehat{B},\widehat{n}) -2\widehat{C}\ln (U)Cov(\widehat{C},\widehat{n})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{n})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{B}) &amp;amp; Cov(\widehat{B},\widehat{C}) &amp;amp; Cov(\widehat{B},\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{C},\widehat{B}) &amp;amp; Var(\widehat{C}) &amp;amp; Cov(\widehat{C},\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{n},\widehat{B}) &amp;amp; Cov(\widehat{n},\widehat{C}) &amp;amp; Var(\widehat{n})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ F \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F=\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{C}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{n}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right].\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Reliability===&lt;br /&gt;
The bounds on reliability at a given time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, are estimated by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{U}}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{L}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time for a given reliability (ML estimate of time) are estimated by first solving the reliability function with respect to time:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{T}=-\widehat{m}\cdot \ln (R)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding confidence bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; -{{m}_{U}}\cdot \ln (R) \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; -{{m}_{L}}\cdot \ln (R)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
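The two sets of bounds above can be sketched in Python. This is an illustration only, with hypothetical values for the ML estimate of the mean life and its confidence bounds at the use stress level:

```python
import math

# Hypothetical ML estimate and confidence bounds on the exponential
# mean life m at the use stress level.
m_hat, m_L, m_U = 1000.0, 800.0, 1250.0

def rel_bounds(T, m_L, m_U):
    """Bounds on reliability at time T: R = exp(-T/m)."""
    return math.exp(-T / m_L), math.exp(-T / m_U)   # (R_L, R_U)

def time_bounds(R, m_L, m_U):
    """Bounds on time for a given reliability: T = -m * ln(R)."""
    return -m_L * math.log(R), -m_U * math.log(R)   # (T_L, T_U)

R_L, R_U = rel_bounds(500.0, m_L, m_U)
T_L, T_U = time_bounds(0.9, m_L, m_U)
print(R_L, R_U, T_L, T_U)
```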
&lt;br /&gt;
==Approximate Confidence Bounds for the T-NT Weibull==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
Using the same approach as previously discussed, where &amp;lt;math&amp;gt;\widehat{\beta }\,\!&amp;lt;/math&amp;gt; and &lt;br /&gt;
&amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; are treated as positive parameters: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\beta }_{U}}= &amp;amp; \widehat{\beta }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{\beta }_{L}}= &amp;amp; \widehat{\beta }\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{B}_{U}}= &amp;amp; \widehat{B}+{{K}_{\alpha }}\sqrt{Var(\widehat{B})} \\ &lt;br /&gt;
 &amp;amp; {{B}_{L}}= &amp;amp; \widehat{B}-{{K}_{\alpha }}\sqrt{Var(\widehat{B})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{C}_{U}}= &amp;amp; \widehat{C}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}} \\ &lt;br /&gt;
 &amp;amp; {{C}_{L}}= &amp;amp; \widehat{C}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{n}_{U}}= &amp;amp; \widehat{n}+{{K}_{\alpha }}\sqrt{Var(\widehat{n})} \\ &lt;br /&gt;
 &amp;amp; {{n}_{L}}= &amp;amp; \widehat{n}-{{K}_{\alpha }}\sqrt{Var(\widehat{n})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
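The distinction between the two forms of bounds above can be sketched in Python. This is an illustration with hypothetical estimates and variances: positive parameters (here &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;) get multiplicative bounds because their logarithms are treated as normal, while unbounded parameters (here &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;) get additive bounds:

```python
import math

# K_alpha is the standard normal quantile; 1.96 gives ~95% two-sided bounds.
K = 1.96

def bounds_positive(est, var):
    """Bounds for a positive parameter (e.g., beta or C): the logarithm
    of the estimate is treated as normally distributed."""
    factor = math.exp(K * math.sqrt(var) / est)
    return est / factor, est * factor          # (lower, upper)

def bounds_unbounded(est, var):
    """Bounds for a parameter that can take any sign (e.g., B or n)."""
    half = K * math.sqrt(var)
    return est - half, est + half              # (lower, upper)

beta_L, beta_U = bounds_positive(1.8, 0.04)    # hypothetical beta-hat, Var
n_L, n_U = bounds_unbounded(0.9, 0.01)         # hypothetical n-hat, Var
print(beta_L, beta_U, n_L, n_U)
```

Note that the multiplicative bounds are symmetric on the log scale, so the lower bound can never go negative.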
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;C,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are estimated from the Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{\beta },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{n})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{\beta }) &amp;amp; Cov(\widehat{\beta },\widehat{B}) &amp;amp; Cov(\widehat{\beta },\widehat{C}) &amp;amp; Cov(\widehat{\beta },\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{B},\widehat{\beta }) &amp;amp; Var(\widehat{B}) &amp;amp; Cov(\widehat{B},\widehat{C}) &amp;amp; Cov(\widehat{B},\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{C},\widehat{\beta }) &amp;amp; Cov(\widehat{C},\widehat{B}) &amp;amp; Var(\widehat{C}) &amp;amp; Cov(\widehat{C},\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{n},\widehat{\beta }) &amp;amp; Cov(\widehat{n},\widehat{B}) &amp;amp; Cov(\widehat{n},\widehat{C}) &amp;amp; Var(\widehat{n})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ F \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F=\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\beta }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{C}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{n}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Reliability===&lt;br /&gt;
The reliability function (ML estimate) for the T-NT Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,U,V)={{e}^{-{{\left( \tfrac{{{U}^{\widehat{n}}}{{e}^{-\tfrac{\widehat{B}}{V}}}}{\widehat{C}}T \right)}^{\widehat{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,U,V)={{e}^{-{{e}^{\ln \left[ {{\left( \tfrac{{{U}^{\widehat{n}}}{{e}^{-\tfrac{\widehat{B}}{V}}}}{\widehat{C}}T \right)}^{\widehat{\beta }}} \right]}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\ln \left[ {{\left( \frac{{{U}^{\widehat{n}}}{{e}^{-\tfrac{\widehat{B}}{V}}}}{\widehat{C}}T \right)}^{\widehat{\beta }}} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\widehat{\beta }\left[ \ln (T)-\frac{\widehat{B}}{V}-\ln (\widehat{C})+\widehat{n}\ln (U) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,U,V)={{e}^{-{{e}^{\widehat{u}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to find the upper and lower bounds on &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial B} \right)}^{2}}Var(\widehat{B}) +{{\left( \frac{\partial \widehat{u}}{\partial C} \right)}^{2}}Var(\widehat{C})+{{\left( \frac{\partial \widehat{u}}{\partial n} \right)}^{2}}Var(\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{\beta },\widehat{B}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{\beta },\widehat{C}) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{\beta },\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial B} \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{B},\widehat{C}) +2\left( \frac{\partial \widehat{u}}{\partial B} \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{B},\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial C} \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{C},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; {{\left( \frac{\widehat{u}}{\widehat{\beta }} \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\widehat{\beta }}{V} \right)}^{2}}Var(\widehat{B}) +{{\left( \frac{\widehat{\beta }}{\widehat{C}} \right)}^{2}}Var(\widehat{C})+{{\left( \widehat{\beta }\ln (U) \right)}^{2}}Var(\widehat{n}) -\frac{2\widehat{u}}{V}Cov(\widehat{\beta },\widehat{B})-\frac{2\widehat{u}}{\widehat{C}}Cov(\widehat{\beta },\widehat{C}) \\ &lt;br /&gt;
 &amp;amp; +2\widehat{u}\ln (U)Cov(\widehat{\beta },\widehat{n}) +\frac{2{{\widehat{\beta }}^{2}}}{\widehat{C}V}Cov(\widehat{B},\widehat{C})-\frac{2{{\widehat{\beta }}^{2}}\ln (U)}{V}Cov(\widehat{B},\widehat{n}) -\frac{2{{\widehat{\beta }}^{2}}\ln (U)}{\widehat{C}}Cov(\widehat{C},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{L}} \right)}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{U}} \right)}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
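The chain of steps above (compute &amp;lt;math&amp;gt;\widehat{u}\,\!&amp;lt;/math&amp;gt;, apply the delta method for its variance, then transform the &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; bounds into reliability bounds) can be sketched in Python. All parameter values and the variance-covariance matrix below are hypothetical; a diagonal covariance matrix is used for brevity, but the quadratic form handles the full matrix:

```python
import math
import numpy as np

# Hypothetical ML estimates for the T-NT Weibull model and a hypothetical
# variance-covariance matrix for (beta, B, C, n).
beta, B, C, n = 1.5, 1500.0, 80.0, 1.2
cov = np.diag([0.02, 4.0e4, 25.0, 0.01])   # illustrative only

T, U, V = 1000.0, 2.0, 350.0               # time and use stress levels
K = 1.96                                    # ~95% two-sided

# u = beta * [ln(T) - B/V - ln(C) + n*ln(U)]
u = beta * (math.log(T) - B / V - math.log(C) + n * math.log(U))

# Delta-method gradient of u with respect to (beta, B, C, n)
grad = np.array([u / beta, -beta / V, -beta / C, beta * math.log(U)])
var_u = grad @ cov @ grad

u_L = u - K * math.sqrt(var_u)
u_U = u + K * math.sqrt(var_u)

# Note the bound reversal: the upper reliability uses the lower u bound.
R_U = math.exp(-math.exp(u_L))
R_L = math.exp(-math.exp(u_U))
print(R_L, R_U)
```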
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time (ML estimate of time) for a given reliability are estimated by first solving the reliability function with respect to time as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \ln (R)=\ &amp;amp; -{{\left( \frac{{{U}^{\widehat{n}}}{{e}^{-\tfrac{\widehat{B}}{V}}}}{\widehat{C}}\widehat{T} \right)}^{\widehat{\beta }}} \\ &lt;br /&gt;
 \ln (-\ln (R))=\ &amp;amp; \widehat{\beta }\left( \ln (\widehat{T})-\frac{\widehat{B}}{V}-\ln (\widehat{C})+\widehat{n}\ln (U) \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\frac{1}{\widehat{\beta }}\ln (-\ln (R))+\frac{\widehat{B}}{V}+\ln (\widehat{C})-\widehat{n}\ln (U)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\widehat{u}=\ln \widehat{T}.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial B} \right)}^{2}}Var(\widehat{B}) +{{\left( \frac{\partial \widehat{u}}{\partial C} \right)}^{2}}Var(\widehat{C})+{{\left( \frac{\partial \widehat{u}}{\partial n} \right)}^{2}}Var(\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{\beta },\widehat{B}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{\beta },\widehat{C}) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{\beta },\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial B} \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{B},\widehat{C}) +2\left( \frac{\partial \widehat{u}}{\partial B} \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{B},\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial C} \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{C},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; \frac{1}{{{\widehat{\beta }}^{4}}}{{\left[ \ln (-\ln (R)) \right]}^{2}}Var(\widehat{\beta }) +\frac{1}{{{V}^{2}}}Var(\widehat{B})+\frac{1}{{{\widehat{C}}^{2}}}Var(\widehat{C})+{{\left[ \ln (U) \right]}^{2}}Var(\widehat{n}) -\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}V}Cov(\widehat{\beta },\widehat{B}) -\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}\widehat{C}}Cov(\widehat{\beta },\widehat{C}) \\ &lt;br /&gt;
 &amp;amp; +\frac{2\ln (-\ln (R))\ln (U)}{{{\widehat{\beta }}^{2}}}Cov(\widehat{\beta },\widehat{n}) +\frac{2}{\widehat{C}V}Cov(\widehat{B},\widehat{C}) -\frac{2\ln (U)}{V}Cov(\widehat{B},\widehat{n})-\frac{2\ln (U)}{\widehat{C}}Cov(\widehat{C},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on time are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{{{u}_{U}}}} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{{{u}_{L}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
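The time-bound computation above can be sketched the same way. This Python illustration uses hypothetical estimates and a hypothetical (diagonal, for brevity) variance-covariance matrix for &amp;lt;math&amp;gt;(\beta, B, C, n)\,\!&amp;lt;/math&amp;gt;:

```python
import math
import numpy as np

# Hypothetical ML estimates and variance-covariance matrix for (beta, B, C, n).
beta, B, C, n = 1.5, 1500.0, 80.0, 1.2
cov = np.diag([0.02, 4.0e4, 25.0, 0.01])   # illustrative only

R, U, V = 0.90, 2.0, 350.0                 # target reliability, use stresses
K = 1.96

lnlnR = math.log(-math.log(R))
u = lnlnR / beta + B / V + math.log(C) - n * math.log(U)   # u = ln(T)

# Delta-method gradient of u with respect to (beta, B, C, n)
grad = np.array([-lnlnR / beta**2, 1.0 / V, 1.0 / C, -math.log(U)])
var_u = grad @ cov @ grad

T_L = math.exp(u - K * math.sqrt(var_u))
T_U = math.exp(u + K * math.sqrt(var_u))
print(T_L, math.exp(u), T_U)
```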
&lt;br /&gt;
==Approximate Confidence Bounds for the T-NT Lognormal==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
Since the standard deviation, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; are positive parameters, &amp;lt;math&amp;gt;\ln ({{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\ln (\widehat{C})\,\!&amp;lt;/math&amp;gt; are treated as normally distributed and the bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{\sigma }_{U}}=\ &amp;amp; {{\widehat{\sigma }}_{{{T}&#039;}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}} &amp;amp;\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 {{\sigma }_{L}}=\ &amp;amp; \frac{{{\widehat{\sigma }}_{{{T}&#039;}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}} &amp;amp;\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{C}_{U}}= &amp;amp; \widehat{C}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}} &amp;amp;\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 {{C}_{L}}= &amp;amp; \frac{\widehat{C}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}}} &amp;amp;\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The lower and upper bounds on &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{B}_{U}}= &amp;amp; \widehat{B}+{{K}_{\alpha }}\sqrt{Var(\widehat{B})}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; {{B}_{L}}= &amp;amp; \widehat{B}-{{K}_{\alpha }}\sqrt{Var(\widehat{B})}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{n}_{U}}= &amp;amp; \widehat{n}+{{K}_{\alpha }}\sqrt{Var(\widehat{n})}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; {{n}_{L}}= &amp;amp; \widehat{n}-{{K}_{\alpha }}\sqrt{Var(\widehat{n})}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;C,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{n}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left( \begin{matrix}&lt;br /&gt;
   Var\left( {{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{B} \right) &amp;amp; Var\left( \widehat{B} \right) &amp;amp; Cov\left( \widehat{B},\widehat{C} \right) &amp;amp; Cov\left( \widehat{B},\widehat{n} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{C} \right) &amp;amp; Cov\left( \widehat{C},\widehat{B} \right) &amp;amp; Var\left( \widehat{C} \right) &amp;amp; Cov\left( \widehat{C},\widehat{n} \right)  \\&lt;br /&gt;
   Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{n},\widehat{B} \right) &amp;amp; Cov\left( \widehat{n},\widehat{C} \right) &amp;amp; Var\left( \widehat{n} \right)  \\&lt;br /&gt;
\end{matrix} \right)={{\left[ F \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F=\left( \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma _{{{T}&#039;}}^{2}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{C}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial C} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{n}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bounds on Reliability===&lt;br /&gt;
The reliability of the lognormal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({T}&#039;,U,V;B,C,n,{{\sigma }_{{{T}&#039;}}})=\int_{{{T}&#039;}}^{\infty }\frac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (\widehat{C})+\widehat{n}\ln ({{U}_{i}})-\tfrac{\widehat{B}}{{{V}_{i}}}}{{{\widehat{\sigma }}_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\widehat{z}(t,U,V;B,C,n,{{\sigma }_{{{T}&#039;}}})=\tfrac{t-\ln (\widehat{C})+\widehat{n}\ln (U)-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}},\,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;\tfrac{d\widehat{z}}{dt}=\tfrac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}}.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
For &amp;lt;math&amp;gt;t={T}&#039;\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{z}=\tfrac{{T}&#039;-\ln (\widehat{C})+\widehat{n}\ln (U)-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;, and for &amp;lt;math&amp;gt;t=\infty ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{z}=\infty .\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The above equation then becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(\widehat{z})=\int_{\widehat{z}({T}&#039;,U,V)}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The bounds on &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{z}_{U}}= &amp;amp; \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ &lt;br /&gt;
 &amp;amp; {{z}_{L}}= &amp;amp; \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{z})= &amp;amp; \left( \frac{\partial \widehat{z}}{\partial B} \right)_{\widehat{B}}^{2}Var(\widehat{B})+\left( \frac{\partial \widehat{z}}{\partial C} \right)_{\widehat{C}}^{2}Var(\widehat{C}) +\left( \frac{\partial \widehat{z}}{\partial n} \right)_{\widehat{n}}^{2}Var(\widehat{n})+\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)_{{{\widehat{\sigma }}_{{{T}&#039;}}}}^{2}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2{{\left( \frac{\partial \widehat{z}}{\partial B} \right)}_{\widehat{B}}}{{\left( \frac{\partial \widehat{z}}{\partial C} \right)}_{\widehat{C}}}Cov\left( \widehat{B},\widehat{C} \right) \\ &lt;br /&gt;
 &amp;amp; +2{{\left( \frac{\partial \widehat{z}}{\partial B} \right)}_{\widehat{B}}}{{\left( \frac{\partial \widehat{z}}{\partial n} \right)}_{\widehat{n}}}Cov\left( \widehat{B},\widehat{n} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial C} \right)}_{\widehat{C}}}{{\left( \frac{\partial \widehat{z}}{\partial n} \right)}_{\widehat{n}}}Cov\left( \widehat{C},\widehat{n} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial B} \right)}_{\widehat{B}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) \\&lt;br /&gt;
&amp;amp; +2{{\left( \frac{\partial \widehat{z}}{\partial C} \right)}_{\widehat{C}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial n} \right)}_{\widehat{n}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{z})= &amp;amp; \frac{1}{\widehat{\sigma }_{{{T}&#039;}}^{2}}[\frac{1}{{{V}^{2}}}Var(\widehat{B})+\frac{1}{{{C}^{2}}}Var(\widehat{C})+\ln {{(U)}^{2}}Var(\widehat{n})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2}{C\cdot V}Cov\left( \widehat{B},\widehat{C} \right)-\frac{2\ln (U)}{V}Cov\left( \widehat{B},\widehat{n} \right) \\ &lt;br /&gt;
 &amp;amp; -\frac{2\ln (U)}{C}Cov\left( \widehat{C},\widehat{n} \right)+\frac{2\widehat{z}}{V}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +\frac{2\widehat{z}}{C}Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)-2\widehat{z}\ln (U)Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
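The reliability-bound step above reduces to evaluating the standard normal survival function at the reversed &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; bounds. A Python sketch (hypothetical &amp;lt;math&amp;gt;\widehat{z}\,\!&amp;lt;/math&amp;gt; and variance; the survival function is computed with the complementary error function):

```python
import math

def std_normal_sf(z):
    """Standard normal survival function: the integral of the standard
    normal pdf from z to infinity."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical z estimate and its delta-method variance.
z_hat, var_z = 0.5, 0.09
K = 1.96

z_L = z_hat - K * math.sqrt(var_z)
z_U = z_hat + K * math.sqrt(var_z)

# Bound reversal: the upper reliability bound uses the lower z bound.
R_U = std_normal_sf(z_L)
R_L = std_normal_sf(z_U)
print(R_L, R_U)
```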
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds around time for a given lognormal percentile (unreliability) are estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{T}&#039;(U,V;\widehat{B},\widehat{C},\widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}})=\ln (\widehat{C})-\widehat{n}\ln (U)+\frac{\widehat{B}}{V}+z\cdot {{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {T}&#039;(U,V;\widehat{B},\widehat{C},\widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}})=\ &amp;amp; \ln (T) \\ &lt;br /&gt;
 z=\ &amp;amp; {{\Phi }^{-1}}\left[ F({T}&#039;) \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;,U,V)}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to calculate the variance of &amp;lt;math&amp;gt;{T}&#039;(U,V;\widehat{B},\widehat{C},\widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var({T}&#039;)= &amp;amp; {{\left( \frac{\partial {T}&#039;}{\partial B} \right)}^{2}}Var(\widehat{B})+{{\left( \frac{\partial {T}&#039;}{\partial C} \right)}^{2}}Var(\widehat{C}) +{{\left( \frac{\partial {T}&#039;}{\partial n} \right)}^{2}}Var(\widehat{n})+{{\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2\left( \frac{\partial {T}&#039;}{\partial B} \right)\left( \frac{\partial {T}&#039;}{\partial C} \right)Cov\left( \widehat{B},\widehat{C} \right) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial {T}&#039;}{\partial B} \right)\left( \frac{\partial {T}&#039;}{\partial n} \right)Cov\left( \widehat{B},\widehat{n} \right) +2\left( \frac{\partial {T}&#039;}{\partial C} \right)\left( \frac{\partial {T}&#039;}{\partial n} \right)Cov\left( \widehat{C},\widehat{n} \right) +2\left( \frac{\partial {T}&#039;}{\partial B} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial {T}&#039;}{\partial C} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2\left( \frac{\partial {T}&#039;}{\partial n} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var({T}&#039;)= &amp;amp; \frac{1}{{{V}^{2}}}Var(\widehat{B})+\frac{1}{{{\widehat{C}}^{2}}}Var(\widehat{C})+{{\left[ \ln (U) \right]}^{2}}Var(\widehat{n})+{{z}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2}{\widehat{C}V}Cov\left( \widehat{B},\widehat{C} \right)-\frac{2\ln (U)}{V}Cov\left( \widehat{B},\widehat{n} \right) \\ &lt;br /&gt;
 &amp;amp; -\frac{2\ln (U)}{\widehat{C}}Cov\left( \widehat{C},\widehat{n} \right)+\frac{2z}{V}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +\frac{2z}{\widehat{C}}Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)-2z\ln (U)Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; T_{U}^{\prime }= &amp;amp; \ln {{T}_{U}}={T}&#039;+{{K}_{\alpha }}\sqrt{Var({T}&#039;)} \\ &lt;br /&gt;
 &amp;amp; T_{L}^{\prime }= &amp;amp; \ln {{T}_{L}}={T}&#039;-{{K}_{\alpha }}\sqrt{Var({T}&#039;)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for &amp;lt;math&amp;gt;{{T}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{T}_{L}}\,\!&amp;lt;/math&amp;gt; yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{T_{U}^{\prime }}}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{T_{L}^{\prime }}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Temperature-Humidity_Relationship&amp;diff=64932</id>
		<title>Temperature-Humidity Relationship</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Temperature-Humidity_Relationship&amp;diff=64932"/>
		<updated>2017-02-08T21:12:29Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Conditional Reliability Function */ changed R(T,t,V,U) to R((t|T),V,U)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK|7}}&lt;br /&gt;
The Temperature-Humidity (T-H) relationship, a variation of the Eyring relationship, has been proposed for predicting the life at use conditions when temperature and humidity are the accelerated stresses in a test. This combination model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V,U)=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; is one of the three parameters to be determined.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; is the second of the three parameters to be determined (also known as the activation energy for humidity).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is a constant and the third of the three parameters to be determined.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; is the relative humidity  (decimal or percentage).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; is temperature (&#039;&#039;&#039;in absolute units&#039;&#039;&#039;). &lt;br /&gt;
&lt;br /&gt;
The T-H relationship can be linearized and plotted on a Life vs. Stress plot. The relationship is linearized by taking the natural logarithm of both sides in the T-H relationship, or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;ln(L(V,U))=ln(A)+\frac{\phi }{V}+\frac{b}{U}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
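The T-H relationship and its linearized form can be evaluated directly. The following Python sketch uses hypothetical parameter values purely for illustration, and also shows the acceleration factor between two stress combinations:

```python
import math

def th_life(V, U, A, phi, b):
    """Temperature-Humidity life relationship: L(V, U) = A * exp(phi/V + b/U).
    V is temperature in absolute units; U is relative humidity (decimal)."""
    return A * math.exp(phi / V + b / U)

# Hypothetical parameter values for illustration only.
A, phi, b = 5.0e-4, 5000.0, 2.0

life_use = th_life(328.0, 0.5, A, phi, b)    # e.g., 328 K, 50% RH
life_acc = th_life(378.0, 0.8, A, phi, b)    # accelerated: 378 K, 80% RH

# Acceleration factor between use and accelerated conditions;
# the constant A cancels out.
AF = life_use / life_acc
print(life_use, life_acc, AF)
```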
Since life is now a function of two stresses, a Life vs. Stress plot can only be obtained by keeping one of the two stresses constant and varying the other one. Doing so will yield a straight line where the term for the stress which is kept at a fixed value becomes another constant (in addition to the &amp;lt;math&amp;gt;\ln (A)\,\!&amp;lt;/math&amp;gt; constant). In the next two figures, data obtained from a temperature and humidity test were analyzed and plotted on Arrhenius paper. In the first figure, life is plotted versus temperature with relative humidity held at a fixed value. In the second figure, life is plotted versus relative humidity with temperature held at a fixed value.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA9.1.png|center|400px|Life vs. Temperature plot at a fixed relative humidity.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA9.2.png|center|400px|Life vs. Relative Humidity plot at a fixed temperature.]]&lt;br /&gt;
&lt;br /&gt;
Note that the Life vs. Stress plots are plotted on a log-reciprocal scale. Also note that the points shown in these plots represent the life characteristics at the test stress levels (the data set was fitted to a Weibull distribution, thus the points represent the scale parameter, &amp;lt;math&amp;gt;\eta )\,\!&amp;lt;/math&amp;gt;. For example, the points shown in the first figure represent &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; at each of the test temperature levels (two temperature levels were considered in this test).&lt;br /&gt;
&lt;br /&gt;
===A look at the Parameters Phi and b===&lt;br /&gt;
Depending on which stress type is kept constant, it can be seen from the linearized T-H relationship that either the parameter &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; or the parameter &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; is the slope of the resulting line. If, for example, the humidity is kept constant, then &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; is the slope of the life line in a Life vs. Temperature plot. The steeper the slope, the greater the dependency of product life on temperature. In other words, &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; is a measure of the effect that temperature has on the life, and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; is a measure of the effect that relative humidity has on the life. The larger the value of &amp;lt;math&amp;gt;\phi ,\,\!&amp;lt;/math&amp;gt; the higher the dependency of the life on the temperature. Similarly, the larger the value of &amp;lt;math&amp;gt;b,\,\!&amp;lt;/math&amp;gt; the higher the dependency of the life on the humidity.&lt;br /&gt;
&lt;br /&gt;
===T-H Data===&lt;br /&gt;
When using the T-H relationship, the effect of both temperature and humidity on life is sought. For this reason, the test must be performed at combinations of the stress levels of the two stress types. For example, assume that an accelerated test is to be performed at two temperature and two humidity levels. The two temperature levels were chosen to be 300K and 343K. The two humidity levels were chosen to be 0.6 and 0.8. It would be wrong to perform the test at only (300K, 0.6) and (343K, 0.8). Doing so would not provide information about the temperature-humidity effects on life, because both stresses are increased at the same time and it is therefore unknown which stress is causing the acceleration on life. A possible combination that would provide information about temperature-humidity effects on life would be (300K, 0.6), (300K, 0.8) and (343K, 0.8). By testing at (300K, 0.6) and (300K, 0.8), the effect of humidity on life can be determined (since temperature remained constant). Similarly, the effect of temperature on life can be determined by testing at (300K, 0.8) and (343K, 0.8), since humidity remained constant.&lt;br /&gt;
&lt;br /&gt;
===Acceleration Factor===&lt;br /&gt;
The acceleration factor for the T-H relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{A}_{F}}=\frac{{{L}_{USE}}}{{{L}_{Accelerated}}}=\frac{A{{e}^{\tfrac{\phi }{{{V}_{u}}}+\tfrac{b}{{{U}_{u}}}}}}{A{{e}^{\tfrac{\phi }{{{V}_{A}}}+\tfrac{b}{{{U}_{A}}}}}}={{e}^{\phi \left( \tfrac{1}{{{V}_{u}}}-\tfrac{1}{{{V}_{A}}} \right)+b\left( \tfrac{1}{{{U}_{u}}}-\tfrac{1}{{{U}_{A}}} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{L}_{USE}}\,\!&amp;lt;/math&amp;gt; is the life at use stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{L}_{Accelerated}}\,\!&amp;lt;/math&amp;gt; is the life at the accelerated stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{u}}\,\!&amp;lt;/math&amp;gt; is the use temperature level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{A}}\,\!&amp;lt;/math&amp;gt; is the accelerated temperature level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{A}}\,\!&amp;lt;/math&amp;gt; is the accelerated humidity level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{u}}\,\!&amp;lt;/math&amp;gt; is the use humidity level.&lt;br /&gt;
&lt;br /&gt;
The acceleration factor is plotted versus stress in the same manner used to create the Life vs. Stress plots. That is, one stress type is kept constant and the other is varied, as shown in the next two figures.&lt;br /&gt;
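Note that the parameter &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; cancels out of the acceleration factor. A minimal sketch, using the example stress levels from the previous section and hypothetical values for &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;:

```python
import math

# Hypothetical T-H parameters (phi and b are illustrative values;
# the parameter A cancels out of the acceleration factor entirely).
phi, b = 5000.0, 3.0

def acceleration_factor(V_u, U_u, V_a, U_a):
    """A_F = exp(phi*(1/V_u - 1/V_A) + b*(1/U_u - 1/U_A))."""
    return math.exp(phi * (1 / V_u - 1 / V_a) + b * (1 / U_u - 1 / U_a))

# Use level (300K, 0.6) vs. accelerated level (343K, 0.8).
AF = acceleration_factor(300.0, 0.6, 343.0, 0.8)
```

Since the accelerated stresses exceed the use stresses, the resulting factor is greater than 1.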
&lt;br /&gt;
[[Image:ALTA9.3.png|center|400px|Acceleration Factor vs. Temperature at a fixed relative humidity.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA9.4.png|center|400px|Acceleration Factor vs. Humidity at a fixed temperature.]]&lt;br /&gt;
&lt;br /&gt;
=T-H Exponential=&lt;br /&gt;
By setting &amp;lt;math&amp;gt;m=L(V,U)\,\!&amp;lt;/math&amp;gt; in the exponential &#039;&#039;pdf&#039;&#039;, we can obtain the T-H exponential &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V,U)=\frac{1}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}\cdot {{e}^{-\tfrac{t}{A}\cdot {{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==T-H Exponential Statistical Properties Summary==&lt;br /&gt;
===Mean or MTTF===&lt;br /&gt;
&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T},\,\!&amp;lt;/math&amp;gt; or Mean Time To Failure (MTTF) for the T-H exponential model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=\int_{0}^{\infty }t\cdot f(t,V,U)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting the T-H exponential &#039;&#039;pdf&#039;&#039; equation yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=\int_{0}^{\infty }t\cdot \frac{1}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}{{e}^{-\tfrac{t}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}}}dt=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Median===&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; for the T-H exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=0.693\cdot A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode===&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; for the T-H exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Standard Deviation===&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, for the T-H exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
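The summary statistics above all scale with the same life term. A minimal sketch with hypothetical parameter and stress values (note that the constant 0.693 in the median formula is &amp;lt;math&amp;gt;\ln 2\,\!&amp;lt;/math&amp;gt;):

```python
import math

# Hypothetical T-H exponential parameters (illustration only).
A, phi, b = 0.01, 5000.0, 3.0
V, U = 300.0, 0.6

mttf = A * math.exp(phi / V + b / U)   # mean (MTTF)
median = math.log(2) * mttf            # the 0.693 factor is ln(2)
mode = 0.0                             # the exponential mode is always zero
sigma = mttf                           # the std. deviation equals the mean
```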
&lt;br /&gt;
===T-H Exponential Reliability Function===&lt;br /&gt;
The T-H exponential reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V,U)={{e}^{-\tfrac{T}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function is the complement of the T-H exponential cumulative distribution function or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V,U)=1-Q(T,V,U)=1-\int_{0}^{T}f(T)dT\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V,U)=1-\int_{0}^{T}\frac{1}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}{{e}^{-\tfrac{T}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}}}dT={{e}^{-\tfrac{T}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conditional Reliability===&lt;br /&gt;
The conditional reliability function for the T-H exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V,U)=\frac{R(T+t,V,U)}{R(T,V,U)}=\frac{{{e}^{-\lambda (T+t)}}}{{{e}^{-\lambda T}}}={{e}^{-\tfrac{t}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
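As the last expression shows, the accumulated age &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; drops out: the exponential model is memoryless. A quick numerical check with hypothetical parameter values:

```python
import math

# Hypothetical T-H exponential parameters (illustration only).
A, phi, b = 0.01, 5000.0, 3.0
V, U = 300.0, 0.6

lam = math.exp(-(phi / V + b / U)) / A   # constant failure rate lambda

def R(t):
    return math.exp(-lam * t)

def R_cond(t, T):
    """Reliability for an added mission of length t after surviving to age T."""
    return R(T + t) / R(T)

# Same conditional reliability whether the unit is new or already aged.
r_new = R_cond(100.0, 0.0)
r_aged = R_cond(100.0, 5000.0)
```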
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
For the T-H exponential model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({{t}_{R}},V,U)={{e}^{-\tfrac{{{t}_{R}}}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln [R({{t}_{R}},V,U)]=-\frac{{{t}_{R}}}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=-A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\ln [R({{t}_{R}},V,U)]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
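The reliable life computation can be sketched directly from this expression; the parameter values below are hypothetical:

```python
import math

# Hypothetical T-H exponential parameters (illustration only).
A, phi, b = 0.01, 5000.0, 3.0
V, U = 300.0, 0.6
R_goal = 0.90

mttf = A * math.exp(phi / V + b / U)
t_R = -mttf * math.log(R_goal)       # t_R = -A e^(phi/V + b/U) ln(R)

# Round trip: the reliability function evaluated at t_R returns R_goal.
R_check = math.exp(-t_R / mttf)
```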
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
&lt;br /&gt;
Substituting the T-H model into the exponential log-likelihood equation yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln (L)=\Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{1}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{{{U}_{i}}} \right)}}\cdot {{e}^{-\tfrac{{{T}_{i}}}{A}\cdot {{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{{{U}_{i}}} \right)}}}} \right]-\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{T_{i}^{\prime }}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{{{U}_{i}}} \right)}}+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-\tfrac{T_{Li}^{\prime \prime }}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{U_{i}^{\prime \prime }} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-\tfrac{T_{Ri}^{\prime \prime }}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{U_{i}^{\prime \prime }} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is the T-H parameter (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; is the second T-H parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; is the third T-H parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the temperature level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{i}}\,\!&amp;lt;/math&amp;gt; is the relative humidity level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime}\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;A,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial A}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \phi }=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial b}=0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
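The maximization itself is numerical. As a rough illustration of the quantity being optimized, the sketch below evaluates only the exact-failure term of the log-likelihood above for a tiny made-up data set (the suspension and interval terms are omitted, and all parameter and data values are hypothetical):

```python
import math

def neg_log_likelihood(params, failures):
    """Exact-failure portion of the T-H exponential log-likelihood.

    params   = (A, phi, b)
    failures = [(T_i, V_i, U_i), ...] with one entry per failure.
    The suspension and interval terms are omitted in this sketch.
    """
    A, phi, b = params
    ll = 0.0
    for T, V, U in failures:
        rate = math.exp(-(phi / V + b / U)) / A   # failure rate at (V, U)
        ll += math.log(rate) - rate * T           # ln f(T, V, U)
    return -ll

# Tiny made-up data set: (time, temperature in K, relative humidity).
data = [(200.0, 343.0, 0.8), (350.0, 343.0, 0.8), (900.0, 300.0, 0.8)]
nll = neg_log_likelihood((0.01, 5000.0, 3.0), data)
```

An optimizer would then search over &amp;lt;math&amp;gt;(A,\phi ,b)\,\!&amp;lt;/math&amp;gt; for the minimum of this function, which is where the three partial derivatives vanish.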
&lt;br /&gt;
=T-H Weibull=&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: T-H_Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
By setting &amp;lt;math&amp;gt;\eta =L(V,U)\,\!&amp;lt;/math&amp;gt; in the Weibull &#039;&#039;pdf&#039;&#039;, the T-H Weibull model&#039;s &#039;&#039;pdf&#039;&#039; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\,V,\,U)=\frac{\beta }{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}{{\left( \frac{t}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{t}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==T-H Weibull Statistical Properties Summary==&lt;br /&gt;
===Mean or MTTF===&lt;br /&gt;
&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T}\,\!&amp;lt;/math&amp;gt; (also called &amp;lt;math&amp;gt;MTTF\,\!&amp;lt;/math&amp;gt; ), of the T-H Weibull model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\cdot \Gamma \left( \frac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Gamma \left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt; is the gamma function evaluated at the value of &amp;lt;math&amp;gt;\left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Median===&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; of the T-H Weibull model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}{{\left( \ln 2 \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode===&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; of the T-H Weibull model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}{{\left( 1-\frac{1}{\beta } \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Standard Deviation===&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}},\,\!&amp;lt;/math&amp;gt; of the T-H Weibull model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\cdot \sqrt{\Gamma \left( \frac{2}{\beta }+1 \right)-{{\left( \Gamma \left( \frac{1}{\beta }+1 \right) \right)}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
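These summary statistics are straightforward to evaluate, since the gamma function is available in most environments. A minimal sketch with hypothetical parameter values:

```python
import math

# Hypothetical T-H Weibull parameters (illustration only).
A, phi, b, beta = 0.01, 5000.0, 3.0, 1.5
V, U = 300.0, 0.6

eta = A * math.exp(phi / V + b / U)   # Weibull scale at this stress level

mean = eta * math.gamma(1.0 / beta + 1.0)
median = eta * math.log(2) ** (1.0 / beta)
mode = eta * (1.0 - 1.0 / beta) ** (1.0 / beta)
sigma = eta * math.sqrt(math.gamma(2.0 / beta + 1.0)
                        - math.gamma(1.0 / beta + 1.0) ** 2)
```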
&lt;br /&gt;
===T-H Weibull Reliability Function===&lt;br /&gt;
The T-H Weibull reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V,U)={{e}^{-{{\left( \tfrac{T}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conditional Reliability Function===&lt;br /&gt;
The T-H Weibull conditional reliability function at a specified stress level is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V,U)=\frac{R(T+t,V,U)}{R(T,V,U)}=\frac{{{e}^{-{{\left( \tfrac{T+t}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta }}}}}{{{e}^{-{{\left( \tfrac{T}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V,U)={{e}^{-\left[ {{\left( \tfrac{T+t}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta }}-{{\left( \tfrac{T}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta }} \right]}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
For the T-H Weibull model, the reliable life, &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt;, of a unit for a specified reliability and starting the mission at age zero is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}{{\left\{ -\ln \left[ R\left( {{t}_{R}},V,U \right) \right] \right\}}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===T-H Weibull Failure Rate Function===&lt;br /&gt;
The T-H Weibull failure rate function, &amp;lt;math&amp;gt;\lambda (T,V,U)\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda \left( T,V,U \right)=\frac{f\left( T,V,U \right)}{R\left( T,V,U \right)}=\frac{\beta }{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}{{\left( \frac{T}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
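The reliability and failure rate functions at a fixed stress level can be sketched as below, with hypothetical parameter values; for &amp;lt;math&amp;gt;\beta &amp;gt;1\,\!&amp;lt;/math&amp;gt; the failure rate increases with time (wear-out behavior):

```python
import math

# Hypothetical T-H Weibull parameters (illustration only).
A, phi, b, beta = 0.01, 5000.0, 3.0, 1.5
V, U = 300.0, 0.6
eta = A * math.exp(phi / V + b / U)   # Weibull scale at this stress level

def R(t):
    """T-H Weibull reliability at the fixed stress level (V, U)."""
    return math.exp(-((t / eta) ** beta))

def failure_rate(t):
    """lambda(t) = f(t)/R(t) = (beta/eta) * (t/eta)^(beta-1)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)
```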
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
Substituting the T-H model into the Weibull log-likelihood function yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln (L)=\Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{\beta }{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{{{U}_{i}}} \right)}}{{\left( \frac{{{T}_{i}}}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{{{U}_{i}}} \right)}} \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{{{T}_{i}}}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{{{U}_{i}}} \right)}} \right)}^{\beta }}}} \right]-\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( \frac{T_{i}^{\prime }}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{{{U}_{i}}} \right)}} \right)}^{\beta }}+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-{{\left( \tfrac{T_{Li}^{\prime \prime }}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{U_{i}^{\prime \prime }} \right)}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-{{\left( \tfrac{T_{Ri}^{\prime \prime }}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{U_{i}^{\prime \prime }} \right)}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter (unknown, the first of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is the T-H parameter (unknown, the second of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; is the second T-H parameter (unknown, the third of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; is the third T-H parameter (unknown, the fourth of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the temperature level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{i}}\,\!&amp;lt;/math&amp;gt; is the relative humidity level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;A,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\phi ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \beta }=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial A}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \phi }=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial b}=0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===T-H Weibull Example===&lt;br /&gt;
{{:T-H_Example}}&lt;br /&gt;
&lt;br /&gt;
=T-H Lognormal=&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the lognormal distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\overline{{{T}&#039;}}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{T}&#039;=\ln (T)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
T=\text{times-to-failure}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\overline{{{T}&#039;}}=\,\!&amp;lt;/math&amp;gt; mean of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\,\!&amp;lt;/math&amp;gt; standard deviation of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
The median of the lognormal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}={{e}^{{{\overline{T}}^{\prime }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The T-H lognormal model &#039;&#039;pdf&#039;&#039; can be obtained by first setting &amp;lt;math&amp;gt;\breve{T}=L(V,U)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Therefore:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=L(V,U)=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{e}^{{{\overline{T}}^{\prime }}}}=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\overline{T}}^{\prime }}=\ln (A)+\frac{\phi }{V}+\frac{b}{U}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting the above equation into the lognormal &#039;&#039;pdf&#039;&#039; yields the T-H lognormal model &#039;&#039;pdf&#039;&#039; or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T,V,U)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (A)-\tfrac{\phi }{V}-\tfrac{b}{U}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==T-H Lognormal Statistical Properties Summary==&lt;br /&gt;
===The Mean===&lt;br /&gt;
&lt;br /&gt;
*The mean life of the T-H lognormal model (mean of the times-to-failure), &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\bar{T}={{e}^{\bar{{T}&#039;}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}={{e}^{\ln (A)+\tfrac{\phi }{V}+\tfrac{b}{U}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The mean of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\bar{T}}^{^{\prime }}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{T}}^{\prime }}=\ln \left( {\bar{T}} \right)-\frac{1}{2}\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===The Standard Deviation===&lt;br /&gt;
*The standard deviation of the T-H lognormal model (standard deviation of the times-to-failure), &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=\sqrt{\left( {{e}^{2\bar{{T}&#039;}+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)}=\sqrt{\left( {{e}^{2\left( \ln (A)+\tfrac{\phi }{V}+\tfrac{b}{U} \right)+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The standard deviation of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\sqrt{\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===The Mode===&lt;br /&gt;
*The mode of the T-H lognormal model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}={{e}^{{{\overline{T}}^{\prime }}-\sigma _{{{T}&#039;}}^{2}}}={{e}^{\ln (A)+\tfrac{\phi }{V}+\tfrac{b}{U}-\sigma _{{{T}&#039;}}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===T-H Lognormal Reliability===&lt;br /&gt;
The reliability for a mission of time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, starting at age 0, for the T-H lognormal model is determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V,U)=\int_{T}^{\infty }f(t,V,U)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V,U)=\int_{{{T}^{^{\prime }}}}^{\infty }\frac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (A)-\tfrac{\phi }{V}-\tfrac{b}{U}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no closed form solution for the lognormal reliability function. Solutions can be obtained via the use of standard normal tables. Since the application automatically solves for the reliability, we will not discuss manual solution methods.&lt;br /&gt;
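Instead of standard normal tables, the standard normal CDF can be evaluated directly; Python's standard library provides it. A minimal sketch with hypothetical parameter and stress values:

```python
import math
from statistics import NormalDist

# Hypothetical T-H lognormal parameters (illustration only).
A, phi, b = 0.01, 5000.0, 3.0
sigma_Tp = 0.5                       # std. dev. of the log times-to-failure
V, U = 300.0, 0.6

mu = math.log(A) + phi / V + b / U   # mean of ln(T) under the T-H model

def R(t):
    """R(t) = 1 - Phi((ln t - mu) / sigma), Phi = standard normal CDF."""
    z = (math.log(t) - mu) / sigma_Tp
    return 1.0 - NormalDist().cdf(z)

# At the median life exp(mu), the reliability is exactly one half.
r_median = R(math.exp(mu))
```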
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
For the T-H lognormal model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is estimated by first solving the reliability equation with respect to time, as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;T_{R}^{\prime }=\ln (A)+\frac{\phi }{V}+\frac{b}{U}+z\cdot {{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z={{\Phi }^{-1}}\left[ F\left( T_{R}^{\prime },V,U \right) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;,V,U)}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;{T}&#039;=\ln (T),\,\!&amp;lt;/math&amp;gt; the reliable life, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}={{e}^{T_{R}^{\prime }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
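The two steps above (solve for &amp;lt;math&amp;gt;T_{R}^{\prime }\,\!&amp;lt;/math&amp;gt;, then exponentiate) can be sketched as below, using the inverse standard normal CDF from Python's standard library and hypothetical parameter values:

```python
import math
from statistics import NormalDist

# Hypothetical T-H lognormal parameters (illustration only).
A, phi, b = 0.01, 5000.0, 3.0
sigma_Tp = 0.5
V, U = 300.0, 0.6
R_goal = 0.90

mu = math.log(A) + phi / V + b / U
z = NormalDist().inv_cdf(1.0 - R_goal)   # z = Phi^{-1}(F), with F = 1 - R
t_R = math.exp(mu + z * sigma_Tp)        # t_R = e^{T'_R}
```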
&lt;br /&gt;
===T-H Lognormal Failure Rate===&lt;br /&gt;
The lognormal failure rate is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (T,V,U)=\frac{f(T,V,U)}{R(T,V,U)}=\frac{\tfrac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (A)-\tfrac{\phi }{V}-\tfrac{b}{U}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}}{\int_{{{T}&#039;}}^{\infty }\tfrac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (A)-\tfrac{\phi }{V}-\tfrac{b}{U}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
The complete T-H lognormal log-likelihood function is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln (L)=\Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{1}{{{\sigma }_{{{T}&#039;}}}{{T}_{i}}}{{\phi }_{pdf}}\left( \frac{\ln \left( {{T}_{i}} \right)-\ln (A)-\tfrac{\phi }{{{V}_{i}}}-\tfrac{b}{{{U}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right]+\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\ln \left[ 1-\Phi \left( \frac{\ln \left( T_{i}^{\prime } \right)-\ln (A)-\tfrac{\phi }{{{V}_{i}}}-\tfrac{b}{{{U}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right]+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Li}^{\prime \prime }=\frac{\ln T_{Li}^{\prime \prime }-\ln A-\tfrac{\phi }{{{V}_{i}}}-\tfrac{b}{U_{i}^{\prime \prime }}}{{{\sigma }_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Ri}^{\prime \prime }=\frac{\ln T_{Ri}^{\prime \prime }-\ln A-\tfrac{\phi }{{{V}_{i}}}-\tfrac{b}{U_{i}^{\prime \prime }}}{{{\sigma }_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\phi }_{pdf}}\left( x \right)=\frac{1}{\sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( x \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; is the standard deviation of the natural logarithm of the times-to-failure (unknown, the first of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is the first T-H parameter (unknown, the second of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; is the second T-H parameter (unknown, the third of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; is the third T-H parameter (unknown, the fourth of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level for the first stress type (i.e., temperature) of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level for the second stress type (i.e., relative humidity) of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.	&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{\phi },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial A}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \phi }=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial b}=0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
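In practice these simultaneous equations are solved numerically. As a minimal sketch (not the software's actual routine), the log-likelihood for exact failure times only — the first sum above, with the suspension and interval terms omitted for brevity — can be maximized with `scipy.optimize`. All data values and starting points below are hypothetical, and the model is reparameterized in terms of ln(A) to avoid a positivity constraint:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, t, V, U):
    """Negative T-H lognormal log-likelihood, exact failure times only.

    params = (sigma_Tp, ln_A, phi, b); suspension and interval terms of
    the complete likelihood are omitted for brevity.
    """
    sigma, ln_A, phi, b = params
    if sigma <= 0:
        return np.inf                     # keep the search in the valid region
    mu = ln_A + phi / V + b / U           # mean of ln(T) at each stress level
    z = (np.log(t) - mu) / sigma
    # lognormal pdf term: (1/(sigma*t)) * phi_pdf(z)
    return -np.sum(norm.logpdf(z) - np.log(sigma * t))

# Hypothetical failure times at two temperature/humidity combinations
t = np.array([120., 140., 165., 200., 60., 75., 90., 110.])
V = np.array([348.] * 4 + [378.] * 4)     # temperature, K
U = np.array([0.8] * 8)                   # relative humidity

res = minimize(neg_log_likelihood, x0=[0.5, -5.0, 3000.0, 0.1],
               args=(t, V, U), method="Nelder-Mead")
sigma_hat, lnA_hat, phi_hat, b_hat = res.x
```

Setting the four partial derivatives to zero is equivalent to this maximization; a production implementation would also include the censored-data terms and supply analytic gradients.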
&lt;br /&gt;
= T-H Confidence Bounds =&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==Approximate Confidence Bounds for the T-H Exponential==&lt;br /&gt;
===Confidence Bounds on the Mean Life===&lt;br /&gt;
The mean life for the T-H exponential distribution is given by the T-H relationship by setting &amp;lt;math&amp;gt;m=L(V,U)\,\!&amp;lt;/math&amp;gt;. The upper &amp;lt;math&amp;gt;({{m}_{U}})\,\!&amp;lt;/math&amp;gt; and lower &amp;lt;math&amp;gt;({{m}_{L}})\,\!&amp;lt;/math&amp;gt; bounds on the mean life are estimated from the ML estimate of the mean life by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{U}}=\widehat{m}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{L}}=\widehat{m}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is defined by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt; is the confidence level, then &amp;lt;math&amp;gt;\alpha =\tfrac{1-\delta }{2}\,\!&amp;lt;/math&amp;gt; for the two-sided bounds, and &amp;lt;math&amp;gt;\alpha =1-\delta \,\!&amp;lt;/math&amp;gt; for the one-sided bounds. The variance of &amp;lt;math&amp;gt;\widehat{m}\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{m})=\ &amp;amp; {{\left( \frac{\partial m}{\partial A} \right)}^{2}}Var(\widehat{A})+{{\left( \frac{\partial m}{\partial \phi } \right)}^{2}}Var(\widehat{\phi }) +{{\left( \frac{\partial m}{\partial b} \right)}^{2}}Var(\widehat{b}) +2\left( \frac{\partial m}{\partial A} \right)\left( \frac{\partial m}{\partial \phi } \right)Cov(\widehat{A},\widehat{\phi }) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial m}{\partial A} \right)\left( \frac{\partial m}{\partial b} \right)Cov(\widehat{A},\widehat{b}) +2\left( \frac{\partial m}{\partial \phi } \right)\left( \frac{\partial m}{\partial b} \right)Cov(\widehat{\phi },\widehat{b})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{m})=\ &amp;amp; {{e}^{2\left( \tfrac{\widehat{\phi }}{V}+\tfrac{\widehat{b}}{U} \right)}}[Var(\widehat{A})+\frac{{{\widehat{A}}^{2}}}{{{V}^{2}}}Var(\widehat{\phi }) +\frac{{{\widehat{A}}^{2}}}{{{U}^{2}}}Var(\widehat{b}) \\ &lt;br /&gt;
 &amp;amp; +\frac{2\widehat{A}}{V}Cov(\widehat{A},\widehat{\phi })+\frac{2\widehat{A}}{U}Cov(\widehat{A},\widehat{b}) +\frac{2{{\widehat{A}}^{2}}}{V\cdot U}Cov(\widehat{\phi },\widehat{b})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{b},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{\phi })\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{A}) &amp;amp; Cov(\widehat{A},\widehat{\phi }) &amp;amp; Cov(\widehat{A},\widehat{b})  \\&lt;br /&gt;
   Cov(\widehat{\phi },\widehat{A}) &amp;amp; Var(\widehat{\phi }) &amp;amp; Cov(\widehat{\phi },\widehat{b})  \\&lt;br /&gt;
   Cov(\widehat{b},\widehat{A}) &amp;amp; Cov(\widehat{b},\widehat{\phi }) &amp;amp; Var(\widehat{b})  \\&lt;br /&gt;
\end{matrix} \right]=\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{A}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial \phi } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial b}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\phi }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial b}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial \phi } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{b}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]_{}^{-1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
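The inversion of the local Fisher matrix is a plain matrix inverse. As a sketch with entirely hypothetical second-derivative values (the real matrix comes from the fitted log-likelihood), the variance-covariance terms fall out of `numpy.linalg.inv`:

```python
import numpy as np

# Hypothetical local Fisher matrix for (A, phi, b): the negative Hessian
# of the log-likelihood evaluated at the ML estimates.
F = np.array([[ 2.5e-2, -1.1e-4, -3.0e-3],
              [-1.1e-4,  4.0e-6,  2.0e-5],
              [-3.0e-3,  2.0e-5,  9.0e-4]])

cov = np.linalg.inv(F)                    # variance-covariance matrix
var_A, var_phi, var_b = np.diag(cov)      # Var(A), Var(phi), Var(b)
cov_A_phi = cov[0, 1]                     # Cov(A, phi)
```

For a valid (positive-definite) Fisher matrix the diagonal of the inverse gives strictly positive variances, which is what the bound formulas above require.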
&lt;br /&gt;
===Confidence Bounds on Reliability===&lt;br /&gt;
The bounds on reliability at a given time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, are estimated by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{U}}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{L}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time (ML estimate of time) for a given reliability are estimated by first solving the reliability function for time: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{T}=-\widehat{m}\cdot \ln (R)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding confidence bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; -{{m}_{U}}\cdot \ln (R) \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; -{{m}_{L}}\cdot \ln (R)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
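The mean-life, reliability and time bounds for the exponential case chain together directly. A minimal sketch, using hypothetical values for the ML estimate and its variance (and `scipy.stats.norm.ppf` for the critical value &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt;):

```python
import math
from scipy.stats import norm

def exp_mean_bounds(m_hat, var_m, delta=0.90):
    """Two-sided bounds on the T-H exponential mean life.

    K_alpha solves alpha = 1 - Phi(K_alpha) with alpha = (1 - delta)/2.
    """
    K = norm.ppf(1 - (1 - delta) / 2)
    w = math.exp(K * math.sqrt(var_m) / m_hat)
    return m_hat * w, m_hat / w                       # (m_U, m_L)

# Hypothetical ML results
m_hat, var_m = 1500.0, 4.0e4
m_U, m_L = exp_mean_bounds(m_hat, var_m)

T = 500.0
R_U, R_L = math.exp(-T / m_U), math.exp(-T / m_L)     # reliability bounds at T
R = 0.9
T_U, T_L = -m_U * math.log(R), -m_L * math.log(R)     # time bounds at R
```

Because reliability and time are monotone in the mean, the upper mean bound feeds the upper reliability and time bounds, and likewise for the lower bound.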
&lt;br /&gt;
==Approximate Confidence Bounds for the T-H Weibull==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
&lt;br /&gt;
Using the same approach as previously discussed, and treating &amp;lt;math&amp;gt;\widehat{\beta }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{A}\,\!&amp;lt;/math&amp;gt; as positive parameters:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\beta }_{U}}= &amp;amp; \widehat{\beta }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{\beta }_{L}}= &amp;amp; \widehat{\beta }\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{A}_{U}}= &amp;amp; \widehat{A}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{A})}}{\widehat{A}}}} \\ &lt;br /&gt;
 &amp;amp; {{A}_{L}}= &amp;amp; \widehat{A}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{A})}}{\widehat{A}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{b}_{U}}= &amp;amp; \widehat{b}+{{K}_{\alpha }}\sqrt{Var(\widehat{b})} \\ &lt;br /&gt;
 &amp;amp; {{b}_{L}}= &amp;amp; \widehat{b}-{{K}_{\alpha }}\sqrt{Var(\widehat{b})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\phi }_{U}}= &amp;amp; \widehat{\phi }+{{K}_{\alpha }}\sqrt{Var(\widehat{\phi })} \\ &lt;br /&gt;
 &amp;amp; {{\phi }_{L}}= &amp;amp; \widehat{\phi }-{{K}_{\alpha }}\sqrt{Var(\widehat{\phi })}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;b,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{\beta },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{b},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{\phi })\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{\beta }) &amp;amp; Cov(\widehat{\beta },\widehat{A}) &amp;amp; Cov(\widehat{\beta },\widehat{b}) &amp;amp; Cov(\widehat{\beta },\widehat{\phi })  \\&lt;br /&gt;
   Cov(\widehat{A},\widehat{\beta }) &amp;amp; Var(\widehat{A}) &amp;amp; Cov(\widehat{A},\widehat{b}) &amp;amp; Cov(\widehat{A},\widehat{\phi })  \\&lt;br /&gt;
   Cov(\widehat{b},\widehat{\beta }) &amp;amp; Cov(\widehat{b},\widehat{A}) &amp;amp; Var(\widehat{b}) &amp;amp; Cov(\widehat{b},\widehat{\phi })  \\&lt;br /&gt;
   Cov(\widehat{\phi },\widehat{\beta }) &amp;amp; Cov(\widehat{\phi },\widehat{A}) &amp;amp; Cov(\widehat{\phi },\widehat{b}) &amp;amp; Var(\widehat{\phi })  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ F \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F=\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\beta }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial b} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial \phi }  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{A}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial b} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial \phi }  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{b}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial \phi }  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial b} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\phi }^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Reliability===&lt;br /&gt;
The reliability function (ML estimate) for the T-H Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V,U)={{e}^{-{{\left( \tfrac{T}{\widehat{A}}{{e}^{-\left( \tfrac{\widehat{\phi }}{V}+\tfrac{\widehat{b}}{U} \right)}} \right)}^{\widehat{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V,U)={{e}^{-{{e}^{\ln \left[ {{\left( \tfrac{T}{\widehat{A}}{{e}^{-\left( \tfrac{\widehat{\phi }}{V}+\tfrac{\widehat{b}}{U} \right)}} \right)}^{\widehat{\beta }}} \right]}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Setting: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\ln \left[ {{\left( \frac{T}{\widehat{A}}{{e}^{-\left( \tfrac{\widehat{\phi }}{V}+\tfrac{\widehat{b}}{U} \right)}} \right)}^{\widehat{\beta }}} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\widehat{\beta }\left[ \ln (T)-\ln (\widehat{A})-\frac{\widehat{\phi }}{V}-\frac{\widehat{b}}{U} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V,U)={{e}^{-{{e}^{\widehat{u}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to find the upper and lower bounds on &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{u}}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{u}}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial A} \right)}^{2}}Var(\widehat{A}) +{{\left( \frac{\partial \widehat{u}}{\partial b} \right)}^{2}}Var(\widehat{b})+{{\left( \frac{\partial \widehat{u}}{\partial \phi } \right)}^{2}}Var(\widehat{\phi }) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial A} \right)Cov(\widehat{\beta },\widehat{A}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial b} \right)Cov(\widehat{\beta },\widehat{b}) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial \phi } \right)Cov(\widehat{\beta },\widehat{\phi }) +2\left( \frac{\partial \widehat{u}}{\partial A} \right)\left( \frac{\partial \widehat{u}}{\partial b} \right)Cov(\widehat{A},\widehat{b}) +2\left( \frac{\partial \widehat{u}}{\partial A} \right)\left( \frac{\partial \widehat{u}}{\partial \phi } \right)Cov(\widehat{A},\widehat{\phi }) +2\left( \frac{\partial \widehat{u}}{\partial b} \right)\left( \frac{\partial \widehat{u}}{\partial \phi } \right)Cov(\widehat{b},\widehat{\phi })  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; {{\left( \frac{\widehat{u}}{\widehat{\beta }} \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\widehat{\beta }}{\widehat{A}} \right)}^{2}}Var(\widehat{A}) +{{\left( \frac{\widehat{\beta }}{U} \right)}^{2}}Var(\widehat{b})+{{\left( \frac{\widehat{\beta }}{V} \right)}^{2}}Var(\widehat{\phi }) -\frac{2\widehat{u}}{\widehat{A}}Cov(\widehat{\beta },\widehat{A})-\frac{2\widehat{u}}{U}Cov(\widehat{\beta },\widehat{b})-\frac{2\widehat{u}}{V}Cov(\widehat{\beta },\widehat{\phi }) \\ &lt;br /&gt;
 &amp;amp;  +\frac{2{{\widehat{\beta }}^{2}}}{\widehat{A}U}Cov(\widehat{A},\widehat{b})+\frac{2{{\widehat{\beta }}^{2}}}{\widehat{A}V}Cov(\widehat{A},\widehat{\phi }) +\frac{2{{\widehat{\beta }}^{2}}}{UV}Cov(\widehat{\phi },\widehat{b})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{L}} \right)}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{U}} \right)}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
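A minimal sketch of the Weibull reliability bounds via the &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; transform, with hypothetical parameter estimates and a hypothetical Var(û) (in practice computed from the delta-method expression above):

```python
import math
from scipy.stats import norm

def weibull_reliability_bounds(T, V, U, beta, A, phi, b, var_u, delta=0.90):
    """Bounds on T-H Weibull reliability through
    u = beta*(ln T - ln A - phi/V - b/U)."""
    K = norm.ppf(1 - (1 - delta) / 2)
    u = beta * (math.log(T) - math.log(A) - phi / V - b / U)
    u_U, u_L = u + K * math.sqrt(var_u), u - K * math.sqrt(var_u)
    # R_U uses u_L and R_L uses u_U because R = exp(-exp(u)) decreases in u
    return math.exp(-math.exp(u_L)), math.exp(-math.exp(u_U))

R_U, R_L = weibull_reliability_bounds(T=300., V=348., U=0.8,
                                      beta=1.8, A=0.07, phi=3000., b=0.2,
                                      var_u=0.05)
```

Note the bound swap: the lower bound on û yields the upper bound on reliability, since reliability falls as û rises.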
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time (ML estimate of time) for a given reliability are estimated by first solving the reliability function with respect to time as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \ln (R)=\ &amp;amp; -{{\left( \frac{\widehat{T}}{\widehat{A}}{{e}^{-\left( \tfrac{\widehat{\phi }}{V}+\tfrac{\widehat{b}}{U} \right)}} \right)}^{\widehat{\beta }}} \\ &lt;br /&gt;
 \ln (-\ln (R))=\ &amp;amp; \widehat{\beta }\left( \ln \widehat{T}-\ln \widehat{A}-\frac{\widehat{\phi }}{V}-\frac{\widehat{b}}{U} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\frac{1}{\widehat{\beta }}\ln (-\ln (R))+\ln \widehat{A}+\frac{\widehat{\phi }}{V}+\frac{\widehat{b}}{U}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\widehat{u}=\ln \widehat{T}.\,\!&amp;lt;/math&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial A} \right)}^{2}}Var(\widehat{A}) +{{\left( \frac{\partial \widehat{u}}{\partial b} \right)}^{2}}Var(\widehat{b})+{{\left( \frac{\partial \widehat{u}}{\partial \phi } \right)}^{2}}Var(\widehat{\phi }) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial A} \right)Cov(\widehat{\beta },\widehat{A}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial b} \right)Cov(\widehat{\beta },\widehat{b}) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial \phi } \right)Cov(\widehat{\beta },\widehat{\phi }) +2\left( \frac{\partial \widehat{u}}{\partial A} \right)\left( \frac{\partial \widehat{u}}{\partial b} \right)Cov(\widehat{A},\widehat{b}) +2\left( \frac{\partial \widehat{u}}{\partial A} \right)\left( \frac{\partial \widehat{u}}{\partial \phi } \right)Cov(\widehat{A},\widehat{\phi }) +2\left( \frac{\partial \widehat{u}}{\partial b} \right)\left( \frac{\partial \widehat{u}}{\partial \phi } \right)Cov(\widehat{b},\widehat{\phi })  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; \frac{1}{{{\widehat{\beta }}^{4}}}{{\left[ \ln (-\ln (R)) \right]}^{2}}Var(\widehat{\beta })+\frac{1}{{{\widehat{A}}^{2}}}Var(\widehat{A}) +\frac{1}{{{U}^{2}}}Var(\widehat{b})+\frac{1}{{{V}^{2}}}Var(\widehat{\phi }) -\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}\widehat{A}}Cov(\widehat{\beta },\widehat{A}) -\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}U}Cov(\widehat{\beta },\widehat{b}) \\ &lt;br /&gt;
 &amp;amp; -\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}V}Cov(\widehat{\beta },\widehat{\phi }) +\frac{2}{\widehat{A}U}Cov(\widehat{A},\widehat{b}) +\frac{2}{\widehat{A}V}Cov(\widehat{A},\widehat{\phi }) +\frac{2}{VU}Cov(\widehat{b},\widehat{\phi })  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on time are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{{{u}_{U}}}} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{{{u}_{L}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
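The Weibull time bounds follow the same pattern in the other direction, solving for &amp;lt;math&amp;gt;u=\ln T\,\!&amp;lt;/math&amp;gt; at a given reliability. A sketch with hypothetical estimates and a hypothetical Var(û):

```python
import math
from scipy.stats import norm

def weibull_time_bounds(R, V, U, beta, A, phi, b, var_u, delta=0.90):
    """Bounds on time at reliability R for the T-H Weibull, via
    u = ln T = (1/beta)*ln(-ln R) + ln A + phi/V + b/U."""
    K = norm.ppf(1 - (1 - delta) / 2)
    u = math.log(-math.log(R)) / beta + math.log(A) + phi / V + b / U
    du = K * math.sqrt(var_u)
    # exponentiate the bounds on u = ln T to recover bounds on T
    return math.exp(u + du), math.exp(u - du)         # (T_U, T_L)

T_U, T_L = weibull_time_bounds(R=0.9, V=348., U=0.8,
                               beta=1.8, A=0.07, phi=3000., b=0.2,
                               var_u=0.05)
```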
&lt;br /&gt;
==Approximate Confidence Bounds for the T-H Lognormal==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
Since the standard deviation, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\widehat{A}\,\!&amp;lt;/math&amp;gt; are positive parameters, &amp;lt;math&amp;gt;\ln ({{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\ln (\widehat{A})\,\!&amp;lt;/math&amp;gt; are treated as normally distributed and the bounds are estimated from: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{\sigma }_{U}}=\ &amp;amp; {{\widehat{\sigma }}_{{{T}&#039;}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}&amp;amp;\text{ (Upper bound)} \\ &lt;br /&gt;
 {{\sigma }_{L}}=\ &amp;amp; \frac{{{\widehat{\sigma }}_{{{T}&#039;}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}}&amp;amp;\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{A}_{U}}=\ &amp;amp; \widehat{A}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{A})}}{\widehat{A}}}}&amp;amp;\text{ (Upper bound)} \\ &lt;br /&gt;
 {{A}_{L}}=\ &amp;amp; \frac{\widehat{A}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{A})}}{\widehat{A}}}}}&amp;amp;\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The lower and upper bounds on &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\phi }_{U}}= &amp;amp; \widehat{\phi }+{{K}_{\alpha }}\sqrt{Var(\widehat{\phi })}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{\phi }_{L}}= &amp;amp; \widehat{\phi }-{{K}_{\alpha }}\sqrt{Var(\widehat{\phi })}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{b}_{U}}= &amp;amp; \widehat{b}+{{K}_{\alpha }}\sqrt{Var(\widehat{b})}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{b}_{L}}= &amp;amp; \widehat{b}-{{K}_{\alpha }}\sqrt{Var(\widehat{b})}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\phi ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;b,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{A}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{\phi },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}),\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left( \begin{matrix}&lt;br /&gt;
   Var\left( {{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{\phi },{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{A} \right) &amp;amp; Var\left( \widehat{A} \right) &amp;amp; Cov\left( \widehat{A},\widehat{\phi } \right) &amp;amp; Cov\left( \widehat{A},\widehat{b} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{\phi } \right) &amp;amp; Cov\left( \widehat{\phi },\widehat{A} \right) &amp;amp; Var\left( \widehat{\phi } \right) &amp;amp; Cov\left( \widehat{\phi },\widehat{b} \right)  \\&lt;br /&gt;
   Cov\left( \widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{b},\widehat{A} \right) &amp;amp; Cov\left( \widehat{b},\widehat{\phi } \right) &amp;amp; Var\left( \widehat{b} \right)  \\&lt;br /&gt;
\end{matrix} \right)={{F}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}^{-1}}={{\left( \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma _{{{T}&#039;}}^{2}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial \phi } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial b}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{A}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial \phi } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial b}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\phi }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial b}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial \phi } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{b}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right)}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bounds on Reliability===&lt;br /&gt;
The reliability of the lognormal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({T}&#039;,V,U;A,\phi ,b,{{\sigma }_{{{T}&#039;}}})=\int_{{{T}&#039;}}^{\infty }\frac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (\widehat{A})-\tfrac{\widehat{\phi }}{V}-\tfrac{\widehat{b}}{U}}{{{\widehat{\sigma }}_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\widehat{z}(t,V,U;A,\phi ,b,{{\sigma }_{{{T}&#039;}}})=\tfrac{t-\ln (\widehat{A})-\tfrac{\widehat{\phi }}{V}-\tfrac{\widehat{b}}{U}}{{{\widehat{\sigma }}_{{{T}&#039;}}}},\,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;\tfrac{d\widehat{z}}{dt}=\tfrac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}}.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
For &amp;lt;math&amp;gt;t={T}&#039;\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{z}=\tfrac{{T}&#039;-\ln (\widehat{A})-\tfrac{\widehat{\phi }}{V}-\tfrac{\widehat{b}}{U}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;, and for &amp;lt;math&amp;gt;t=\infty ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{z}=\infty .\,\!&amp;lt;/math&amp;gt; The above equation then becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(\widehat{z})=\int_{\widehat{z}({T}&#039;,V,U)}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The bounds on &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{z}_{U}}= &amp;amp; \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ &lt;br /&gt;
 &amp;amp; {{z}_{L}}= &amp;amp; \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{z})=\ &amp;amp; \left( \frac{\partial \widehat{z}}{\partial A} \right)_{\widehat{A}}^{2}Var(\widehat{A})+\left( \frac{\partial \widehat{z}}{\partial \phi } \right)_{\widehat{\phi }}^{2}Var(\widehat{\phi }) +\left( \frac{\partial \widehat{z}}{\partial b} \right)_{\widehat{b}}^{2}Var(\widehat{b})+\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)_{{{\widehat{\sigma }}_{{{T}&#039;}}}}^{2}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2{{\left( \frac{\partial \widehat{z}}{\partial A} \right)}_{\widehat{A}}}{{\left( \frac{\partial \widehat{z}}{\partial \phi } \right)}_{\widehat{\phi }}}Cov\left( \widehat{A},\widehat{\phi } \right) +2{{\left( \frac{\partial \widehat{z}}{\partial A} \right)}_{\widehat{A}}}{{\left( \frac{\partial \widehat{z}}{\partial b} \right)}_{\widehat{b}}}Cov\left( \widehat{A},\widehat{b} \right) \\ &lt;br /&gt;
 &amp;amp;  +2{{\left( \frac{\partial \widehat{z}}{\partial \phi } \right)}_{\widehat{\phi }}}{{\left( \frac{\partial \widehat{z}}{\partial b} \right)}_{\widehat{b}}}Cov\left( \widehat{\phi },\widehat{b} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial A} \right)}_{\widehat{A}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial \phi } \right)}_{\widehat{\phi }}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{\phi },{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial b} \right)}_{\widehat{b}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{z})=\ &amp;amp; \frac{1}{\widehat{\sigma }_{{{T}&#039;}}^{2}}[\frac{1}{{{A}^{2}}}Var(\widehat{A})+\frac{1}{{{V}^{2}}}Var(\widehat{\phi })+\frac{1}{{{U}^{2}}}Var(\widehat{b})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2}{A\cdot V}Cov\left( \widehat{A},\widehat{\phi } \right)+\frac{2}{A\cdot U}Cov\left( \widehat{A},\widehat{b} \right) \\ &lt;br /&gt;
 &amp;amp; +\frac{2}{V\cdot U}Cov\left( \widehat{\phi },\widehat{b} \right)+\frac{2\widehat{z}}{A}Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +\frac{2\widehat{z}}{V}Cov\left( \widehat{\phi },{{\widehat{\sigma }}_{{{T}&#039;}}} \right)+\frac{2\widehat{z}}{U}Cov\left( \widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
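Both bounds above are values of the standard normal survival function evaluated at the bounds on the standard normal variable. A minimal numeric sketch (the z_L and z_U values below are purely illustrative, not taken from any data set):

```python
import math

def std_normal_survival(z):
    # P(Z > z) for a standard normal Z, via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# hypothetical bounds on the standard normal variable z
z_L, z_U = -0.5, 1.5

R_upper = std_normal_survival(z_L)  # upper bound on reliability
R_lower = std_normal_survival(z_U)  # lower bound on reliability
```

Because the survival function is decreasing in z, the lower z bound yields the upper reliability bound and vice versa.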
===Confidence Bounds on Time===&lt;br /&gt;
The bounds around time, for a given lognormal percentile (unreliability), are estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{T}&#039;(V,U;\widehat{A},\widehat{\phi },\widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}})=\ln (\widehat{A})+\frac{\widehat{\phi }}{V}+\frac{\widehat{b}}{U}+z\cdot {{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {T}&#039;(V,U;\widehat{A},\widehat{\phi },\widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}})=\ &amp;amp; \ln (T) \\ &lt;br /&gt;
 z=\ &amp;amp; {{\Phi }^{-1}}\left[ F({T}&#039;) \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;)}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to calculate the variance of &amp;lt;math&amp;gt;{T}&#039;(V,U;\widehat{A},\widehat{\phi },\widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var({T}&#039;)=\ &amp;amp; {{\left( \frac{\partial {T}&#039;}{\partial A} \right)}^{2}}Var(\widehat{A})+{{\left( \frac{\partial {T}&#039;}{\partial \phi } \right)}^{2}}Var(\widehat{\phi }) +{{\left( \frac{\partial {T}&#039;}{\partial b} \right)}^{2}}Var(\widehat{b})+{{\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2\left( \frac{\partial {T}&#039;}{\partial A} \right)\left( \frac{\partial {T}&#039;}{\partial \phi } \right)Cov\left( \widehat{A},\widehat{\phi } \right) \\ &lt;br /&gt;
 &amp;amp;  +2\left( \frac{\partial {T}&#039;}{\partial A} \right)\left( \frac{\partial {T}&#039;}{\partial b} \right)Cov\left( \widehat{A},\widehat{b} \right) +2\left( \frac{\partial {T}&#039;}{\partial \phi } \right)\left( \frac{\partial {T}&#039;}{\partial b} \right)Cov\left( \widehat{\phi },\widehat{b} \right) +2\left( \frac{\partial {T}&#039;}{\partial A} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) \\ &lt;br /&gt;
 &amp;amp;  +2\left( \frac{\partial {T}&#039;}{\partial \phi } \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{\phi },{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2\left( \frac{\partial {T}&#039;}{\partial b} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var({T}&#039;)=\ &amp;amp; \frac{1}{{{A}^{2}}}Var(\widehat{A})+\frac{1}{{{V}^{2}}}Var(\widehat{\phi }) +\frac{1}{{{U}^{2}}}Var(\widehat{b})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2}{A\cdot V}Cov\left( \widehat{A},\widehat{\phi } \right)+\frac{2}{A\cdot U}Cov\left( \widehat{A},\widehat{b} \right) \\ &lt;br /&gt;
 &amp;amp; +\frac{2}{V\cdot U}Cov\left( \widehat{\phi },\widehat{b} \right)+\frac{2\widehat{z}}{A}Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +\frac{2\widehat{z}}{V}Cov\left( \widehat{\phi },{{\widehat{\sigma }}_{{{T}&#039;}}} \right)+\frac{2\widehat{z}}{U}Cov\left( \widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; T_{U}^{\prime }= &amp;amp; \ln {{T}_{U}}={T}&#039;+{{K}_{\alpha }}\sqrt{Var({T}&#039;)} \\ &lt;br /&gt;
 &amp;amp; T_{L}^{\prime }= &amp;amp; \ln {{T}_{L}}={T}&#039;-{{K}_{\alpha }}\sqrt{Var({T}&#039;)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for &amp;lt;math&amp;gt;{{T}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{T}_{L}}\,\!&amp;lt;/math&amp;gt; yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{T_{U}^{\prime }}}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{T_{L}^{\prime }}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Temperature-Humidity_Relationship&amp;diff=64931</id>
		<title>Temperature-Humidity Relationship</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Temperature-Humidity_Relationship&amp;diff=64931"/>
		<updated>2017-02-08T21:11:56Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Conditional Reliability */ changed R(T,t,V,U) to R((t|T),V,U)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK|7}}&lt;br /&gt;
The Temperature-Humidity (T-H) relationship, a variation of the Eyring relationship, has been proposed for predicting the life at use conditions when temperature and humidity are the accelerated stresses in a test. This combination model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V,U)=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; is one of the three parameters to be determined.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; is the second of the three parameters to be determined (also known as the activation energy for humidity).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is a constant and the third of the three parameters to be determined.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; is the relative humidity  (decimal or percentage).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; is temperature (&#039;&#039;&#039;in absolute units&#039;&#039;&#039;). &lt;br /&gt;
&lt;br /&gt;
The T-H relationship can be linearized and plotted on a Life vs. Stress plot. The relationship is linearized by taking the natural logarithm of both sides of the T-H relationship, or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln (L(V,U))=\ln (A)+\frac{\phi }{V}+\frac{b}{U}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
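The relationship and its linearized form can be sketched directly in code. The parameter values below are hypothetical, chosen only to illustrate the formulas:

```python
import math

# hypothetical T-H parameters (illustration only)
A, phi, b = 5.0e-4, 6000.0, 0.5

def th_life(V, U):
    # L(V, U) = A * exp(phi/V + b/U); V in absolute units, U as a decimal
    return A * math.exp(phi / V + b / U)

def th_log_life(V, U):
    # linearized form: ln L = ln A + phi/V + b/U
    return math.log(A) + phi / V + b / U
```

Holding U fixed makes ln L linear in 1/V, and holding V fixed makes it linear in 1/U, which is why the plots below use a log-reciprocal scale.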
Since life is now a function of two stresses, a Life vs. Stress plot can only be obtained by keeping one of the two stresses constant and varying the other one. Doing so will yield a straight line where the term for the stress which is kept at a fixed value becomes another constant (in addition to the &amp;lt;math&amp;gt;\ln (A)\,\!&amp;lt;/math&amp;gt; constant). In the next two figures, data obtained from a temperature and humidity test were analyzed and plotted on Arrhenius paper. In the first figure, life is plotted versus temperature with relative humidity held at a fixed value. In the second figure, life is plotted versus relative humidity with temperature held at a fixed value.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA9.1.png|center|400px|Life vs. Temperature plot at a fixed relative humidity.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA9.2.png|center|400px|Life vs. Relative Humidity plot at a fixed temperature.]]&lt;br /&gt;
&lt;br /&gt;
Note that the Life vs. Stress plots are plotted on a log-reciprocal scale. Also note that the points shown in these plots represent the life characteristics at the test stress levels (the data set was fitted to a Weibull distribution, thus the points represent the scale parameter, &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt;). For example, the points shown in the first figure represent &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; at each of the test temperature levels (two temperature levels were considered in this test).&lt;br /&gt;
&lt;br /&gt;
===A look at the Parameters Phi and b===&lt;br /&gt;
Depending on which stress type is kept constant, it can be seen from the linearized T-H relationship that either the parameter &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; or the parameter &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; is the slope of the resulting line. If, for example, the humidity is kept constant, then &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; is the slope of the life line in a Life vs. Temperature plot. The steeper the slope, the greater the dependency of product life on temperature. In other words, &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; is a measure of the effect that temperature has on the life, and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; is a measure of the effect that relative humidity has on the life. The larger the value of &amp;lt;math&amp;gt;\phi ,\,\!&amp;lt;/math&amp;gt; the higher the dependency of the life on the temperature. Similarly, the larger the value of &amp;lt;math&amp;gt;b,\,\!&amp;lt;/math&amp;gt; the higher the dependency of the life on the humidity.&lt;br /&gt;
&lt;br /&gt;
===T-H Data===&lt;br /&gt;
When using the T-H relationship, the effect of both temperature and humidity on life is sought. For this reason, the test must be performed at combinations of the different stress levels of the two stress types. For example, assume that an accelerated test is to be performed at two temperature and two humidity levels. The two temperature levels were chosen to be 300K and 343K. The two humidity levels were chosen to be 0.6 and 0.8. It would be wrong to perform the test only at (300K, 0.6) and (343K, 0.8). Doing so would not provide information about the temperature-humidity effects on life, because both stresses are increased at the same time and it is therefore unknown which stress is causing the acceleration of life. A possible combination that would provide information about temperature-humidity effects on life would be (300K, 0.6), (300K, 0.8) and (343K, 0.8). It is clear that by testing at (300K, 0.6) and (300K, 0.8) the effect of humidity on life can be determined (since temperature remained constant). Similarly, the effects of temperature on life can be determined by testing at (300K, 0.8) and (343K, 0.8) since humidity remained constant.&lt;br /&gt;
&lt;br /&gt;
===Acceleration Factor===&lt;br /&gt;
The acceleration factor for the T-H relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{A}_{F}}=\frac{{{L}_{USE}}}{{{L}_{Accelerated}}}=\frac{A{{e}^{\tfrac{\phi }{{{V}_{u}}}+\tfrac{b}{{{U}_{u}}}}}}{A{{e}^{\tfrac{\phi }{{{V}_{A}}}+\tfrac{b}{{{U}_{A}}}}}}={{e}^{\phi \left( \tfrac{1}{{{V}_{u}}}-\tfrac{1}{{{V}_{A}}} \right)+b\left( \tfrac{1}{{{U}_{u}}}-\tfrac{1}{{{U}_{A}}} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{L}_{USE}}\,\!&amp;lt;/math&amp;gt; is the life at use stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{L}_{Accelerated}}\,\!&amp;lt;/math&amp;gt; is the life at the accelerated stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{u}}\,\!&amp;lt;/math&amp;gt; is the use temperature level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{A}}\,\!&amp;lt;/math&amp;gt; is the accelerated temperature level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{A}}\,\!&amp;lt;/math&amp;gt; is the accelerated humidity level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{u}}\,\!&amp;lt;/math&amp;gt; is the use humidity level.&lt;br /&gt;
&lt;br /&gt;
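Note that the constant A cancels in the ratio, so the acceleration factor depends only on the two slope parameters and the stress levels. A minimal sketch with hypothetical parameter values and stress levels:

```python
import math

def th_acceleration_factor(phi, b, V_use, U_use, V_acc, U_acc):
    # A_F = exp(phi*(1/V_u - 1/V_A) + b*(1/U_u - 1/U_A)); the constant A cancels
    return math.exp(phi * (1.0 / V_use - 1.0 / V_acc)
                    + b * (1.0 / U_use - 1.0 / U_acc))

# hypothetical parameters and stress levels (illustration only)
AF = th_acceleration_factor(phi=6000.0, b=0.5,
                            V_use=300.0, U_use=0.4,
                            V_acc=343.0, U_acc=0.8)
```

With positive phi and b, raising either temperature or humidity above the use level yields an acceleration factor greater than 1.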
The acceleration Factor is plotted versus stress in the same manner used to create the Life vs. Stress plots. That is, one stress type is kept constant and the other is varied as shown in the next two figures.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA9.3.png|center|400px|Acceleration Factor vs. Temperature at a fixed relative humidity.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA9.4.png|center|400px|Acceleration Factor vs. Humidity at a fixed temperature.]]&lt;br /&gt;
&lt;br /&gt;
=T-H Exponential=&lt;br /&gt;
By setting &amp;lt;math&amp;gt;m=L(V,U)\,\!&amp;lt;/math&amp;gt; in the exponential &#039;&#039;pdf&#039;&#039; we can obtain the T-H exponential &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V,U)=\frac{1}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}\cdot {{e}^{-\tfrac{t}{A}\cdot {{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==T-H Exponential Statistical Properties Summary==&lt;br /&gt;
===Mean or MTTF===&lt;br /&gt;
&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T},\,\!&amp;lt;/math&amp;gt; or Mean Time To Failure (MTTF) for the T-H exponential model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=\int_{0}^{\infty }t\cdot f(t,V,U)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting the T-H exponential &#039;&#039;pdf&#039;&#039; equation yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \overline{T}= &amp;amp; \int_{0}^{\infty }t\cdot \frac{1}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}{{e}^{-\tfrac{t}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}}}dt =\ &amp;amp; A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Median===&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; for the T-H exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=0.693\cdot A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode===&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; for the T-H exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Standard Deviation===&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, for the T-H exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===T-H Exponential Reliability Function===&lt;br /&gt;
The T-H exponential reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V,U)={{e}^{-\tfrac{T}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function is the complement of the T-H exponential cumulative distribution function or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V,U)=1-Q(T,V,U)=1-\int_{0}^{T}f(T)dT\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V,U)=1-\int_{0}^{T}\frac{1}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}{{e}^{-\tfrac{T}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}}}dT={{e}^{-\tfrac{T}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conditional Reliability===&lt;br /&gt;
The conditional reliability function for the T-H exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V,U)=\frac{R(T+t,V,U)}{R(T,V,U)}=\frac{{{e}^{-\lambda (T+t)}}}{{{e}^{-\lambda T}}}={{e}^{-\tfrac{t}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
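The last step of the equation above reflects the memoryless property of the exponential distribution: the conditional reliability for an additional time t does not depend on the accumulated age T. A small sketch (parameter values are hypothetical):

```python
import math

def th_exp_reliability(t, A, phi, b, V, U):
    # R(t, V, U) = exp(-(t/A) * exp(-(phi/V + b/U)))
    return math.exp(-(t / A) * math.exp(-(phi / V + b / U)))

def th_exp_conditional_reliability(t, T, A, phi, b, V, U):
    # R(t | T) = R(T + t) / R(T); for the exponential this collapses to R(t)
    return (th_exp_reliability(T + t, A, phi, b, V, U)
            / th_exp_reliability(T, A, phi, b, V, U))
```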
===Reliable Life===&lt;br /&gt;
For the T-H exponential model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({{t}_{R}},V,U)={{e}^{-\tfrac{{{t}_{R}}}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln [R({{t}_{R}},V,U)]=-\frac{{{t}_{R}}}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=-A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\ln [R({{t}_{R}},V,U)]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
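The reliable life formula inverts the reliability function exactly, so plugging the resulting t_R back into R should recover the reliability goal. A minimal round-trip sketch with hypothetical parameter values:

```python
import math

def th_exp_reliability(t, A, phi, b, V, U):
    # R(t, V, U) = exp(-(t/A) * exp(-(phi/V + b/U)))
    return math.exp(-(t / A) * math.exp(-(phi / V + b / U)))

def th_exp_reliable_life(R_goal, A, phi, b, V, U):
    # t_R = -A * exp(phi/V + b/U) * ln(R_goal)
    return -A * math.exp(phi / V + b / U) * math.log(R_goal)
```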
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
&lt;br /&gt;
Substituting the T-H model into the exponential log-likelihood equation yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{1}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{{{U}_{i}}} \right)}}\cdot {{e}^{-\tfrac{{{T}_{i}}}{A}\cdot {{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{{{U}_{i}}} \right)}}}} \right] -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{T_{i}^{\prime }}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{{{U}_{i}}} \right)}}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-\tfrac{T_{Li}^{\prime \prime }}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{U_{i}^{\prime \prime }} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-\tfrac{T_{Ri}^{\prime \prime }}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{U_{i}^{\prime \prime }} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is the T-H parameter (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; is the second T-H parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; is the third T-H parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the temperature level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{i}}\,\!&amp;lt;/math&amp;gt; is the relative humidity level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime}\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;A,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial A}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \phi }=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial b}=0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=T-H Weibull=&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: T-H_Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
By setting &amp;lt;math&amp;gt;\eta =L(V,U)\,\!&amp;lt;/math&amp;gt; in the Weibull &#039;&#039;pdf&#039;&#039;, the T-H Weibull model&#039;s &#039;&#039;pdf&#039;&#039; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\,V,\,U)=\frac{\beta }{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}{{\left( \frac{t}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{t}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==T-H Weibull Statistical Properties Summary==&lt;br /&gt;
===Mean or MTTF===&lt;br /&gt;
&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T}\,\!&amp;lt;/math&amp;gt; (also called &amp;lt;math&amp;gt;MTTF\,\!&amp;lt;/math&amp;gt; ), of the T-H Weibull model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\cdot \Gamma \left( \frac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Gamma \left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt; is the gamma function evaluated at the value of &amp;lt;math&amp;gt;\left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
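A quick way to sanity-check this expression is that with beta = 1 the Weibull reduces to the exponential, so the gamma factor becomes Gamma(2) = 1 and the MTTF reduces to the exponential MTTF. A sketch with hypothetical parameter values:

```python
import math

def th_weibull_mttf(A, phi, b, beta, V, U):
    # MTTF = A * exp(phi/V + b/U) * Gamma(1/beta + 1)
    return A * math.exp(phi / V + b / U) * math.gamma(1.0 / beta + 1.0)
```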
===Median===&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; of the T-H Weibull model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}{{\left( \ln 2 \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode===&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; of the T-H Weibull model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}{{\left( 1-\frac{1}{\beta } \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Standard Deviation===&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}},\,\!&amp;lt;/math&amp;gt; of the T-H Weibull model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\cdot \sqrt{\Gamma \left( \frac{2}{\beta }+1 \right)-{{\left( \Gamma \left( \frac{1}{\beta }+1 \right) \right)}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===T-H Weibull Reliability Function===&lt;br /&gt;
The T-H Weibull reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V,U)={{e}^{-{{\left( \tfrac{T}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conditional Reliability Function===&lt;br /&gt;
The T-H Weibull conditional reliability function at a specified stress level is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,t,V,U)=\frac{R(T+t,V,U)}{R(T,V,U)}=\frac{{{e}^{-{{\left( \tfrac{T+t}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta }}}}}{{{e}^{-{{\left( \tfrac{T}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,t,V,U)={{e}^{-\left[ {{\left( \tfrac{T+t}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta }}-{{\left( \tfrac{T}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta }} \right]}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
For the T-H Weibull model, the reliable life, &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt;, of a unit for a specified reliability and starting the mission at age zero is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}{{\left\{ -\ln \left[ R\left( {{t}_{R}},V,U \right) \right] \right\}}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
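As with the exponential case, the Weibull reliable life is an exact inversion of the reliability function, so a round trip recovers the reliability goal. A minimal sketch with hypothetical parameter values:

```python
import math

def th_weibull_reliability(t, A, phi, b, beta, V, U):
    # R(t, V, U) = exp(-((t/A) * exp(-(phi/V + b/U)))**beta)
    return math.exp(-((t / A) * math.exp(-(phi / V + b / U))) ** beta)

def th_weibull_reliable_life(R_goal, A, phi, b, beta, V, U):
    # t_R = A * exp(phi/V + b/U) * (-ln R_goal)**(1/beta)
    return A * math.exp(phi / V + b / U) * (-math.log(R_goal)) ** (1.0 / beta)
```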
===T-H Weibull Failure Rate Function===&lt;br /&gt;
The T-H Weibull failure rate function, &amp;lt;math&amp;gt;\lambda (T,V,U)\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda \left( T,V,U \right)=\frac{f\left( T,V,U \right)}{R\left( T,V,U \right)}=\frac{\beta }{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}}{{\left( \frac{T}{A}{{e}^{-\left( \tfrac{\phi }{V}+\tfrac{b}{U} \right)}} \right)}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
Substituting the T-H model into the Weibull log-likelihood function yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{\beta }{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{{{U}_{i}}} \right)}}{{\left( \frac{{{T}_{i}}}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{{{U}_{i}}} \right)}} \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{{{T}_{i}}}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{{{U}_{i}}} \right)}} \right)}^{\beta }}}} \right] -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( \frac{T_{i}^{\prime }}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{{{U}_{i}}} \right)}} \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-{{\left( \tfrac{T_{Li}^{\prime \prime }}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{U_{i}^{\prime \prime }} \right)}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-{{\left( \tfrac{T_{Ri}^{\prime \prime }}{A}{{e}^{-\left( \tfrac{\phi }{{{V}_{i}}}+\tfrac{b}{U_{i}^{\prime \prime }} \right)}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter (unknown, the first of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is the T-H parameter (unknown, the second of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; is the second T-H parameter (unknown, the third of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; is the third T-H parameter (unknown, the fourth of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the temperature level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{i}}\,\!&amp;lt;/math&amp;gt; is the relative humidity level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;A,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\phi ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \beta }=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial A}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \phi }=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial b}=0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===T-H Weibull Example===&lt;br /&gt;
{{:T-H_Example}}&lt;br /&gt;
&lt;br /&gt;
=T-H Lognormal=&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the lognormal distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\overline{{{T}&#039;}}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{T}&#039;=\ln (T)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
T=\text{times-to-failure}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\overline{{{T}&#039;}}=\,\!&amp;lt;/math&amp;gt; mean of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\,\!&amp;lt;/math&amp;gt; standard deviation of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
The median of the lognormal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}={{e}^{{{\overline{T}}^{\prime }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The T-H lognormal model &#039;&#039;pdf&#039;&#039; can be obtained first by setting &amp;lt;math&amp;gt;\breve{T} =L(V,U)\,\!&amp;lt;/math&amp;gt;. &amp;lt;br&amp;gt;&lt;br /&gt;
Therefore:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=L(V,U)=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{e}^{{{\overline{T}}^{\prime }}}}=A{{e}^{\tfrac{\phi }{V}+\tfrac{b}{U}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\overline{T}}^{\prime }}=\ln (A)+\frac{\phi }{V}+\frac{b}{U}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting the above equation into the lognormal &#039;&#039;pdf&#039;&#039; yields the T-H lognormal model &#039;&#039;pdf&#039;&#039; or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T,V,U)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (A)-\tfrac{\phi }{V}-\tfrac{b}{U}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
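The substitution above reduces the T-H lognormal model to an ordinary lognormal pdf whose log-mean depends on the two stresses. A minimal numeric sketch of that pdf, assuming hypothetical parameter values and using only the Python standard library:

```python
from math import exp, log, pi, sqrt

def th_lognormal_pdf(T, V, U, A, phi, b, sigma_Tp):
    """pdf of the T-H lognormal model at time T and stress levels V, U.

    mean_log is T-bar' = ln(A) + phi/V + b/U from the substitution above;
    sigma_Tp is the standard deviation of the log times-to-failure.
    """
    mean_log = log(A) + phi / V + b / U
    z = (log(T) - mean_log) / sigma_Tp
    return exp(-0.5 * z * z) / (T * sigma_Tp * sqrt(2.0 * pi))
```

With `A = 1`, `phi = b = 0` the log-mean is zero, so the density at `T = 1` equals the standard normal density at zero, `1/sqrt(2*pi)`.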
==T-H Lognormal Statistical Properties Summary==&lt;br /&gt;
===The Mean===&lt;br /&gt;
&lt;br /&gt;
*The mean life of the T-H lognormal model (mean of the times-to-failure), &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \bar{T}= &amp;amp; {{e}^{\bar{{T}&#039;}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}} =\  {{e}^{\ln (A)+\tfrac{\phi }{V}+\tfrac{b}{U}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The mean of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\bar{T}}^{^{\prime }}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{T}}^{\prime }}=\ln \left( {\bar{T}} \right)-\frac{1}{2}\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===The Standard Deviation===&lt;br /&gt;
*The standard deviation of the T-H lognormal model (standard deviation of the times-to-failure), &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\sigma }_{T}}= &amp;amp; \sqrt{\left( {{e}^{2\bar{{T}&#039;}+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)} =\ &amp;amp; \sqrt{\left( {{e}^{2\left( \ln (A)+\tfrac{\phi }{V}+\tfrac{b}{U} \right)+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The standard deviation of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\sqrt{\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
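The mean and standard-deviation relations above form an exact round trip between the log-domain parameters (T-bar', sigma-T') and the moments of the times-to-failure. A short sketch of both directions (illustrative values only):

```python
from math import exp, log, sqrt

def lognormal_moments(mean_log, sd_log):
    """Mean and standard deviation of T from the log-domain parameters."""
    mean_T = exp(mean_log + 0.5 * sd_log ** 2)
    sd_T = sqrt(exp(2.0 * mean_log + sd_log ** 2) * (exp(sd_log ** 2) - 1.0))
    return mean_T, sd_T

def log_domain_params(mean_T, sd_T):
    """Invert the relations above to recover T-bar' and sigma-T'."""
    ratio = sd_T ** 2 / mean_T ** 2 + 1.0
    sd_log = sqrt(log(ratio))
    mean_log = log(mean_T) - 0.5 * log(ratio)
    return mean_log, sd_log
```

Since sigma-T squared over T-bar squared equals exp(sigma-T'^2) - 1, the inversion recovers the original parameters exactly (up to floating-point error).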
===The Mode===&lt;br /&gt;
*The mode of the T-H lognormal model is given by: &lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
&amp;amp; \tilde{T}= &amp;amp; {{e}^{{{\overline{T}}^{\prime }}-\sigma _{{{T}&#039;}}^{2}}}=\ &amp;amp; {{e}^{\ln (A)+\tfrac{\phi }{V}+\tfrac{b}{U}-\sigma _{{{T}&#039;}}^{2}}}  &lt;br /&gt;
	\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===T-H Lognormal Reliability===&lt;br /&gt;
The reliability for a mission of time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, starting at age 0, for the T-H lognormal model is determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V,U)=\int_{T}^{\infty }f(t,V,U)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V,U)=\int_{{{T}^{^{\prime }}}}^{\infty }\frac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (A)-\tfrac{\phi }{V}-\tfrac{b}{U}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no closed form solution for the lognormal reliability function. Solutions can be obtained via the use of standard normal tables. Since the application automatically solves for the reliability, we will not discuss manual solution methods.&lt;br /&gt;
&lt;br /&gt;
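Although no closed form exists, the reliability reduces to one evaluation of the standard normal cdf in the log domain. A sketch using the Python standard library's `statistics.NormalDist` (a stand-in for the standard normal tables mentioned above; parameter values are hypothetical):

```python
from math import log
from statistics import NormalDist

def th_lognormal_reliability(T, V, U, A, phi, b, sigma_Tp):
    """R(T,V,U) = 1 - Phi(z), with z standardized in the log domain."""
    z = (log(T) - log(A) - phi / V - b / U) / sigma_Tp
    return 1.0 - NormalDist().cdf(z)
```

At the median (where ln T equals the log-mean) the reliability is exactly 0.5, which gives a quick sanity check.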
===Reliable Life===&lt;br /&gt;
For the T-H lognormal model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is estimated by first solving the reliability equation with respect to time, as follows: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;T_{R}^{\prime }=\ln (A)+\frac{\phi }{V}+\frac{b}{U}+z\cdot {{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z={{\Phi }^{-1}}\left[ F\left( T_{R}^{\prime },V,U \right) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;,V,U)}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;{T}&#039;=\ln (T),\,\!&amp;lt;/math&amp;gt; the reliable life, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}={{e}^{T_{R}^{\prime }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
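The two steps above (solve for T-R' with the inverse standard normal, then exponentiate) can be sketched directly; `NormalDist().inv_cdf` plays the role of the inverse Phi, and the parameter values in the check are hypothetical:

```python
from math import exp, log
from statistics import NormalDist

def reliable_life(R_goal, V, U, A, phi, b, sigma_Tp):
    """Mission duration t_R for reliability goal R_goal: t_R = exp(T_R')."""
    z = NormalDist().inv_cdf(1.0 - R_goal)   # z = Phi^{-1}(F), with F = 1 - R
    TR_log = log(A) + phi / V + b / U + z * sigma_Tp
    return exp(TR_log)
```

For `R_goal = 0.5`, z is zero and the reliable life collapses to the median of the distribution, as expected.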
===T-H Lognormal Failure Rate===&lt;br /&gt;
The lognormal failure rate is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (T,V,U)=\frac{f(T,V,U)}{R(T,V,U)}=\frac{\tfrac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (A)-\tfrac{\phi }{V}-\tfrac{b}{U}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}}{\int_{{{T}&#039;}}^{\infty }\tfrac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (A)-\tfrac{\phi }{V}-\tfrac{b}{U}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
The complete T-H lognormal log-likelihood function is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{1}{{{\sigma }_{{{T}&#039;}}}{{T}_{i}}}{{\phi }_{pdf}}\left( \frac{\ln \left( {{T}_{i}} \right)-\ln (A)-\tfrac{\phi }{{{V}_{i}}}-\tfrac{b}{{{U}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right] \text{ }+\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\ln \left[ 1-\Phi \left( \frac{\ln \left( T_{i}^{\prime } \right)-\ln (A)-\tfrac{\phi }{{{V}_{i}}}-\tfrac{b}{{{U}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right] +\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Li}^{\prime \prime }=\frac{\ln T_{Li}^{\prime \prime }-\ln A-\tfrac{\phi }{{{V}_{i}}}-\tfrac{b}{{{U}_{i}}}}{{{\sigma }_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Ri}^{\prime \prime }=\frac{\ln T_{Ri}^{\prime \prime }-\ln A-\tfrac{\phi }{{{V}_{i}}}-\tfrac{b}{{{U}_{i}}}}{{{\sigma }_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\phi }_{pdf}}\left( x \right)=\frac{1}{\sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( x \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; is the standard deviation of the natural logarithm of the times-to-failure (unknown, the first of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is the first T-H parameter (unknown, the second of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; is the second T-H parameter (unknown, the third of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; is the third T-H parameter (unknown, the fourth of four parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level for the first stress type (i.e., temperature) of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{U}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level for the second stress type (i.e., relative humidity) of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.	&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{\phi },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial A}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \phi }=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial b}=0\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= T-H Confidence Bounds =&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==Approximate Confidence Bounds for the T-H Exponential==&lt;br /&gt;
===Confidence Bounds on the Mean Life===&lt;br /&gt;
The mean life for the T-H exponential distribution is obtained by setting &amp;lt;math&amp;gt;m=L(V,U)\,\!&amp;lt;/math&amp;gt; in the T-H relationship. The upper &amp;lt;math&amp;gt;({{m}_{U}})\,\!&amp;lt;/math&amp;gt; and lower &amp;lt;math&amp;gt;({{m}_{L}})\,\!&amp;lt;/math&amp;gt; bounds on the mean life (ML estimate of the mean life) are estimated by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{U}}=\widehat{m}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{L}}=\widehat{m}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is defined by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt; is the confidence level, then &amp;lt;math&amp;gt;\alpha =\tfrac{1-\delta }{2}\,\!&amp;lt;/math&amp;gt; for the two-sided bounds, and &amp;lt;math&amp;gt;\alpha =1-\delta \,\!&amp;lt;/math&amp;gt; for the one-sided bounds. The variance of &amp;lt;math&amp;gt;\widehat{m}\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
&lt;br /&gt;
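The two formulas above can be sketched directly: obtain K-alpha from the confidence level via the inverse standard normal, then apply the log-transformed bounds to the mean life. In this sketch `var_m` (the variance of the ML estimate, which in the text comes from the Fisher matrix below) is treated as a given input, and the numbers are hypothetical:

```python
from math import exp, sqrt
from statistics import NormalDist

def k_alpha(delta, two_sided=True):
    """K_alpha such that alpha = 1 - Phi(K_alpha).

    alpha = (1 - delta)/2 for two-sided bounds, 1 - delta for one-sided.
    """
    alpha = (1.0 - delta) / 2.0 if two_sided else 1.0 - delta
    return NormalDist().inv_cdf(1.0 - alpha)

def mean_life_bounds(m_hat, var_m, delta=0.90):
    """Log-transformed bounds on the T-H exponential mean life."""
    w = exp(k_alpha(delta) * sqrt(var_m) / m_hat)
    return m_hat / w, m_hat * w   # (lower, upper)
```

Because the bounds are multiplicative, their product always equals the square of the point estimate, which makes a convenient check.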
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{m})=\ &amp;amp; {{\left( \frac{\partial m}{\partial A} \right)}^{2}}Var(\widehat{A})+{{\left( \frac{\partial m}{\partial \phi } \right)}^{2}}Var(\widehat{\phi }) +{{\left( \frac{\partial m}{\partial b} \right)}^{2}}Var(\widehat{b}) +2\left( \frac{\partial m}{\partial A} \right)\left( \frac{\partial m}{\partial \phi } \right)Cov(\widehat{A},\widehat{\phi }) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial m}{\partial A} \right)\left( \frac{\partial m}{\partial b} \right)Cov(\widehat{A},\widehat{b}) +2\left( \frac{\partial m}{\partial \phi } \right)\left( \frac{\partial m}{\partial b} \right)Cov(\widehat{\phi },\widehat{b})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{m})=\ &amp;amp; {{e}^{2\left( \tfrac{\widehat{\phi }}{V}+\tfrac{\widehat{b}}{U} \right)}}[Var(\widehat{A})+\frac{{{\widehat{A}}^{2}}}{{{V}^{2}}}Var(\widehat{\phi }) +\frac{{{\widehat{A}}^{2}}}{{{U}^{2}}}Var(\widehat{b}) \\ &lt;br /&gt;
 &amp;amp; +\frac{2\widehat{A}}{V}Cov(\widehat{A},\widehat{\phi })+\frac{2\widehat{A}}{U}Cov(\widehat{A},\widehat{b}) +\frac{2{{\widehat{A}}^{2}}}{V\cdot U}Cov(\widehat{\phi },\widehat{b})]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{b},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{\phi })\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{A}) &amp;amp; Cov(\widehat{A},\widehat{\phi }) &amp;amp; Cov(\widehat{A},\widehat{b})  \\&lt;br /&gt;
   Cov(\widehat{\phi },\widehat{A}) &amp;amp; Var(\widehat{\phi }) &amp;amp; Cov(\widehat{\phi },\widehat{b})  \\&lt;br /&gt;
   Cov(\widehat{b},\widehat{A}) &amp;amp; Cov(\widehat{b},\widehat{\phi }) &amp;amp; Var(\widehat{b})  \\&lt;br /&gt;
\end{matrix} \right]=\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{A}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial \phi } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial b}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\phi }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial b}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial \phi } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{b}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]_{}^{-1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Reliability===&lt;br /&gt;
The bounds on reliability at a given time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, are estimated by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{U}}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{L}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time (ML estimate of time) for a given reliability are estimated by first solving the reliability function with respect to time or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{T}=-\widehat{m}\cdot \ln (R)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding confidence bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; -{{m}_{U}}\cdot \ln (R) \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; -{{m}_{L}}\cdot \ln (R)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Approximate Confidence Bounds for the T-H Weibull==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
&lt;br /&gt;
Using the same approach as previously discussed (&amp;lt;math&amp;gt;\widehat{\beta }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{A}\,\!&amp;lt;/math&amp;gt; are positive parameters):&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\beta }_{U}}= &amp;amp; \widehat{\beta }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{\beta }_{L}}= &amp;amp; \widehat{\beta }\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{A}_{U}}= &amp;amp; \widehat{A}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{A})}}{\widehat{A}}}} \\ &lt;br /&gt;
 &amp;amp; {{A}_{L}}= &amp;amp; \widehat{A}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{A})}}{\widehat{A}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{b}_{U}}= &amp;amp; \widehat{b}+{{K}_{\alpha }}\sqrt{Var(\widehat{b})} \\ &lt;br /&gt;
 &amp;amp; {{b}_{L}}= &amp;amp; \widehat{b}-{{K}_{\alpha }}\sqrt{Var(\widehat{b})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\phi }_{U}}= &amp;amp; \widehat{\phi }+{{K}_{\alpha }}\sqrt{Var(\widehat{\phi })} \\ &lt;br /&gt;
 &amp;amp; {{\phi }_{L}}= &amp;amp; \widehat{\phi }-{{K}_{\alpha }}\sqrt{Var(\widehat{\phi })}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;b,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{\beta },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{b},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{\phi })\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{\beta }) &amp;amp; Cov(\widehat{\beta },\widehat{A}) &amp;amp; Cov(\widehat{\beta },\widehat{b}) &amp;amp; Cov(\widehat{\beta },\widehat{\phi })  \\&lt;br /&gt;
   Cov(\widehat{A},\widehat{\beta }) &amp;amp; Var(\widehat{A}) &amp;amp; Cov(\widehat{A},\widehat{b}) &amp;amp; Cov(\widehat{A},\widehat{\phi })  \\&lt;br /&gt;
   Cov(\widehat{b},\widehat{\beta }) &amp;amp; Cov(\widehat{b},\widehat{A}) &amp;amp; Var(\widehat{b}) &amp;amp; Cov(\widehat{b},\widehat{\phi })  \\&lt;br /&gt;
   Cov(\widehat{\phi },\widehat{\beta }) &amp;amp; Cov(\widehat{\phi },\widehat{A}) &amp;amp; Cov(\widehat{\phi },\widehat{b}) &amp;amp; Var(\widehat{\phi })  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ F \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F=\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\beta }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial b} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial \phi }  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{A}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial b} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial \phi }  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{b}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial \phi }  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial b} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\phi }^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Reliability===&lt;br /&gt;
The reliability function (ML estimate) for the T-H Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V,U)={{e}^{-{{\left( \tfrac{T}{\widehat{A}}{{e}^{-\left( \tfrac{\widehat{\phi }}{V}+\tfrac{\widehat{b}}{U} \right)}} \right)}^{\widehat{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V,U)={{e}^{-{{e}^{\ln \left[ {{\left( \tfrac{T}{\widehat{A}}{{e}^{-\left( \tfrac{\widehat{\phi }}{V}+\tfrac{\widehat{b}}{U} \right)}} \right)}^{\widehat{\beta }}} \right]}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Setting: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\ln \left[ {{\left( \frac{T}{\widehat{A}}{{e}^{-\left( \tfrac{\widehat{\phi }}{V}+\tfrac{\widehat{b}}{U} \right)}} \right)}^{\widehat{\beta }}} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\widehat{\beta }\left[ \ln (T)-\ln (\widehat{A})-\frac{\widehat{\phi }}{V}-\frac{\widehat{b}}{U} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V,U)={{e}^{-{{e}^{\widehat{u}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to find the upper and lower bounds on &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{u}}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{u}}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial A} \right)}^{2}}Var(\widehat{A}) +{{\left( \frac{\partial \widehat{u}}{\partial b} \right)}^{2}}Var(\widehat{b})+{{\left( \frac{\partial \widehat{u}}{\partial \phi } \right)}^{2}}Var(\widehat{\phi }) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial A} \right)Cov(\widehat{\beta },\widehat{A}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial b} \right)Cov(\widehat{\beta },\widehat{b}) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial \phi } \right)Cov(\widehat{\beta },\widehat{\phi }) +2\left( \frac{\partial \widehat{u}}{\partial A} \right)\left( \frac{\partial \widehat{u}}{\partial b} \right)Cov(\widehat{A},\widehat{b}) +2\left( \frac{\partial \widehat{u}}{\partial A} \right)\left( \frac{\partial \widehat{u}}{\partial \phi } \right)Cov(\widehat{A},\widehat{\phi }) +2\left( \frac{\partial \widehat{u}}{\partial b} \right)\left( \frac{\partial \widehat{u}}{\partial \phi } \right)Cov(\widehat{b},\widehat{\phi })  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; {{\left( \frac{\widehat{u}}{\widehat{\beta }} \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\widehat{\beta }}{\widehat{A}} \right)}^{2}}Var(\widehat{A}) +{{\left( \frac{\widehat{\beta }}{U} \right)}^{2}}Var(\widehat{b})+{{\left( \frac{\widehat{\beta }}{V} \right)}^{2}}Var(\widehat{\phi }) -\frac{2\widehat{u}}{\widehat{A}}Cov(\widehat{\beta },\widehat{A})-\frac{2\widehat{u}}{U}Cov(\widehat{\beta },\widehat{b})-\frac{2\widehat{u}}{V}Cov(\widehat{\beta },\widehat{\phi }) \\ &lt;br /&gt;
 &amp;amp;  +\frac{2{{\widehat{\beta }}^{2}}}{\widehat{A}U}Cov(\widehat{A},\widehat{b})+\frac{2{{\widehat{\beta }}^{2}}}{\widehat{A}V}Cov(\widehat{A},\widehat{\phi }) +\frac{2{{\widehat{\beta }}^{2}}}{UV}Cov(\widehat{\phi },\widehat{b})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{L}} \right)}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{U}} \right)}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
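Note the reversal above: because R = exp(-exp(u)) is decreasing in u, the lower bound on u gives the upper bound on reliability and vice versa. A minimal sketch, assuming u-hat, Var(u-hat) and K-alpha are supplied (in the text they come from the delta-method expansion and the Fisher matrix):

```python
from math import exp, sqrt

def weibull_reliability_bounds(u_hat, var_u, k_alpha):
    """Map bounds on u = ln[(T/A * exp(-(phi/V + b/U)))^beta] to bounds on R."""
    du = k_alpha * sqrt(var_u)
    R_up = exp(-exp(u_hat - du))   # lower u  -> higher reliability
    R_low = exp(-exp(u_hat + du))  # higher u -> lower reliability
    return R_low, R_up
```

With zero variance both bounds collapse onto the point estimate exp(-exp(u-hat)).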
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time (ML estimate of time) for a given reliability are estimated by first solving the reliability function with respect to time as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \ln (R)=\ &amp;amp; -{{\left( \frac{\widehat{T}}{\widehat{A}}{{e}^{-\left( \tfrac{\widehat{\phi }}{V}+\tfrac{\widehat{b}}{U} \right)}} \right)}^{\widehat{\beta }}} \\ &lt;br /&gt;
 \ln (-\ln (R))=\ &amp;amp; \widehat{\beta }\left( \ln \widehat{T}-\ln \widehat{A}-\frac{\widehat{\phi }}{V}-\frac{\widehat{b}}{U} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\frac{1}{\widehat{\beta }}\ln (-\ln (R))+\ln \widehat{A}+\frac{\widehat{\phi }}{V}+\frac{\widehat{b}}{U}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\widehat{u}=\ln \widehat{T}.\,\!&amp;lt;/math&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial A} \right)}^{2}}Var(\widehat{A}) +{{\left( \frac{\partial \widehat{u}}{\partial b} \right)}^{2}}Var(\widehat{b})+{{\left( \frac{\partial \widehat{u}}{\partial \phi } \right)}^{2}}Var(\widehat{\phi }) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial A} \right)Cov(\widehat{\beta },\widehat{A}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial b} \right)Cov(\widehat{\beta },\widehat{b}) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial \phi } \right)Cov(\widehat{\beta },\widehat{\phi }) +2\left( \frac{\partial \widehat{u}}{\partial A} \right)\left( \frac{\partial \widehat{u}}{\partial b} \right)Cov(\widehat{A},\widehat{b}) +2\left( \frac{\partial \widehat{u}}{\partial A} \right)\left( \frac{\partial \widehat{u}}{\partial \phi } \right)Cov(\widehat{A},\widehat{\phi }) +2\left( \frac{\partial \widehat{u}}{\partial b} \right)\left( \frac{\partial \widehat{u}}{\partial \phi } \right)Cov(\widehat{b},\widehat{\phi })  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; \frac{1}{{{\widehat{\beta }}^{4}}}{{\left[ \ln (-\ln (R)) \right]}^{2}}Var(\widehat{\beta })+\frac{1}{{{\widehat{A}}^{2}}}Var(\widehat{A}) +\frac{1}{{{U}^{2}}}Var(\widehat{b})+\frac{1}{{{V}^{2}}}Var(\widehat{\phi }) -\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}\widehat{A}}Cov(\widehat{\beta },\widehat{A}) -\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}U}Cov(\widehat{\beta },\widehat{b}) \\ &lt;br /&gt;
 &amp;amp; -\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}V}Cov(\widehat{\beta },\widehat{\phi }) +\frac{2}{\widehat{A}U}Cov(\widehat{A},\widehat{b}) +\frac{2}{\widehat{A}V}Cov(\widehat{A},\widehat{\phi }) +\frac{2}{VU}Cov(\widehat{b},\widehat{\phi })  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on time are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{{{u}_{U}}}} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{{{u}_{L}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Approximate Confidence Bounds for the T-H Lognormal==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
Since the standard deviation, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\widehat{A}\,\!&amp;lt;/math&amp;gt; are positive parameters, &amp;lt;math&amp;gt;\ln ({{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\ln (\widehat{A})\,\!&amp;lt;/math&amp;gt; are treated as normally distributed and the bounds are estimated from: &lt;br /&gt;
&lt;br /&gt;

::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{\sigma }_{U}}=\ &amp;amp; {{\widehat{\sigma }}_{{{T}&#039;}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}&amp;amp;\text{ (Upper bound)} \\ &lt;br /&gt;
 {{\sigma }_{L}}=\ &amp;amp; \frac{{{\widehat{\sigma }}_{{{T}&#039;}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}}&amp;amp;\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{A}_{U}}=\ &amp;amp; \widehat{A}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{A})}}{\widehat{A}}}}&amp;amp;\text{ (Upper bound)} \\ &lt;br /&gt;
 {{A}_{L}}=\ &amp;amp; \frac{\widehat{A}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{A})}}{\widehat{A}}}}}&amp;amp;\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
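The logarithmic transformation above yields multiplicative rather than additive bounds for a positive parameter. A minimal sketch, with hypothetical values for the estimate and its variance:

```python
import math

# Hypothetical estimates; in practice these come from the MLE fit and
# the local Fisher matrix.
A_hat = 500.0
var_A = 900.0          # Var(A_hat)
K_alpha = 1.645        # standard normal quantile

# Because A must stay positive, ln(A_hat) is treated as normally
# distributed, giving multiplicative bounds:
factor = math.exp(K_alpha * math.sqrt(var_A) / A_hat)
A_U = A_hat * factor
A_L = A_hat / factor
```

Note the geometric symmetry of the result: the product of the two bounds equals the square of the point estimate, and the lower bound can never go negative.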
&lt;br /&gt;
The lower and upper bounds on &amp;lt;math&amp;gt;\phi \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\phi }_{U}}= &amp;amp; \widehat{\phi }+{{K}_{\alpha }}\sqrt{Var(\widehat{\phi })}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{\phi }_{L}}= &amp;amp; \widehat{\phi }-{{K}_{\alpha }}\sqrt{Var(\widehat{\phi })}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{b}_{U}}= &amp;amp; \widehat{b}+{{K}_{\alpha }}\sqrt{Var(\widehat{b})}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{b}_{L}}= &amp;amp; \widehat{b}-{{K}_{\alpha }}\sqrt{Var(\widehat{b})}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\phi ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;b,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{A}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{\phi },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{b}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}),\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left( \begin{matrix}&lt;br /&gt;
   Var\left( {{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{\phi },{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{A} \right) &amp;amp; Var\left( \widehat{A} \right) &amp;amp; Cov\left( \widehat{A},\widehat{\phi } \right) &amp;amp; Cov\left( \widehat{A},\widehat{b} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{\phi } \right) &amp;amp; Cov\left( \widehat{\phi },\widehat{A} \right) &amp;amp; Var\left( \widehat{\phi } \right) &amp;amp; Cov\left( \widehat{\phi },\widehat{b} \right)  \\&lt;br /&gt;
   Cov\left( \widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{b},\widehat{A} \right) &amp;amp; Cov\left( \widehat{b},\widehat{\phi } \right) &amp;amp; Var\left( \widehat{b} \right)  \\&lt;br /&gt;
\end{matrix} \right)={{F}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}^{-1}}={{\left( \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma _{{{T}&#039;}}^{2}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial \phi } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial b}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{A}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial \phi } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial b}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\phi }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \phi \partial b}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial b\partial \phi } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{b}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right)}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
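The matrix inversion step can be sketched numerically. The 4x4 Fisher matrix below is hypothetical (the real one holds the negative second partials of the log-likelihood evaluated at the MLE); inverting it yields the variance-covariance matrix used in the bounds above:

```python
import numpy as np

# Hypothetical local Fisher matrix F for (sigma_T', A, phi, b);
# symmetric and positive definite by construction here.
F = np.array([
    [120.0,  -2.0,   1.5,  -0.5],
    [ -2.0,  80.0,  -3.0,   2.0],
    [  1.5,  -3.0,  60.0,  -1.0],
    [ -0.5,   2.0,  -1.0,  40.0],
])

# The inverse is the estimated variance-covariance matrix.
cov = np.linalg.inv(F)

var_sigma = cov[0, 0]   # Var(sigma_T' hat)
cov_A_phi = cov[1, 2]   # Cov(A hat, phi hat)
```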
&lt;br /&gt;
===Bounds on Reliability===&lt;br /&gt;
The reliability of the lognormal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({T}&#039;,V,U;A,\phi ,b,{{\sigma }_{{{T}&#039;}}})=\int_{{{T}&#039;}}^{\infty }\frac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (\widehat{A})-\tfrac{\widehat{\phi }}{V}-\tfrac{\widehat{b}}{U}}{{{\widehat{\sigma }}_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\widehat{z}(t,V,U;A,\phi ,b,{{\sigma }_{T}})=\tfrac{t-\ln (\widehat{A})-\tfrac{\widehat{\phi }}{V}-\tfrac{\widehat{b}}{U}}{{{\widehat{\sigma }}_{{{T}&#039;}}}},\,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;\tfrac{d\widehat{z}}{dt}=\tfrac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}}.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
For &amp;lt;math&amp;gt;t={T}&#039;\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{z}=\tfrac{{T}&#039;-\ln (\widehat{A})-\tfrac{\widehat{\phi }}{V}-\tfrac{\widehat{b}}{U}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;, and for &amp;lt;math&amp;gt;t=\infty ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{z}=\infty .\,\!&amp;lt;/math&amp;gt; The above equation then becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(\widehat{z})=\int_{\widehat{z}({T}&#039;,V,U)}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The bounds on &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{z}_{U}}= &amp;amp; \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ &lt;br /&gt;
 &amp;amp; {{z}_{L}}= &amp;amp; \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{z})=\ &amp;amp; \left( \frac{\partial \widehat{z}}{\partial A} \right)_{\widehat{A}}^{2}Var(\widehat{A})+\left( \frac{\partial \widehat{z}}{\partial \phi } \right)_{\widehat{\phi }}^{2}Var(\widehat{\phi }) +\left( \frac{\partial \widehat{z}}{\partial b} \right)_{\widehat{b}}^{2}Var(\widehat{b})+\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)_{{{\widehat{\sigma }}_{{{T}&#039;}}}}^{2}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2{{\left( \frac{\partial \widehat{z}}{\partial A} \right)}_{\widehat{A}}}{{\left( \frac{\partial \widehat{z}}{\partial \phi } \right)}_{\widehat{\phi }}}Cov\left( \widehat{A},\widehat{\phi } \right) +2{{\left( \frac{\partial \widehat{z}}{\partial A} \right)}_{\widehat{A}}}{{\left( \frac{\partial \widehat{z}}{\partial b} \right)}_{\widehat{b}}}Cov\left( \widehat{A},\widehat{b} \right) \\ &lt;br /&gt;
 &amp;amp;  +2{{\left( \frac{\partial \widehat{z}}{\partial \phi } \right)}_{\widehat{\phi }}}{{\left( \frac{\partial \widehat{z}}{\partial b} \right)}_{\widehat{b}}}Cov\left( \widehat{\phi },\widehat{b} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial A} \right)}_{\widehat{A}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial \phi } \right)}_{\widehat{\phi }}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{\phi },{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial b} \right)}_{\widehat{b}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{z})=\ &amp;amp; \frac{1}{\widehat{\sigma }_{{{T}&#039;}}^{2}}[\frac{1}{{{A}^{2}}}Var(\widehat{A})+\frac{1}{{{V}^{2}}}Var(\widehat{\phi })+\frac{1}{{{U}^{2}}}Var(\widehat{b})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2}{A\cdot V}Cov\left( \widehat{A},\widehat{\phi } \right)+\frac{2}{A\cdot U}Cov\left( \widehat{A},\widehat{b} \right) \\ &lt;br /&gt;
 &amp;amp; +\frac{2}{V\cdot U}Cov\left( \widehat{\phi },\widehat{b} \right)+\frac{2\widehat{z}}{A}Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +\frac{2\widehat{z}}{V}Cov\left( \widehat{\phi },{{\widehat{\sigma }}_{{{T}&#039;}}} \right)+\frac{2\widehat{z}}{U}Cov\left( \widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
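A small sketch of this step, with hypothetical values for the z estimate and its variance. The standard normal survival integral is evaluated here with the complementary error function; note that the lower bound on z gives the upper bound on reliability, since reliability decreases in z:

```python
import math

def std_normal_sf(z):
    # Standard normal survival function, P(Z > z), via erfc.
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical inputs; z_hat and Var(z_hat) come from the fitted model
# and the variance expansion above.
z_hat = 1.0
var_z = 0.09
K_alpha = 1.645

z_U = z_hat + K_alpha * math.sqrt(var_z)
z_L = z_hat - K_alpha * math.sqrt(var_z)

# Lower z bound -> upper reliability bound, and vice versa:
R_U = std_normal_sf(z_L)
R_L = std_normal_sf(z_U)
```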
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds around time, for a given lognormal percentile (unreliability), are estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{T}&#039;(V,U;\widehat{A},\widehat{\phi },\widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}})=\ln (\widehat{A})+\frac{\widehat{\phi }}{V}+\frac{\widehat{b}}{U}+z\cdot {{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {T}&#039;(V,U;\widehat{A},\widehat{\phi },\widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}})=\ &amp;amp; \ln (T) \\ &lt;br /&gt;
 z=\ &amp;amp; {{\Phi }^{-1}}\left[ F({T}&#039;) \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;)}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to calculate the variance of &amp;lt;math&amp;gt;{T}&#039;(V,U;\widehat{A},\widehat{\phi },\widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var({T}&#039;)=\ &amp;amp; {{\left( \frac{\partial {T}&#039;}{\partial A} \right)}^{2}}Var(\widehat{A})+{{\left( \frac{\partial {T}&#039;}{\partial \phi } \right)}^{2}}Var(\widehat{\phi }) +{{\left( \frac{\partial {T}&#039;}{\partial b} \right)}^{2}}Var(\widehat{b})+{{\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2\left( \frac{\partial {T}&#039;}{\partial A} \right)\left( \frac{\partial {T}&#039;}{\partial \phi } \right)Cov\left( \widehat{A},\widehat{\phi } \right) \\ &lt;br /&gt;
 &amp;amp;  +2\left( \frac{\partial {T}&#039;}{\partial A} \right)\left( \frac{\partial {T}&#039;}{\partial b} \right)Cov\left( \widehat{A},\widehat{b} \right) +2\left( \frac{\partial {T}&#039;}{\partial \phi } \right)\left( \frac{\partial {T}&#039;}{\partial b} \right)Cov\left( \widehat{\phi },\widehat{b} \right) +2\left( \frac{\partial {T}&#039;}{\partial A} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) \\ &lt;br /&gt;
 &amp;amp;  +2\left( \frac{\partial {T}&#039;}{\partial \phi } \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{\phi },{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2\left( \frac{\partial {T}&#039;}{\partial b} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var({T}&#039;)=\ &amp;amp; \frac{1}{{{A}^{2}}}Var(\widehat{A})+\frac{1}{{{V}^{2}}}Var(\widehat{\phi }) +\frac{1}{{{U}^{2}}}Var(\widehat{b})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2}{A\cdot V}Cov\left( \widehat{A},\widehat{\phi } \right)+\frac{2}{A\cdot U}Cov\left( \widehat{A},\widehat{b} \right) \\ &lt;br /&gt;
 &amp;amp; +\frac{2}{V\cdot U}Cov\left( \widehat{\phi },\widehat{b} \right)+\frac{2\widehat{z}}{A}Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +\frac{2\widehat{z}}{V}Cov\left( \widehat{\phi },{{\widehat{\sigma }}_{{{T}&#039;}}} \right)+\frac{2\widehat{z}}{U}Cov\left( \widehat{b},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; T_{U}^{\prime }= &amp;amp; \ln {{T}_{U}}={T}&#039;+{{K}_{\alpha }}\sqrt{Var({T}&#039;)} \\ &lt;br /&gt;
 &amp;amp; T_{L}^{\prime }= &amp;amp; \ln {{T}_{L}}={T}&#039;-{{K}_{\alpha }}\sqrt{Var({T}&#039;)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for &amp;lt;math&amp;gt;{{T}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{T}_{L}}\,\!&amp;lt;/math&amp;gt; yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{T_{U}^{\prime }}}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{T_{L}^{\prime }}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Inverse_Power_Law_Relationship&amp;diff=64930</id>
		<title>Inverse Power Law Relationship</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Inverse_Power_Law_Relationship&amp;diff=64930"/>
		<updated>2017-02-08T21:10:25Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Conditional Reliability Function */ changed R(T,t,V) to R((t|T),V)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK|6}}&lt;br /&gt;
&lt;br /&gt;
The inverse power law (IPL) model (or relationship) is commonly used for non-thermal accelerated stresses and is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V)=\frac{1}{K{{V}^{n}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; represents a quantifiable life measure, such as mean life, characteristic life, median life, &amp;lt;math&amp;gt;B(x)\,\!&amp;lt;/math&amp;gt; life, etc.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; represents the stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; is one of the model parameters to be determined, &amp;lt;math&amp;gt;(K&amp;gt;0).\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is another model parameter to be determined.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA8.1.png|center|400px|The inverse power law relationship on linear scales at different life characteristics and with a Weibull life distribution.]]&lt;br /&gt;
&lt;br /&gt;
The inverse power law appears as a straight line when plotted on a log-log paper. The equation of the line is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\ln (L)=-\ln (K)-n\ln (V)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Plotting methods are widely used in estimating the parameters of the inverse power law relationship since obtaining &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is as simple as finding the slope and the intercept in the above equation.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA8.2.png|center|450px|Graphical look at the IPL relationship (log-log scale)]]&lt;br /&gt;
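The slope-and-intercept idea can be sketched with two hypothetical (stress, life) points on the log-log line (the data below are invented for illustration): the slope of ln(L) versus ln(V) gives -n, and the intercept gives -ln(K).

```python
import math

# Two hypothetical (stress, life) observations assumed to lie on the
# IPL line ln(L) = -ln(K) - n*ln(V).
V1, L1 = 10.0, 1000.0
V2, L2 = 20.0, 250.0

# Slope of ln(L) vs. ln(V) is -n:
n = -(math.log(L2) - math.log(L1)) / (math.log(V2) - math.log(V1))

# Intercept recovers K via L = 1/(K V^n):
K = 1.0 / (L1 * V1 ** n)
```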
&lt;br /&gt;
===A Look at the Parameter &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
The parameter &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; in the inverse power relationship is a measure of the effect of the stress on the life. The larger the absolute value of &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, the greater the effect of the stress. Negative values of &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; indicate that life increases with increasing stress. An absolute value of &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; approaching zero indicates a small effect of the stress on the life, with no effect (constant life with stress) when &amp;lt;math&amp;gt;n=0.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA8.3.png|center|400px|Life vs. Stress for different values of n.]]&lt;br /&gt;
&lt;br /&gt;
===Acceleration Factor===&lt;br /&gt;
For the IPL relationship the acceleration factor is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{A}_{F}}=\frac{{{L}_{USE}}}{{{L}_{Accelerated}}}=\frac{\tfrac{1}{KV_{u}^{n}}}{\tfrac{1}{KV_{A}^{n}}}={{\left( \frac{{{V}_{A}}}{{{V}_{u}}} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{L}_{USE}}\,\!&amp;lt;/math&amp;gt; is the life at use stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{L}_{Accelerated}}\,\!&amp;lt;/math&amp;gt; is the life at the accelerated stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{u}}\,\!&amp;lt;/math&amp;gt; is the use stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{A}}\,\!&amp;lt;/math&amp;gt; is the accelerated stress level.&lt;br /&gt;
&lt;br /&gt;
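Because K cancels in the ratio, the acceleration factor depends only on the two stress levels and the exponent n. A minimal sketch with hypothetical values:

```python
# Hypothetical stress levels and IPL exponent; K cancels out of the
# acceleration-factor ratio, so it is not needed here.
V_use = 10.0     # use stress level
V_acc = 20.0     # accelerated stress level
n = 2.0          # IPL life-stress exponent

# A_F = (V_acc / V_use) ** n
A_F = (V_acc / V_use) ** n
```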
=IPL-Exponential=&lt;br /&gt;
The IPL-exponential model can be derived by setting &amp;lt;math&amp;gt;m=L(V)\,\!&amp;lt;/math&amp;gt; in the exponential &#039;&#039;pdf&#039;&#039;, yielding the following IPL-exponential &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V)=K{{V}^{n}}{{e}^{-K{{V}^{n}}t}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that this is a 2-parameter model. The failure rate (the parameter of the exponential distribution) of the model is simply &amp;lt;math&amp;gt;\lambda =K{{V}^{n}},\,\!&amp;lt;/math&amp;gt; and is only a function of stress.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA8.4.png|center|450px|IPL-exponential failure rate function at different stress levels.]]&lt;br /&gt;
&lt;br /&gt;
==IPL-Exponential Statistical Properties Summary==&lt;br /&gt;
===Mean or MTTF===&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T},\,\!&amp;lt;/math&amp;gt; or Mean Time To Failure (MTTF) for the IPL-exponential relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \overline{T}= &amp;amp; \int_{0}^{\infty }t\cdot f(t,V)dt=\int_{0}^{\infty }t\cdot K{{V}^{n}}{{e}^{-K{{V}^{n}}t}}dt =\  \frac{1}{K{{V}^{n}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that, when using the exponential distribution, the MTTF is a function of stress only and is simply equal to the IPL relationship (which is the original assumption).&lt;br /&gt;
&lt;br /&gt;
===Median===&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; for the IPL-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=0.693\frac{1}{K{{V}^{n}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode===&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; for the IPL-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Standard Deviation===&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, for the IPL-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=\frac{1}{K{{V}^{n}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===IPL-Exponential Reliability Function===&lt;br /&gt;
The IPL-exponential reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)={{e}^{-TK{{V}^{n}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function is the complement of the IPL-exponential cumulative distribution function:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=1-Q(T,V)=1-\int_{0}^{T}f(T,V)dT\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=1-\int_{0}^{T}K{{V}^{n}}{{e}^{-K{{V}^{n}}T}}dT={{e}^{-K{{V}^{n}}T}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conditional Reliability===&lt;br /&gt;
The conditional reliability function for the IPL-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V)=\frac{R(T+t,V)}{R(T,V)}=\frac{{{e}^{-\lambda (T+t)}}}{{{e}^{-\lambda T}}}={{e}^{-K{{V}^{n}}t}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
For the IPL-exponential model, the reliable life or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({{t}_{R}},V)={{e}^{-K{{V}^{n}}{{t}_{R}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\ln [R({{t}_{R}},V)]=-K{{V}^{n}}{{t}_{R}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=-\frac{1}{K{{V}^{n}}}\ln [R({{t}_{R}},V)]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
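This inversion can be checked numerically. The parameter values below are hypothetical (in practice K and n come from the fitted IPL model); substituting the computed mission duration back into the reliability function recovers the reliability goal:

```python
import math

# Hypothetical fitted IPL-exponential parameters and target.
K = 1e-5
n = 2.0
V = 15.0          # stress level
R_goal = 0.90     # desired reliability

# t_R = -ln(R) / (K V^n), from inverting R(t, V) = exp(-K V^n t).
t_R = -math.log(R_goal) / (K * V ** n)

# Substitute back to verify:
R_check = math.exp(-K * V ** n * t_R)
```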
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Parameter Estimation===&lt;br /&gt;
Substituting the inverse power law relationship into the exponential log-likelihood equation yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ KV_{i}^{n}{{e}^{-KV_{i}^{n}{{T}_{i}}}} \right] -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }KV_{i}^{n}T_{i}^{\prime }+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-T_{Li}^{\prime \prime }KV_{i}^{n}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-T_{Ri}^{\prime \prime }KV_{i}^{n}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; is the IPL parameter (unknown, the first of two parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the second IPL parameter (unknown, the second of two parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;\widehat{K},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{n}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial K}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt;, where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \frac{\partial \Lambda }{\partial K}=\ &amp;amp; \frac{1}{K}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}-\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}V_{i}^{n}{{T}_{i}}-\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }V_{i}^{n}T_{i}^{\prime } \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\left( T_{Li}^{\prime \prime }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime }R_{Ri}^{\prime \prime } \right)V_{i}^{n}}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \frac{\partial \Lambda }{\partial n}=\ &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln ({{V}_{i}})-K\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}V_{i}^{n}\ln ({{V}_{i}}){{T}_{i}} -K\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }V_{i}^{n}\ln ({{V}_{i}})T_{i}^{\prime } \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{KV_{i}^{n}\ln ({{V}_{i}})\left( T_{Li}^{\prime \prime }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime }R_{Ri}^{\prime \prime } \right)}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=IPL-Weibull=&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Inverse Power Law Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
The IPL-Weibull model can be derived by setting &amp;lt;math&amp;gt;\eta =L(V)\,\!&amp;lt;/math&amp;gt; in the Weibull &#039;&#039;pdf&#039;&#039;, yielding the following IPL-Weibull &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V)=\beta K{{V}^{n}}{{\left( K{{V}^{n}}t \right)}^{\beta -1}}{{e}^{-{{\left( K{{V}^{n}}t \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is a three-parameter model. It is therefore more flexible, but it also requires more laborious techniques for parameter estimation. The IPL-Weibull model yields the IPL-exponential model for &amp;lt;math&amp;gt;\beta =1.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
==IPL-Weibull Statistical Properties Summary==&lt;br /&gt;
===Mean or MTTF===&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T}\,\!&amp;lt;/math&amp;gt; (also called &amp;lt;math&amp;gt;MTTF\,\!&amp;lt;/math&amp;gt; ), of the IPL-Weibull model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=\frac{1}{K{{V}^{n}}}\cdot \Gamma \left( \frac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Gamma \left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt; is the gamma function evaluated at the value of &amp;lt;math&amp;gt;\left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Median===&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; of the IPL-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=\frac{1}{K{{V}^{n}}}{{\left( \ln 2 \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode===&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; of the IPL-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=\frac{1}{K{{V}^{n}}}{{\left( 1-\frac{1}{\beta } \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Standard Deviation===&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}},\,\!&amp;lt;/math&amp;gt; of the IPL-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=\frac{1}{K{{V}^{n}}}\cdot \sqrt{\Gamma \left( \frac{2}{\beta }+1 \right)-{{\left( \Gamma \left( \frac{1}{\beta }+1 \right) \right)}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===IPL-Weibull Reliability Function===&lt;br /&gt;
The IPL-Weibull reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)={{e}^{-{{\left( K{{V}^{n}}T \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conditional Reliability Function===&lt;br /&gt;
&lt;br /&gt;
The IPL-Weibull conditional reliability function at a specified stress level is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V)=\frac{R(T+t,V)}{R(T,V)}=\frac{{{e}^{-{{\left[ K{{V}^{n}}\left( T+t \right) \right]}^{\beta }}}}}{{{e}^{-{{\left( K{{V}^{n}}T \right)}^{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V)={{e}^{-\left[ {{\left( K{{V}^{n}}\left( T+t \right) \right)}^{\beta }}-{{\left( K{{V}^{n}}T \right)}^{\beta }} \right]}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
&lt;br /&gt;
For the IPL-Weibull model, the reliable life, &amp;lt;math&amp;gt;{T}_{R}\,\!&amp;lt;/math&amp;gt;, of a unit for a specified reliability and starting the mission at age zero is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{T}_{R}}=\frac{1}{K{{V}^{n}}}{{\left\{ -\ln \left[ R\left( {{T}_{R}},V \right) \right] \right\}}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===IPL-Weibull Failure Rate Function===&lt;br /&gt;
&lt;br /&gt;
The IPL-Weibull failure rate function, &amp;lt;math&amp;gt;\lambda (T)\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda \left( T,V \right)=\frac{f\left( T,V \right)}{R\left( T,V \right)}=\beta K{{V}^{n}}{{\left( K{{V}^{n}}T \right)}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
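The reliability, conditional reliability, reliable life and failure rate expressions above lend themselves to a direct numerical sketch. The function names below are illustrative, not part of any ReliaSoft API:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def ipl_weibull_reliability(K, n, beta, T, V):
    """R(T, V) = exp(-(K * V**n * T)**beta)."""
    return math.exp(-((K * V**n * T) ** beta))

def ipl_weibull_conditional_reliability(K, n, beta, t, T, V):
    """R(t | T, V) = R(T + t, V) / R(T, V): reliability for a new mission
    of duration t after T hours have already been accumulated at stress V."""
    return (ipl_weibull_reliability(K, n, beta, T + t, V)
            / ipl_weibull_reliability(K, n, beta, T, V))

def ipl_weibull_reliable_life(K, n, beta, R_goal, V):
    """T_R such that R(T_R, V) = R_goal, from the closed-form inversion."""
    return (-math.log(R_goal)) ** (1.0 / beta) / (K * V**n)

def ipl_weibull_failure_rate(K, n, beta, T, V):
    """lambda(T, V) = beta * K * V**n * (K * V**n * T)**(beta - 1)."""
    kv = K * V**n
    return beta * kv * (kv * T) ** (beta - 1.0)
```
&lt;br /&gt;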
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
Substituting the inverse power law relationship into the Weibull log-likelihood function yields: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \Lambda = \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \beta KV_{i}^{n}{{\left( KV_{i}^{n}{{T}_{i}} \right)}^{\beta -1}}{{e}^{-{{\left( KV_{i}^{n}{{T}_{i}} \right)}^{\beta }}}} \right] -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( KV_{i}^{n}T_{i}^{\prime } \right)}^{\beta }} +\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-{{\left( KV_{i}^{n}T_{Li}^{\prime \prime } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-{{\left( KV_{i}^{n}T_{Ri}^{\prime \prime } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{K}\,\!&amp;lt;/math&amp;gt; is the IPL parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{n}\,\!&amp;lt;/math&amp;gt; is the second IPL parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;K,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \beta }=0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial K}=0\,\!&amp;lt;/math&amp;gt; and   &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt;, where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}\frac{\partial \Lambda }{\partial \beta }=\ &amp;amp; \frac{1}{\beta }\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}+\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\ln \left( KV_{i}^{n}{{T}_{i}} \right) -\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}{{\left( KV_{i}^{n}{{T}_{i}} \right)}^{\beta }}\ln \left( KV_{i}^{n}{{T}_{i}} \right) -\underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }{{\left( KV_{i}^{n}T_{i}^{\prime } \right)}^{\beta }}\ln \left( KV_{i}^{n}T_{i}^{\prime } \right) \\   &amp;amp; -\underset{i=1}{\overset{FI}{\mathop{\sum }}}\,N_{i}^{\prime \prime }\frac{{{\left( KV_{i}^{n} \right)}^{\beta }}\left[ R_{Li}^{\prime \prime }T_{Li}^{\prime \prime \beta }\ln (KV_{i}^{n}T_{Li}^{\prime \prime })-R_{Ri}^{\prime \prime }T_{Ri}^{\prime \prime \beta }\ln (KV_{i}^{n}T_{Ri}^{\prime \prime }) \right]}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }} \\ &lt;br /&gt;
\frac{\partial \Lambda }{\partial K}=\ &amp;amp; \frac{\beta }{K}\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}-\frac{\beta }{K}\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}{{\left( KV_{i}^{n}{{T}_{i}} \right)}^{\beta }} -\frac{\beta }{K}\underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }{{\left( KV_{i}^{n}T_{i}^{\prime } \right)}^{\beta }} -\beta \underset{i=1}{\overset{FI}{\mathop{\sum }}}\,N_{i}^{\prime \prime }\frac{{{K}^{\beta -1}}V_{i}^{n\beta }\left[ T_{Li}^{\prime \prime \beta }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime \beta }R_{Ri}^{\prime \prime } \right]}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}  \\&lt;br /&gt;
\frac{\partial \Lambda }{\partial n}=\ &amp;amp; \beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\ln ({{V}_{i}}) -\beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\ln ({{V}_{i}}){{\left( KV_{i}^{n}{{T}_{i}} \right)}^{\beta }} -\beta \underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }\ln ({{V}_{i}}){{\left( KV_{i}^{n}T_{i}^{\prime } \right)}^{\beta }} -\beta \underset{i=1}{\overset{FI}{\mathop{\sum }}}\,N_{i}^{\prime \prime }\frac{\ln ({{V}_{i}}){{K}^{\beta }}V_{i}^{n\beta }\left[ T_{Li}^{\prime \prime \beta }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime \beta }R_{Ri}^{\prime \prime } \right]}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
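In practice these likelihood equations are solved numerically. As an illustration of the objective function only, the following sketch codes the log-likelihood for groups of exact times-to-failure; the suspension and interval terms are omitted, and the grouping format is an assumption made for the example:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def ipl_weibull_loglik(beta, K, n, groups):
    """Log-likelihood for exact times-to-failure only.
    `groups` is a list of (N_i, V_i, T_i) triples, matching the first
    summation in the log-likelihood above."""
    ll = 0.0
    for N, V, T in groups:
        x = K * V**n * T  # the dimensionless quantity K * V_i^n * T_i
        # log of the Weibull pdf term, minus the exponent (K V^n T)^beta
        ll += N * (math.log(beta * K * V**n * x ** (beta - 1.0)) - x ** beta)
    return ll
```
&lt;br /&gt;
A general-purpose optimizer would then be used to drive the three partial derivatives to zero.&lt;br /&gt;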
===IPL-Weibull Example===&lt;br /&gt;
{{:Inverse_Power_Law_Example}}&lt;br /&gt;
&lt;br /&gt;
=IPL-Lognormal=&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; for the Inverse Power Law relationship and the lognormal distribution is given next.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the lognormal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\overline{{{T}&#039;}}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{T}&#039;=\ln (T)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; = times-to-failure.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\overline{T}&#039;\,\!&amp;lt;/math&amp;gt; = mean of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\sigma_{T&#039;}\,\!&amp;lt;/math&amp;gt; = standard deviation of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
The median of the lognormal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=e^{\overline{T}&#039;}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The IPL-lognormal model &#039;&#039;pdf&#039;&#039; can be obtained by first setting &amp;lt;math&amp;gt;\breve{T}=L(V)\,\!&amp;lt;/math&amp;gt; in the lognormal &#039;&#039;pdf&#039;&#039;. Therefore:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; \breve{T}=L(V)=\frac{1}{K \cdot V^n}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;e^{\overline{T&#039;}}=\frac{1}{K \cdot V^n}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}&#039;=-\ln (K)-n\ln (V)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So the IPL-lognormal model &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T,V)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;+ln(K)+n ln(V)}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
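The substitution just performed can be sketched numerically: the mean of the logarithms is computed from the IPL relationship, and the lognormal &#039;&#039;pdf&#039;&#039; is then evaluated with that mean. The function name below is an illustrative choice:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def ipl_lognormal_pdf(K, n, sigma, T, V):
    """f(T, V) for the IPL-lognormal model; sigma is the standard deviation
    of the natural logarithms of the times-to-failure."""
    mu = -math.log(K) - n * math.log(V)  # mean of ln(T) from the IPL relationship
    z = (math.log(T) - mu) / sigma
    return math.exp(-0.5 * z * z) / (T * sigma * math.sqrt(2.0 * math.pi))
```
&lt;br /&gt;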
==IPL-Lognormal Statistical Properties Summary==&lt;br /&gt;
===The Mean===&lt;br /&gt;
The mean life of the IPL-lognormal model (mean of the times-to-failure), &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\bar{T}=\ {{e}^{\bar{{T}&#039;}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}= {{e}^{{-ln(K)-nln(V)}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The mean of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\bar{T}}^{^{\prime }}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{T}}^{\prime }}=\ln \left( {\bar{T}} \right)-\frac{1}{2}\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===The Standard Deviation===&lt;br /&gt;
The standard deviation of the IPL-lognormal model (standard deviation of the times-to-failure), &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{\sigma }_{T}}= &amp;amp; \sqrt{\left( {{e}^{2\bar{{T}&#039;}+\sigma _{{{T}&#039;}}^{2}}} \right)\,\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)} = \sqrt{\left( {{e}^{2\left( -\ln (K)-n\ln (V) \right)+\sigma _{{{T}&#039;}}^{2}}} \right)\,\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The standard deviation of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\sqrt{\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
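The two conversion formulas above, from the mean and standard deviation of the times-to-failure to the mean and standard deviation of their natural logarithms, can be sketched as:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def lognormal_log_moments(mean_T, std_T):
    """Convert the mean and standard deviation of T into the mean and
    standard deviation of ln(T), per the two conversion formulas above."""
    ratio = std_T**2 / mean_T**2 + 1.0
    sigma_log = math.sqrt(math.log(ratio))
    mean_log = math.log(mean_T) - 0.5 * math.log(ratio)
    return mean_log, sigma_log
```
&lt;br /&gt;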
===The Mode===&lt;br /&gt;
The mode of the IPL-lognormal model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}={{e}^{{\bar{T}}&#039;-\sigma _{{{T}&#039;}}^{2}}}={{e}^{-\ln (K)-n\ln (V)-\sigma _{{{T}&#039;}}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===IPL-Lognormal Reliability===&lt;br /&gt;
The reliability for a mission of time T, starting at age 0, for the IPL-lognormal model is determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,\,V)=\int_{T}^{\infty }f(t,\,V)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,\,V)=\int_{{{T}^{^{\prime }}}}^{\infty }\frac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t+\ln (K)+n\ln (V)}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
The reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;T_{R}^{\prime }=-\ln (K)-n\ln (V)+z\cdot {{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z={{\Phi }^{-1}}\left[ F\left( T_{R}^{\prime },\,V \right) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;,\,V)}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;{T}&#039;=\ln (T)\,\!&amp;lt;/math&amp;gt;, the reliable life, &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}={{e}^{T_{R}^{\prime }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
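The reliable-life calculation above can be sketched using the inverse standard normal cdf (here via Python&#039;s standard library; the function name is an illustrative choice):&lt;br /&gt;
&lt;br /&gt;
```python
import math
from statistics import NormalDist

def ipl_lognormal_reliable_life(K, n, sigma, R_goal, V):
    """t_R for a reliability goal at stress V: T'_R = mu + z * sigma with
    z = Phi^{-1}(F) = Phi^{-1}(1 - R_goal), and t_R = exp(T'_R)."""
    mu = -math.log(K) - n * math.log(V)
    z = NormalDist().inv_cdf(1.0 - R_goal)
    return math.exp(mu + z * sigma)
```
&lt;br /&gt;
Note that for a 50% reliability goal, &amp;lt;math&amp;gt;z=0\,\!&amp;lt;/math&amp;gt; and the reliable life reduces to the median, &amp;lt;math&amp;gt;\tfrac{1}{K{{V}^{n}}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;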
===Lognormal Failure Rate===&lt;br /&gt;
The lognormal failure rate is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (T,\,V)=\frac{f(T,\,V)}{R(T,\,V)}=\frac{\tfrac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;+\ln (K)+n\ln (V)}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}}{\int_{{{T}&#039;}}^{\infty }\tfrac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;+\ln (K)+n\ln (V)}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
The complete IPL-lognormal log-likelihood function is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{1}{{{\sigma }_{{{T}&#039;}}}{{T}_{i}}}\varphi \left( \frac{\ln \left( {{T}_{i}} \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right) \right] \text{ }+\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\ln \left[ 1-\Phi \left( \frac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right) \right] +\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Li}^{\prime \prime }=\frac{\ln T_{Li}^{\prime \prime }+\ln K+n\ln {{V}_{i}}}{{{\sigma }_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Ri}^{\prime \prime }=\frac{\ln T_{Ri}^{\prime \prime }+\ln K+n\ln {{V}_{i}}}{{{\sigma }_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
* &amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{i}^{th}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; is the standard deviation of the natural logarithms of the times-to-failure (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; is the IPL parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the second IPL parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{i}^{th}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{i}^{th}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;N&#039;_i\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{i}^{th}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;T^{&#039;}_{i}\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{i}^{th}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for &amp;lt;math&amp;gt; {{\hat {\sigma}}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\hat {K}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat {n}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}=0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial K}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   \frac{\partial \Lambda }{\partial K}= &amp;amp; -\frac{1}{K\cdot \sigma _{{{T}&#039;}}^{2}}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}(\ln ({{T}_{i}})+\ln (K)+n\ln ({{V}_{i}})) \ -\frac{1}{K\cdot {{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{\varphi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right)} +\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\frac{\varphi (z_{Ri}^{\prime \prime })-\varphi (z_{Li}^{\prime \prime })}{K{{\sigma }_{{{T}&#039;}}}(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))} \\ &lt;br /&gt;
  \frac{\partial \Lambda }{\partial n}= &amp;amp; -\frac{1}{\sigma _{{{T}&#039;}}^{2}}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln ({{V}_{i}})\left[ \ln ({{T}_{i}})+\ln (K)+n\ln ({{V}_{i}}) \right] -\frac{1}{{{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\ln ({{V}_{i}})\frac{\varphi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right)} +\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\frac{\ln {{V}_{i}}\left( \varphi (z_{Ri}^{\prime \prime })-\varphi (z_{Li}^{\prime \prime }) \right)}{{{\sigma }_{{{T}&#039;}}}(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))} \\ &lt;br /&gt;
  \frac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}= &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left( \frac{{{\left( \ln ({{T}_{i}})+\ln (K)+n\ln ({{V}_{i}}) \right)}^{2}}}{\sigma _{{{T}&#039;}}^{3}}-\frac{1}{{{\sigma }_{{{T}&#039;}}}} \right) \ +\frac{1}{{{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{\left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right)\,\varphi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right)} -\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\frac{z_{Ri}^{\prime \prime }\varphi (z_{Ri}^{\prime \prime })-z_{Li}^{\prime \prime }\varphi (z_{Li}^{\prime \prime })}{{{\sigma }_{{{T}&#039;}}}(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\varphi \left( x \right)=\frac{1}{\sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( x \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=IPL and the Coffin-Manson Relationship=&lt;br /&gt;
In accelerated life testing analysis, thermal cycling is commonly treated as a low-cycle fatigue problem, using the inverse power law relationship. Coffin and Manson suggested that the number of cycles-to-failure of a metal subjected to thermal cycling is given by Nelson [[Appendix_E:_References|[28]]]:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;N=\frac{C}{{{\left( \Delta T \right)}^{\gamma }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of cycles to failure.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is a constant, characteristic of the metal.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt; is another constant, also characteristic of the metal.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\Delta T\,\!&amp;lt;/math&amp;gt; is the range of the thermal cycle.&lt;br /&gt;
&lt;br /&gt;
This relationship is essentially the inverse power law relationship, with the stress, &amp;lt;math&amp;gt;V,\,\!&amp;lt;/math&amp;gt; replaced by the range &amp;lt;math&amp;gt;\Delta V\,\!&amp;lt;/math&amp;gt;. This is an attempt to simplify the analysis of a time-varying stress test by using a constant stress model, and it is a very commonly used methodology for thermal cycling and mechanical fatigue tests. Such a simplification, however, carries the following assumptions and shortcomings. First, the acceleration effects due to the stress rate of change are ignored; in other words, it is assumed that the failures are accelerated by the stress difference and not by how rapidly this difference occurs. Second, the acceleration effects due to stress relaxation and creep are ignored.&lt;br /&gt;
&lt;br /&gt;
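The Coffin-Manson relationship above, and the acceleration factor that follows from it, can be sketched as (the constants used in the test are arbitrary illustrations):&lt;br /&gt;
&lt;br /&gt;
```python
def coffin_manson_cycles(C, gamma, delta_T):
    """Cycles-to-failure N = C / (delta_T ** gamma)."""
    return C / delta_T ** gamma

def coffin_manson_acceleration_factor(gamma, delta_T_use, delta_T_accel):
    """Ratio N_use / N_accel; the constant C cancels, so the factor
    depends only on gamma and the two thermal ranges."""
    return (delta_T_accel / delta_T_use) ** gamma
```
&lt;br /&gt;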
&#039;&#039;&#039;Example&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This example illustrates the use of the Coffin-Manson relationship. It is a very simple experiment that can be repeated at any time, and the reader is encouraged to perform this test.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Product:&#039;&#039;&#039;	         ACME Paper Clip Model 4456&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Reliability Target:&#039;&#039;&#039;	 99% at a 90% confidence after 30 cycles of 45º&lt;br /&gt;
&lt;br /&gt;
After consulting with our paper-clip engineers, the acceleration stress was determined to be the angle to which the clips are bent. Two bend stresses of 90º and 180º were used, and a sample of six paper clips was tested to failure at each stress. The following data were obtained.&lt;br /&gt;
&lt;br /&gt;
[[Image:chp8degrees2cyclesTbl.png|center|300px|]]&lt;br /&gt;
&lt;br /&gt;
The test was performed as shown in the next figures (a side-view of the paper-clip is shown).&lt;br /&gt;
      &lt;br /&gt;
[[Image:90degrees.png|center|300px|]]&lt;br /&gt;
&lt;br /&gt;
Using the IPL-lognormal model, determine whether the reliability target was met.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
By using the IPL relationship to analyze the data, we are actually using a constant stress model to analyze a cycling process. Caution must be exercised when performing the test: the rate of change in the angle must be constant and equal for both the 90º and 180º bends, and also equal to the rate of change in the angle for the use-life 45º bend. Rate effects do influence the life of the paper clip; by keeping the rate constant and equal at all stress levels, these rate effects are eliminated from the analysis. Otherwise, the analysis is not valid. &lt;br /&gt;
&lt;br /&gt;
The data were entered and analyzed using ReliaSoft&#039;s ALTA.&lt;br /&gt;
&lt;br /&gt;
[[Image:90-180time.png|center|675px|]]&lt;br /&gt;
 &lt;br /&gt;
The parameters of the IPL-lognormal model were estimated to be:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \sigma = &amp;amp; 0.198533 \\ &lt;br /&gt;
 &amp;amp; K= &amp;amp; 0.000012 \\ &lt;br /&gt;
 &amp;amp; n= &amp;amp; 1.856808  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using the QCP, the 90% lower 1-sided confidence bound on reliability after 30 cycles for a 45º bend was estimated to be 99.6%, as shown below.&lt;br /&gt;
&lt;br /&gt;
[[Image:stdprobqcp.png|center|500px|]]&lt;br /&gt;
&lt;br /&gt;
This meets the target reliability of 99%.&lt;br /&gt;
&lt;br /&gt;
=IPL Confidence Bounds=&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==Approximate Confidence Bounds on IPL-Exponential==&lt;br /&gt;
===Confidence Bounds on the Mean Life===&lt;br /&gt;
From the inverse power law relationship, the mean life for the exponential distribution is given by setting &amp;lt;math&amp;gt;m=L(V)\,\!&amp;lt;/math&amp;gt;. The upper &amp;lt;math&amp;gt;({{m}_{U}})\,\!&amp;lt;/math&amp;gt; and lower &amp;lt;math&amp;gt;({{m}_{L}})\,\!&amp;lt;/math&amp;gt; bounds on the mean life (ML estimate of the mean life) are estimated by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{U}}=\widehat{m}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{L}}=\widehat{m}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is defined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt; is the confidence level, then &amp;lt;math&amp;gt;\alpha =\tfrac{1-\delta }{2}\,\!&amp;lt;/math&amp;gt; for the two-sided bounds, and &amp;lt;math&amp;gt;\alpha =1-\delta \,\!&amp;lt;/math&amp;gt; for the one-sided bounds. The variance of &amp;lt;math&amp;gt;\widehat{m}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{m})= &amp;amp; {{\left( \frac{\partial m}{\partial K} \right)}^{2}}Var(\widehat{K})+{{\left( \frac{\partial m}{\partial n} \right)}^{2}}Var(\widehat{n}) +2\left( \frac{\partial m}{\partial K} \right)\left( \frac{\partial m}{\partial n} \right)Cov(\widehat{K},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Var(\widehat{m})=\frac{1}{{{\widehat{K}}^{2}}{{V}^{2\widehat{n}}}}\left[ \frac{1}{{{\widehat{K}}^{2}}}Var(\widehat{K})+{{\left[ \ln (V) \right]}^{2}}Var(\widehat{n})+\frac{2\ln (V)}{\widehat{K}}Cov(\widehat{K},\widehat{n}) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariance of &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are estimated from the Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{K},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{n})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{K}) &amp;amp; Cov(\widehat{K},\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{n},\widehat{K}) &amp;amp; Var(\widehat{n})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{K}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial K\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial K} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{n}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
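Once the Fisher matrix has been inverted, the bounds on the mean life follow mechanically from the variance expression above. A sketch (the variance and covariance inputs are placeholders that would come from that inversion):&lt;br /&gt;
&lt;br /&gt;
```python
import math
from statistics import NormalDist

def ipl_exp_mean_life_bounds(K_hat, n_hat, V, var_K, var_n, cov_Kn,
                             delta=0.90, two_sided=True):
    """Approximate (lower, upper) bounds on m = 1/(K * V**n), using the
    delta-method variance of m given above."""
    m = 1.0 / (K_hat * V**n_hat)
    var_m = m**2 * (var_K / K_hat**2
                    + math.log(V)**2 * var_n
                    + 2.0 * math.log(V) / K_hat * cov_Kn)
    alpha = (1.0 - delta) / 2.0 if two_sided else 1.0 - delta
    k_alpha = NormalDist().inv_cdf(1.0 - alpha)
    factor = math.exp(k_alpha * math.sqrt(var_m) / m)
    return m / factor, m * factor
```
&lt;br /&gt;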
===Confidence Bounds on Reliability===&lt;br /&gt;
The bounds on reliability at a given time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, are estimated by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{U}}}}} \\ &lt;br /&gt;
 &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{L}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time (ML estimate of time) for a given reliability are estimated by first solving the reliability function with respect to time:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{T}=-\widehat{m}\cdot \ln (R)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding confidence bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; -{{m}_{U}}\cdot \ln (R) \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; -{{m}_{L}}\cdot \ln (R)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Approximate Confidence Bounds on IPL-Weibull==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
Using the same approach as previously discussed (with &amp;lt;math&amp;gt;\widehat{\beta }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{K}\,\!&amp;lt;/math&amp;gt; being positive parameters): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\beta }_{U}}= &amp;amp; \widehat{\beta }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{\beta }_{L}}= &amp;amp; \widehat{\beta }\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{K}_{U}}= &amp;amp; \widehat{K}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{K})}}{\widehat{K}}}} \\ &lt;br /&gt;
 &amp;amp; {{K}_{L}}= &amp;amp; \widehat{K}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{K})}}{\widehat{K}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{n}_{U}}= &amp;amp; \widehat{n}+{{K}_{\alpha }}\sqrt{Var(\widehat{n})} \\ &lt;br /&gt;
 &amp;amp; {{n}_{L}}= &amp;amp; \widehat{n}-{{K}_{\alpha }}\sqrt{Var(\widehat{n})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;K,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{\beta },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{K},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{n})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{\beta }) &amp;amp; Cov(\widehat{\beta },\widehat{K}) &amp;amp; Cov(\widehat{\beta },\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{K},\widehat{\beta }) &amp;amp; Var(\widehat{K}) &amp;amp; Cov(\widehat{K},\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{n},\widehat{\beta }) &amp;amp; Cov(\widehat{n},\widehat{K}) &amp;amp; Var(\widehat{n})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\beta }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial K} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial K\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{K}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial K\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial K} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{n}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Reliability===&lt;br /&gt;
The reliability function (ML estimate) for the IPL-Weibull model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{\left( \widehat{K}{{V}^{\widehat{n}}}T \right)}^{\widehat{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{e}^{\ln \left[ {{\left( \widehat{K}{{V}^{\widehat{n}}}T \right)}^{\widehat{\beta }}} \right]}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Setting: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\ln \left[ {{\left( \widehat{K}{{V}^{\widehat{n}}}T \right)}^{\widehat{\beta }}} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\widehat{\beta }\left[ \ln (T)+\ln (\widehat{K})+\widehat{n}\ln (V) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{e}^{\widehat{u}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to find the upper and lower bounds on &amp;lt;math&amp;gt;\widehat{u}\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial K} \right)}^{2}}Var(\widehat{K}) +{{\left( \frac{\partial \widehat{u}}{\partial n} \right)}^{2}}Var(\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial K} \right)Cov(\widehat{\beta },\widehat{K})\\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{\beta },\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial K} \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{K},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{u})= &amp;amp; {{\left( \frac{\widehat{u}}{\widehat{\beta }} \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\widehat{\beta }}{\widehat{K}} \right)}^{2}}Var(\widehat{K}) +{{\widehat{\beta }}^{2}}{{\left[ \ln (V) \right]}^{2}}Var(\widehat{n}) +\frac{2\widehat{u}}{\widehat{K}}Cov(\widehat{\beta },\widehat{K})+2\widehat{u}\ln (V)Cov(\widehat{\beta },\widehat{n})+\frac{2{{\widehat{\beta }}^{2}}\ln (V)}{\widehat{K}}Cov(\widehat{K},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{L}} \right)}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{U}} \right)}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
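The delta-method steps above can be sketched numerically; a minimal Python example, where the ML estimates and the variance/covariance terms (which in practice come from the inverted local Fisher matrix) are hypothetical values chosen only for illustration:

```python
import math

# Hypothetical ML estimates and variance/covariance terms
beta, K, n = 1.5, 1e-6, 2.0
var_b, var_K, var_n = 0.04, 1e-14, 0.09
cov_bK = cov_bn = cov_Kn = 0.0
Ka = 1.645  # standard normal quantile for 90% two-sided bounds

def reliability_bounds(T, V):
    # u = beta * [ln(T) + ln(K) + n ln(V)]
    u = beta * (math.log(T) + math.log(K) + n * math.log(V))
    # Delta-method variance of u, per the expansion above
    var_u = ((u / beta) ** 2 * var_b
             + (beta / K) ** 2 * var_K
             + (beta * math.log(V)) ** 2 * var_n
             + 2 * (u / K) * cov_bK
             + 2 * u * math.log(V) * cov_bn
             + 2 * beta ** 2 * math.log(V) / K * cov_Kn)
    s = Ka * math.sqrt(var_u)
    u_L, u_U = u - s, u + s
    # R_U uses u_L and R_L uses u_U, since R = exp(-exp(u)) is decreasing in u
    return math.exp(-math.exp(u_U)), math.exp(-math.exp(u_L))  # (R_L, R_U)

R_L, R_U = reliability_bounds(T=1000.0, V=5.0)
```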
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time for a given reliability (ML estimate of time) are estimated by first solving the reliability function with respect to time:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \ln (R) &amp;amp;=\ -{{\left( \widehat{K}{{V}^{\widehat{n}}}\widehat{T} \right)}^{\widehat{\beta }}} \\ &lt;br /&gt;
  \ln (-\ln (R)) &amp;amp;=\  \widehat{\beta }\left[ \ln (\widehat{T})+\ln (\widehat{K})+\widehat{n}\ln (V) \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\frac{1}{\widehat{\beta }}\ln (-\ln (R))-\ln (\widehat{K})-\widehat{n}\ln (V)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\widehat{u}=\ln \widehat{T}.\,\!&amp;lt;/math&amp;gt; The upper and lower bounds on &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{u}_{U}}= &amp;amp; \widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})} \\ &lt;br /&gt;
 &amp;amp; {{u}_{L}}= &amp;amp; \widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial K} \right)}^{2}}Var(\widehat{K}) +{{\left( \frac{\partial \widehat{u}}{\partial n} \right)}^{2}}Var(\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial K} \right)Cov(\widehat{\beta },\widehat{K}) \\ &lt;br /&gt;
 &amp;amp;  +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{\beta },\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial K} \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{K},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; \frac{1}{{{\widehat{\beta }}^{4}}}{{\left[ \ln (-\ln (R)) \right]}^{2}}Var(\widehat{\beta })+\frac{1}{{{\widehat{K}}^{2}}}Var(\widehat{K}) +{{\left[ \ln (V) \right]}^{2}}Var(\widehat{n}) +\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}\widehat{K}}Cov(\widehat{\beta },\widehat{K}) \\ &lt;br /&gt;
 &amp;amp;  +\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}}\ln (V)Cov(\widehat{\beta },\widehat{n}) +\frac{2\ln (V)}{\widehat{K}}Cov(\widehat{K},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on time are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{{{u}_{U}}}} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{{{u}_{L}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
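The time-bound calculation can be sketched in the same way; a minimal Python example with hypothetical ML estimates and an assumed value for Var(u), standing in for the full delta-method expansion:

```python
import math

# Hypothetical ML estimates (illustrative only)
beta, K, n = 1.5, 1e-6, 2.0
Ka = 1.645  # standard normal quantile for 90% two-sided bounds

def time_bounds(R, V, var_u):
    """Bounds on time for a given reliability, working in u = ln(T)."""
    u = (math.log(-math.log(R)) / beta
         - math.log(K) - n * math.log(V))
    s = Ka * math.sqrt(var_u)
    return math.exp(u - s), math.exp(u + s)  # (T_L, T_U)

# var_u = 0.25 is an assumed placeholder for Var(u-hat)
T_L, T_U = time_bounds(R=0.90, V=5.0, var_u=0.25)
```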
==Approximate Confidence Bounds on IPL-Lognormal==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
Since the standard deviation, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\widehat{K}\,\!&amp;lt;/math&amp;gt; are positive parameters, &amp;lt;math&amp;gt;\ln ({{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\ln (\widehat{K})\,\!&amp;lt;/math&amp;gt; are treated as normally distributed, and the bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{\sigma }_{U}}=\ &amp;amp; {{\widehat{\sigma }}_{{{T}&#039;}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}} &amp;amp;\text{ (Upper bound)} \\ &lt;br /&gt;
 {{\sigma }_{L}}=\ &amp;amp; \frac{{{\widehat{\sigma }}_{{{T}&#039;}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}} &amp;amp;\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{K}_{U}}=\ &amp;amp; \widehat{K}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{K})}}{\widehat{K}}}} &amp;amp;\text{ (Upper bound)} \\ &lt;br /&gt;
  {{K}_{L}}=\ &amp;amp; \frac{\widehat{K}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{K})}}{\widehat{K}}}}} &amp;amp;\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The lower and upper bounds on &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{n}_{U}}= &amp;amp; \widehat{n}+{{K}_{\alpha }}\sqrt{Var(\widehat{n})}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{n}_{L}}= &amp;amp; \widehat{n}-{{K}_{\alpha }}\sqrt{Var(\widehat{n})}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;K,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{K},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{n}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}),\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var({{\widehat{\sigma }}_{{{T}&#039;}}}) &amp;amp; Cov(\widehat{K},{{\widehat{\sigma }}_{{{T}&#039;}}}) &amp;amp; Cov(\widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}})  \\&lt;br /&gt;
   Cov({{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{K}) &amp;amp; Var(\widehat{K}) &amp;amp; Cov(\widehat{K},\widehat{n})  \\&lt;br /&gt;
   Cov({{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{n}) &amp;amp; Cov(\widehat{n},\widehat{K}) &amp;amp; Var\left( \widehat{n} \right)  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ F \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F=\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma _{{{T}&#039;}}^{2}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial K} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial K\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{K}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial K\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial K} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{n}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bounds on Reliability===&lt;br /&gt;
The reliability of the lognormal distribution is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({T}&#039;,V;K,n,{{\sigma }_{{{T}&#039;}}})=\int_{{{T}&#039;}}^{\infty }\frac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t+\ln (\widehat{K})+\widehat{n}\ln (V)}{{{\widehat{\sigma }}_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\widehat{z}(t,V;K,n,{{\sigma }_{{{T}&#039;}}})=\tfrac{t+\ln (\widehat{K})+\widehat{n}\ln (V)}{{{\widehat{\sigma }}_{{{T}&#039;}}}},\,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;\tfrac{d\widehat{z}}{dt}=\tfrac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}}.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;t={T}&#039;\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{z}=\tfrac{{T}&#039;+\ln (\widehat{K})+\widehat{n}\ln (V)}{{{\widehat{\sigma }}_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;, and for &amp;lt;math&amp;gt;t=\infty ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{z}=\infty .\,\!&amp;lt;/math&amp;gt; The above equation then becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(\widehat{z})=\int_{\widehat{z}({T}&#039;,V)}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The bounds on &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{z}_{U}}= &amp;amp; \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ &lt;br /&gt;
 &amp;amp; {{z}_{L}}= &amp;amp; \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{z})= &amp;amp; \left( \frac{\partial \widehat{z}}{\partial K} \right)_{\widehat{K}}^{2}Var(\widehat{K})+\left( \frac{\partial \widehat{z}}{\partial n} \right)_{\widehat{n}}^{2}Var(\widehat{n})+\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)_{{{\widehat{\sigma }}_{{{T}&#039;}}}}^{2}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2{{\left( \frac{\partial \widehat{z}}{\partial K} \right)}_{\widehat{K}}}{{\left( \frac{\partial \widehat{z}}{\partial n} \right)}_{\widehat{n}}}Cov\left( \widehat{K},\widehat{n} \right) \\ &lt;br /&gt;
 &amp;amp;  +2{{\left( \frac{\partial \widehat{z}}{\partial K} \right)}_{\widehat{K}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{K},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial n} \right)}_{\widehat{n}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{z})= &amp;amp; \frac{1}{\widehat{\sigma }_{{{T}&#039;}}^{2}}[\frac{1}{{{K}^{2}}}Var(\widehat{K})+\ln {{(V)}^{2}}Var(\widehat{n})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2\ln (V)}{K}Cov\left( \widehat{K},\widehat{n} \right)-\frac{2\widehat{z}}{K}Cov\left( \widehat{K},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)-2\widehat{z}\ln (V)Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
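The lognormal reliability bounds above can be sketched numerically using the standard normal survival function; a minimal Python example, where all parameter estimates and variance/covariance terms are hypothetical placeholders for values taken from the inverted local Fisher matrix:

```python
import math

# Hypothetical ML estimates and variance/covariance terms
K, n, sigma = 1e-4, 1.8, 0.5
var_K, var_n, var_s = 1e-10, 0.04, 0.01
cov_Kn = cov_Ks = cov_ns = 0.0
Ka = 1.645  # standard normal quantile for 90% two-sided bounds

def std_norm_sf(z):
    """Standard normal survival function via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def reliability_bounds(Tp, V):
    """Tp is the log-time T' = ln(T)."""
    z = (Tp + math.log(K) + n * math.log(V)) / sigma
    lnV = math.log(V)
    # Delta-method variance of z, per the expansion above
    var_z = (var_K / K**2 + lnV**2 * var_n + z**2 * var_s
             + 2 * lnV / K * cov_Kn
             - 2 * z / K * cov_Ks
             - 2 * z * lnV * cov_ns) / sigma**2
    s = Ka * math.sqrt(var_z)
    # R_L integrates from z_U, R_U from z_L
    return std_norm_sf(z + s), std_norm_sf(z - s)  # (R_L, R_U)

R_L, R_U = reliability_bounds(Tp=math.log(500.0), V=3.0)
```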
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time, for a given lognormal percentile (unreliability), are estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{T}&#039;(V;\widehat{K},\widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}})=-\ln (\widehat{K})-\widehat{n}\ln (V)+z\cdot {{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {T}&#039;(V;\widehat{K},\widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}})=\  \ln (T) \\&lt;br /&gt;
 \\&lt;br /&gt;
 &amp;amp; z=\  {{\Phi }^{-1}}\left[ F({T}&#039;) \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;)}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to calculate the variance of &amp;lt;math&amp;gt;{T}&#039;(V;\widehat{K},\widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}}):\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var({T}&#039;)= &amp;amp; {{\left( \frac{\partial {T}&#039;}{\partial K} \right)}^{2}}Var(\widehat{K})+{{\left( \frac{\partial {T}&#039;}{\partial n} \right)}^{2}}Var(\widehat{n})+{{\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2\left( \frac{\partial {T}&#039;}{\partial K} \right)\left( \frac{\partial {T}&#039;}{\partial n} \right)Cov\left( \widehat{K},\widehat{n} \right) \\ &lt;br /&gt;
 &amp;amp;  +2\left( \frac{\partial {T}&#039;}{\partial K} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{K},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2\left( \frac{\partial {T}&#039;}{\partial n} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var({T}&#039;)= &amp;amp; \frac{1}{{{K}^{2}}}Var(\widehat{K})+\ln {{(V)}^{2}}Var(\widehat{n})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2\ln (V)}{K}Cov\left( \widehat{K},\widehat{n} \right) \\ &lt;br /&gt;
 &amp;amp;  -\frac{2\widehat{z}}{K}Cov\left( \widehat{K},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) -2\widehat{z}\ln (V)Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; T_{U}^{\prime }= &amp;amp; \ln {{T}_{U}}={T}&#039;+{{K}_{\alpha }}\sqrt{Var({T}&#039;)} \\ &lt;br /&gt;
 &amp;amp; T_{L}^{\prime }= &amp;amp; \ln {{T}_{L}}={T}&#039;-{{K}_{\alpha }}\sqrt{Var({T}&#039;)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for &amp;lt;math&amp;gt;{{T}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{T}_{L}}\,\!&amp;lt;/math&amp;gt; yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{T_{U}^{\prime }}}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{T_{L}^{\prime }}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Inverse_Power_Law_Relationship&amp;diff=64929</id>
		<title>Inverse Power Law Relationship</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Inverse_Power_Law_Relationship&amp;diff=64929"/>
		<updated>2017-02-08T21:09:46Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Conditional Reliability */ changed R(T,t,V) to R((t|T),V)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK|6}}&lt;br /&gt;
&lt;br /&gt;
The inverse power law (IPL) model (or relationship) is commonly used for non-thermal accelerated stresses and is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V)=\frac{1}{K{{V}^{n}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; represents a quantifiable life measure, such as mean life, characteristic life, median life, &amp;lt;math&amp;gt;B(x)\,\!&amp;lt;/math&amp;gt; life, etc.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; represents the stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; is one of the model parameters to be determined, &amp;lt;math&amp;gt;(K&amp;gt;0).\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is another model parameter to be determined.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA8.1.png|center|400px|The inverse power law relationship on linear scales at different life characteristics and with a Weibull life distribution.]]&lt;br /&gt;
&lt;br /&gt;
The inverse power law appears as a straight line when plotted on log-log paper. The equation of the line is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\ln (L)=-\ln (K)-n\ln (V)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Plotting methods are widely used in estimating the parameters of the inverse power law relationship since obtaining &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is as simple as finding the slope and the intercept in the above equation.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA8.2.png|center|450px|Graphical look at the IPL relationship (log-log scale)]]&lt;br /&gt;
&lt;br /&gt;
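Because the relationship is linear on log-log scales, the slope and intercept can be obtained with an ordinary least-squares fit of ln(L) on ln(V); a minimal Python sketch, using made-up stress/life pairs generated from assumed true parameter values:

```python
import math

# Hypothetical (stress, life) pairs that follow L = 1/(K V^n) exactly
K_true, n_true = 2e-6, 1.5
stresses = [2.0, 4.0, 8.0, 16.0]
lives = [1.0 / (K_true * V**n_true) for V in stresses]

# Least-squares fit of ln(L) = -ln(K) - n ln(V)
x = [math.log(V) for V in stresses]
y = [math.log(L) for L in lives]
m = len(x)
xbar, ybar = sum(x) / m, sum(y) / m
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar

n_hat = -slope                # the slope of the line is -n
K_hat = math.exp(-intercept)  # the intercept is -ln(K)
```

With exact (noise-free) data the fit recovers the true parameters; with real data the estimates would scatter around them.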
===A Look at the Parameter &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
The parameter &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; in the inverse power relationship is a measure of the effect of the stress on the life. The larger the absolute value of &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, the greater the effect of the stress. Negative values of &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; indicate an increasing life with increasing stress. An absolute value of &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; approaching zero indicates a small effect of the stress on the life, with no effect (constant life with stress) when &amp;lt;math&amp;gt;n=0.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA8.3.png|center|400px|Life vs. Stress for different values of n.]]&lt;br /&gt;
&lt;br /&gt;
===Acceleration Factor===&lt;br /&gt;
For the IPL relationship the acceleration factor is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{A}_{F}}=\frac{{{L}_{USE}}}{{{L}_{Accelerated}}}=\frac{\tfrac{1}{KV_{u}^{n}}}{\tfrac{1}{KV_{A}^{n}}}={{\left( \frac{{{V}_{A}}}{{{V}_{u}}} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{L}_{USE}}\,\!&amp;lt;/math&amp;gt; is the life at use stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{L}_{Accelerated}}\,\!&amp;lt;/math&amp;gt; is the life at the accelerated stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{u}}\,\!&amp;lt;/math&amp;gt; is the use stress level.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{A}}\,\!&amp;lt;/math&amp;gt; is the accelerated stress level.&lt;br /&gt;
&lt;br /&gt;
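The acceleration factor above reduces to a one-line calculation; a minimal Python sketch, where the stress levels and the value of n are illustrative:

```python
def acceleration_factor(V_use, V_acc, n):
    """IPL acceleration factor A_F = (V_A / V_u)^n; K cancels out."""
    return (V_acc / V_use) ** n

# Hypothetical values: accelerated stress 4x the use stress, n = 1.5
A_F = acceleration_factor(V_use=2.0, V_acc=8.0, n=1.5)  # (8/2)^1.5 = 8.0
```

Note that the parameter K cancels in the ratio, so the acceleration factor depends only on the stress ratio and n.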
=IPL-Exponential=&lt;br /&gt;
The IPL-exponential model can be derived by setting &amp;lt;math&amp;gt;m=L(V)\,\!&amp;lt;/math&amp;gt; in the exponential &#039;&#039;pdf&#039;&#039;, yielding the following IPL-exponential &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V)=K{{V}^{n}}{{e}^{-K{{V}^{n}}t}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that this is a 2-parameter model. The failure rate (the parameter of the exponential distribution) of the model is simply &amp;lt;math&amp;gt;\lambda =K{{V}^{n}},\,\!&amp;lt;/math&amp;gt; and is only a function of stress.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA8.4.png|center|450px|IPL-exponential failure rate function at different stress levels.]]&lt;br /&gt;
&lt;br /&gt;
==IPL-Exponential Statistical Properties Summary==&lt;br /&gt;
===Mean or MTTF===&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T},\,\!&amp;lt;/math&amp;gt; or Mean Time To Failure (MTTF) for the IPL-exponential relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \overline{T}= &amp;amp; \int_{0}^{\infty }t\cdot f(t,V)dt=\int_{0}^{\infty }t\cdot K{{V}^{n}}{{e}^{-K{{V}^{n}}t}}dt =\  \frac{1}{K{{V}^{n}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the MTTF is a function of stress only and is simply equal to the IPL relationship (which is the original assumption), when using the exponential distribution.&lt;br /&gt;
&lt;br /&gt;
===Median===&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; for the IPL-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=0.693\frac{1}{K{{V}^{n}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode===&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; for the IPL-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Standard Deviation===&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, for the IPL-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=\frac{1}{K{{V}^{n}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===IPL-Exponential Reliability Function===&lt;br /&gt;
The IPL-exponential reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)={{e}^{-TK{{V}^{n}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function is the complement of the IPL-exponential cumulative distribution function:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=1-Q(T,V)=1-\int_{0}^{T}f(T,V)dT\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=1-\int_{0}^{T}K{{V}^{n}}{{e}^{-K{{V}^{n}}T}}dT={{e}^{-K{{V}^{n}}T}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conditional Reliability===&lt;br /&gt;
The conditional reliability function for the IPL-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V)=\frac{R(T+t,V)}{R(T,V)}=\frac{{{e}^{-\lambda (T+t)}}}{{{e}^{-\lambda T}}}={{e}^{-K{{V}^{n}}t}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
For the IPL-exponential model, the reliable life or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({{t}_{R}},V)={{e}^{-K{{V}^{n}}{{t}_{R}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\ln [R({{t}_{R}},V)]=-K{{V}^{n}}{{t}_{R}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=-\frac{1}{K{{V}^{n}}}\ln [R({{t}_{R}},V)]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
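The reliability, conditional reliability, and reliable life expressions for the IPL-exponential model can be sketched together; a minimal Python example with hypothetical parameter values:

```python
import math

def reliability(t, V, K, n):
    """IPL-exponential reliability R(t, V) = exp(-K V^n t)."""
    return math.exp(-K * V**n * t)

def conditional_reliability(t, T, V, K, n):
    """R((t|T), V); memoryless, so the result does not depend on the age T."""
    return reliability(T + t, V, K, n) / reliability(T, V, K, n)

def reliable_life(R_goal, V, K, n):
    """Mission duration t_R achieving a desired reliability goal."""
    return -math.log(R_goal) / (K * V**n)

# Hypothetical parameter values (illustrative only)
K, n, V = 1e-5, 2.0, 10.0
t_R = reliable_life(0.90, V, K, n)
```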
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Parameter Estimation===&lt;br /&gt;
Substituting the inverse power law relationship into the exponential log-likelihood equation yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ KV_{i}^{n}{{e}^{-KV_{i}^{n}{{T}_{i}}}} \right] -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }KV_{i}^{n}T_{i}^{\prime }+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-T_{Li}^{\prime \prime }KV_{i}^{n}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-T_{Ri}^{\prime \prime }KV_{i}^{n}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; is the IPL parameter (unknown, the first of two parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the second IPL parameter (unknown, the second of two parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;\widehat{K},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{n}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial K}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt;, where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \frac{\partial \Lambda }{\partial K}=\ &amp;amp; \frac{1}{K}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}-\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}V_{i}^{n}{{T}_{i}}-\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }V_{i}^{n}T_{i}^{\prime } \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\left( T_{Li}^{\prime \prime }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime }R_{Ri}^{\prime \prime } \right)V_{i}^{n}}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \frac{\partial \Lambda }{\partial n}=\ &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln ({{V}_{i}})-K\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}V_{i}^{n}\ln ({{V}_{i}}){{T}_{i}} -K\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }V_{i}^{n}\ln ({{V}_{i}})T_{i}^{\prime } \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{KV_{i}^{n}\ln ({{V}_{i}})\left( T_{Li}^{\prime \prime }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime }R_{Ri}^{\prime \prime } \right)}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
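In practice the two score equations above have no closed-form solution and are solved numerically. As an illustrative sketch only (hypothetical data and starting values, not the routine used by any particular software), the same estimates can be obtained by directly maximizing the IPL-exponential log-likelihood for exact failure times:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical exact failure times at two stress levels (illustrative only)
stress = np.array([2.0] * 5 + [4.0] * 5)
times = np.array([180.0, 220.0, 310.0, 350.0, 400.0,
                  30.0, 45.0, 60.0, 75.0, 90.0])

def neg_log_lik(params):
    """Negative IPL-exponential log-likelihood, exact failures only."""
    ln_K, n = params                      # optimize ln(K) to keep K positive
    lam = np.exp(ln_K) * stress ** n      # failure rate lambda = K * V^n
    return -np.sum(np.log(lam) - lam * times)

res = minimize(neg_log_lik, x0=[np.log(1e-3), 1.0], method="Nelder-Mead")
K_hat, n_hat = np.exp(res.x[0]), res.x[1]
```

With two stress groups and two parameters, the fitted rates reproduce the group-wise exponential MLEs exactly, which is a useful check on the optimizer.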
&lt;br /&gt;
=IPL-Weibull=&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Inverse Power Law Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
The IPL-Weibull model can be derived by setting &amp;lt;math&amp;gt;\eta =L(V)\,\!&amp;lt;/math&amp;gt; in the Weibull &#039;&#039;pdf&#039;&#039;, yielding the following IPL-Weibull &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V)=\beta K{{V}^{n}}{{\left( K{{V}^{n}}t \right)}^{\beta -1}}{{e}^{-{{\left( K{{V}^{n}}t \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is a three-parameter model; it is therefore more flexible, but it also requires more laborious techniques for parameter estimation. The IPL-Weibull model yields the IPL-exponential model for &amp;lt;math&amp;gt;\beta =1.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
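The reduction to the IPL-exponential model at &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt; can be verified numerically. A minimal sketch with illustrative parameter values:

```python
import math

def ipl_weibull_pdf(t, V, beta, K, n):
    """f(t,V) = beta*K*V^n * (K*V^n*t)^(beta-1) * exp(-(K*V^n*t)^beta)."""
    a = K * V ** n
    return beta * a * (a * t) ** (beta - 1.0) * math.exp(-((a * t) ** beta))

# With beta = 1, the pdf collapses to the IPL-exponential form
# K*V^n * exp(-K*V^n*t); the values below are illustrative only.
f_weibull = ipl_weibull_pdf(50.0, 3.0, 1.0, 0.001, 2.0)
f_expon = 0.001 * 3.0 ** 2 * math.exp(-0.001 * 3.0 ** 2 * 50.0)
```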
&lt;br /&gt;
==IPL-Weibull Statistical Properties Summary==&lt;br /&gt;
===Mean or MTTF===&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T}\,\!&amp;lt;/math&amp;gt; (also called &amp;lt;math&amp;gt;MTTF\,\!&amp;lt;/math&amp;gt; ), of the IPL-Weibull model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=\frac{1}{K{{V}^{n}}}\cdot \Gamma \left( \frac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Gamma \left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt; is the gamma function evaluated at the value of &amp;lt;math&amp;gt;\left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Median===&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; of the IPL-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=\frac{1}{K{{V}^{n}}}{{\left( \ln 2 \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode===&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; of the IPL-Weibull model (for &amp;lt;math&amp;gt;\beta &gt;1\,\!&amp;lt;/math&amp;gt;) is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=\frac{1}{K{{V}^{n}}}{{\left( 1-\frac{1}{\beta } \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Standard Deviation===&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}},\,\!&amp;lt;/math&amp;gt; of the IPL-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=\frac{1}{K{{V}^{n}}}\cdot \sqrt{\Gamma \left( \frac{2}{\beta }+1 \right)-{{\left( \Gamma \left( \frac{1}{\beta }+1 \right) \right)}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
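The four summary statistics above share the common factor &amp;lt;math&amp;gt;1/(K{{V}^{n}})\,\!&amp;lt;/math&amp;gt; (the Weibull characteristic life at stress &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt;). A numerical sketch, with purely illustrative parameter values:

```python
from math import gamma, log, sqrt

# Illustrative parameter values only
beta, K, n, V = 2.0, 0.001, 1.5, 10.0
eta = 1.0 / (K * V ** n)          # characteristic life eta = 1/(K*V^n)

mttf   = eta * gamma(1.0 / beta + 1.0)
median = eta * log(2.0) ** (1.0 / beta)
mode   = eta * (1.0 - 1.0 / beta) ** (1.0 / beta)   # valid for beta > 1
sd     = eta * sqrt(gamma(2.0 / beta + 1.0) - gamma(1.0 / beta + 1.0) ** 2)
```

For this right-skewed case (&amp;lt;math&amp;gt;\beta =2\,\!&amp;lt;/math&amp;gt;), the mode falls below the median, which falls below the mean, as expected.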
&lt;br /&gt;
===IPL-Weibull Reliability Function===&lt;br /&gt;
The IPL-Weibull reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)={{e}^{-{{\left( K{{V}^{n}}T \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conditional Reliability Function===&lt;br /&gt;
&lt;br /&gt;
The IPL-Weibull conditional reliability function at a specified stress level is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,t,V)=\frac{R(T+t,V)}{R(T,V)}=\frac{{{e}^{-{{\left[ K{{V}^{n}}\left( T+t \right) \right]}^{\beta }}}}}{{{e}^{-{{\left( K{{V}^{n}}T \right)}^{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,t,V)={{e}^{-\left[ {{\left( K{{V}^{n}}\left( T+t \right) \right)}^{\beta }}-{{\left( K{{V}^{n}}T \right)}^{\beta }} \right]}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
&lt;br /&gt;
For the IPL-Weibull model, the reliable life, &amp;lt;math&amp;gt;{T}_{R}\,\!&amp;lt;/math&amp;gt;, of a unit for a specified reliability and starting the mission at age zero is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{T}_{R}}=\frac{1}{K{{V}^{n}}}{{\left\{ -\ln \left[ R\left( {{T}_{R}},V \right) \right] \right\}}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
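The reliable life equation is simply the reliability function inverted for time, so substituting &amp;lt;math&amp;gt;{{T}_{R}}\,\!&amp;lt;/math&amp;gt; back into &amp;lt;math&amp;gt;R(T,V)\,\!&amp;lt;/math&amp;gt; must recover the reliability goal. A sketch with illustrative values:

```python
from math import exp, log

def reliable_life(R_goal, V, beta, K, n):
    """T_R = (1/(K*V^n)) * (-ln R)^(1/beta)."""
    return (-log(R_goal)) ** (1.0 / beta) / (K * V ** n)

def reliability(T, V, beta, K, n):
    """R(T,V) = exp(-(K*V^n*T)^beta)."""
    return exp(-((K * V ** n * T) ** beta))

# Round trip with illustrative parameters: R(T_R, V) should equal R_goal
t_r = reliable_life(0.9, 10.0, 2.0, 0.001, 1.0)
```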
&lt;br /&gt;
===IPL-Weibull Failure Rate Function===&lt;br /&gt;
&lt;br /&gt;
The IPL-Weibull failure rate function, &amp;lt;math&amp;gt;\lambda (T)\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda \left( T,V \right)=\frac{f\left( T,V \right)}{R\left( T,V \right)}=\beta K{{V}^{n}}{{\left( K{{V}^{n}}T \right)}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
Substituting the inverse power law relationship into the Weibull log-likelihood function yields: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \Lambda = \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \beta KV_{i}^{n}{{\left( KV_{i}^{n}{{T}_{i}} \right)}^{\beta -1}}{{e}^{-{{\left( KV_{i}^{n}{{T}_{i}} \right)}^{\beta }}}} \right] -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( KV_{i}^{n}T_{i}^{\prime } \right)}^{\beta }} +\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-{{\left( KV_{i}^{n}T_{Li}^{\prime \prime } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-{{\left( KV_{i}^{n}T_{Ri}^{\prime \prime } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{K}\,\!&amp;lt;/math&amp;gt; is the IPL parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{n}\,\!&amp;lt;/math&amp;gt; is the second IPL parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;K,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \beta }=0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial K}=0\,\!&amp;lt;/math&amp;gt; and   &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt;, where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}\frac{\partial \Lambda }{\partial \beta }=\ &amp;amp; \frac{1}{\beta }\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}+\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\ln \left( KV_{i}^{n}{{T}_{i}} \right) -\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}{{\left( KV_{i}^{n}{{T}_{i}} \right)}^{\beta }}\ln \left( KV_{i}^{n}{{T}_{i}} \right) -\underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }{{\left( \,KV_{i}^{n}T_{i}^{\prime } \right)}^{\beta }}\ln \left( KV_{i}^{n}T_{i}^{\prime } \right) \\   &amp;amp; \overset{FI}{\mathop{\underset{i=1} {\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{{{\left( KV_{i}^{n} \right)}^{\beta }}\left[ R_{Li}^{\prime \prime }T_{Li}^{\prime \prime \beta }\left( \ln (KV_{i}^{n}T_{Li}^{\prime \prime }) \right)-R_{Ri}^{\prime \prime }T_{Ri}^{\prime \prime \beta }\left( \ln (KV_{i}^{n}T_{Ri}^{\prime \prime }) \right) \right]}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }} \\ &lt;br /&gt;
\frac{\partial \Lambda }{\partial K}=\ &amp;amp; \frac{\beta }{K}\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}-\frac{\beta }{K}\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}{{\left( KV_{i}^{n}{{T}_{i}} \right)}^{\beta }} -\frac{\beta }{K}\underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }{{\left( KV_{i}^{n}T_{i}^{\prime } \right)}^{\beta }} \overset{{}}{\mathop{-\beta \underset{i=1}{\mathop{\overset{FI}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,}}\,N_{i}^{\prime \prime }\frac{{{K}^{\beta -1}}V_{i}^{n\beta }\left[ T_{Li}^{\prime \prime \beta }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime \beta }R_{Ri}^{\prime \prime } \right]}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}  \\&lt;br /&gt;
\frac{\partial\Lambda }{\partial n}=\ &amp;amp; \beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\ln ({{V}_{i}}) -\beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\ln ({{V}_{i}}){{\left( KV_{i}^{n}{{T}_{i}} \right)}^{\beta }} -\beta \underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }\ln ({{V}_{i}}){{\left( KV_{i}^{n}{{T}_{i}} \right)}^{\beta }} \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{n{{K}^{\beta }}V_{i}^{\beta (n-1)}\left[ T_{Li}^{\prime \prime \beta }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime \beta }R_{Ri}^{\prime \prime } \right]}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
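As with the IPL-exponential case, these three score equations are solved numerically. A hedged sketch (hypothetical complete data, log-parametrized so the positivity constraints on &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; hold automatically; this is not the algorithm used by any particular software):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical exact failure times at two stress levels (illustrative only)
stress = np.array([2.0] * 5 + [4.0] * 5)
times = np.array([210.0, 260.0, 300.0, 340.0, 390.0,
                  40.0, 55.0, 62.0, 70.0, 85.0])

def neg_log_lik(p):
    """Negative IPL-Weibull log-likelihood, exact failures only."""
    beta, K = np.exp(p[0]), np.exp(p[1])  # log-parametrized for positivity
    n = p[2]
    a = K * stress ** n                   # a = K*V^n = 1/eta
    return -np.sum(np.log(beta * a) + (beta - 1.0) * np.log(a * times)
                   - (a * times) ** beta)

res = minimize(neg_log_lik, x0=[0.0, np.log(1e-3), 1.0],
               method="Nelder-Mead", options={"maxiter": 5000, "fatol": 1e-10})
beta_hat, K_hat, n_hat = np.exp(res.x[0]), np.exp(res.x[1]), res.x[2]
```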
&lt;br /&gt;
===IPL-Weibull Example===&lt;br /&gt;
{{:Inverse_Power_Law_Example}}&lt;br /&gt;
&lt;br /&gt;
=IPL-Lognormal=&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; for the Inverse Power Law relationship and the lognormal distribution is given next.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the lognormal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\overline{{{T}&#039;}}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
T&#039;=\ln(T)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; = times-to-failure.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\overline{T}&#039;\,\!&amp;lt;/math&amp;gt; = mean of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\sigma_{T&#039;}\,\!&amp;lt;/math&amp;gt; = standard deviation of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
The median of the lognormal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=e^{\overline{T}&#039;}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The IPL-lognormal model &#039;&#039;pdf&#039;&#039; can be obtained first by setting &amp;lt;math&amp;gt;\breve{T}=L(V)\,\!&amp;lt;/math&amp;gt; in the lognormal &#039;&#039;pdf&#039;&#039;. Therefore:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt; \breve{T}=L(V)=\frac{1}{K \cdot V^n}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;e^{\overline{T&#039;}}=\frac{1}{K \cdot V^n}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}&#039;=-\ln(K)-n\ln(V) \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So the IPL-lognormal model &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T,V)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;+ln(K)+n ln(V)}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
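The substitution &amp;lt;math&amp;gt;\overline{T}&#039;=-\ln(K)-n\ln(V)\,\!&amp;lt;/math&amp;gt; used in this &#039;&#039;pdf&#039;&#039; can be sanity-checked numerically: exponentiating the implied log-mean must recover the IPL median. A sketch with illustrative values:

```python
from math import exp, log

# Illustrative parameter values only
K, n, V = 1e-4, 2.0, 50.0

median = 1.0 / (K * V ** n)       # breve{T} = L(V) = 1/(K*V^n)
mu_log = -log(K) - n * log(V)     # implied mean of ln(T), T-bar'

# exp(mu_log) should equal the median, confirming the identity above
```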
&lt;br /&gt;
==IPL-Lognormal Statistical Properties Summary==&lt;br /&gt;
===The Mean===&lt;br /&gt;
The mean life of the IPL-lognormal model (mean of the times-to-failure), &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\bar{T}=\ {{e}^{\bar{{T}&#039;}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}= {{e}^{-\ln(K)-n\ln(V)+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The mean of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\bar{T}}^{^{\prime }}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{T}}^{\prime }}=\ln \left( {\bar{T}} \right)-\frac{1}{2}\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===The Standard Deviation===&lt;br /&gt;
The standard deviation of the IPL-lognormal model (standard deviation of the times-to-failure), &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{\sigma }_{T}}= &amp;amp; \sqrt{\left( {{e}^{2\bar{{T}&#039;}+\sigma _{{{T}&#039;}}^{2}}} \right)\,\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)} = \sqrt{\left( {{e}^{2\left( -\ln (K)-n\ln (V) \right)+\sigma _{{{T}&#039;}}^{2}}} \right)\,\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The standard deviation of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\sqrt{\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
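The last two equations form an invertible map between the linear-scale parameters &amp;lt;math&amp;gt;(\bar{T},{{\sigma }_{T}})\,\!&amp;lt;/math&amp;gt; and the log-scale parameters &amp;lt;math&amp;gt;({{\bar{T}}^{\prime }},{{\sigma }_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt;. A sketch of both directions, with a round-trip check:

```python
from math import exp, log, sqrt

def log_scale_params(mean_T, sd_T):
    """(T-bar', sigma_T') from the mean and sd of the times-to-failure."""
    s2 = log(sd_T ** 2 / mean_T ** 2 + 1.0)
    return log(mean_T) - 0.5 * s2, sqrt(s2)

def linear_scale_params(mu, sigma):
    """Inverse map: (T-bar, sigma_T) from the log-scale parameters."""
    mean_T = exp(mu + 0.5 * sigma ** 2)
    sd_T = sqrt(exp(2.0 * mu + sigma ** 2) * (exp(sigma ** 2) - 1.0))
    return mean_T, sd_T

# Round trip with illustrative log-scale values (5.0, 0.3)
m, s = linear_scale_params(5.0, 0.3)
mu_back, sigma_back = log_scale_params(m, s)
```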
&lt;br /&gt;
===The Mode===&lt;br /&gt;
The mode of the IPL-lognormal model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}={{e}^{{\bar{T}}&#039;-\sigma _{{{T}&#039;}}^{2}}}={{e}^{-\ln (K)-n\ln (V)-\sigma _{{{T}&#039;}}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===IPL-Lognormal Reliability===&lt;br /&gt;
The reliability for a mission of time T, starting at age 0, for the IPL-lognormal model is determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,\,V)=\int_{T}^{\infty }f(t,\,V)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,\,V)=\int_{{{T}^{^{\prime }}}}^{\infty }\frac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t+\ln (K)+n\ln (V)}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
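Because the integrand is a standard normal density in the transformed variable, this reliability is just one minus the standard normal CDF evaluated at the standardized log-time. A sketch with illustrative parameter values:

```python
from math import erf, log, sqrt

# Illustrative parameter values only
K, n, sigma_Tp = 1e-4, 2.0, 0.5

def reliability(T, V):
    """R(T,V) = 1 - Phi((ln T + ln K + n ln V) / sigma_T')."""
    z = (log(T) + log(K) + n * log(V)) / sigma_Tp
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))   # 1 - Phi(z)

# At the median life T = 1/(K*V^n), z = 0 and the reliability is 0.5
```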
&lt;br /&gt;
===Reliable Life===&lt;br /&gt;
The reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;T_{R}^{\prime }=-\ln (K)-n\ln (V)+z\cdot {{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z={{\Phi }^{-1}}\left[ F\left( T_{R}^{\prime },\,V \right) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;,\,V)}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;{T}&#039;=\ln (T)\,\!&amp;lt;/math&amp;gt;, the reliable life, &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}={{e}^{T_{R}^{\prime }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
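The three steps above (find the &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; quantile of the unreliability, form &amp;lt;math&amp;gt;T_{R}^{\prime }\,\!&amp;lt;/math&amp;gt;, then exponentiate) can be sketched as follows, with illustrative parameter values:

```python
from math import exp, log
from statistics import NormalDist

# Illustrative parameter values only
K, n, sigma_Tp = 1e-4, 2.0, 0.5

def reliable_life(R_goal, V):
    """t_R = exp(T_R'), T_R' = -ln K - n ln V + z*sigma_T', z = Phi^-1(1-R)."""
    z = NormalDist().inv_cdf(1.0 - R_goal)   # quantile of the unreliability F
    return exp(-log(K) - n * log(V) + z * sigma_Tp)

# At R = 0.5, z = 0 and t_R is the median life 1/(K*V^n)
```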
&lt;br /&gt;
===Lognormal Failure Rate===&lt;br /&gt;
The lognormal failure rate is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (T,\,V)=\frac{f(T,\,V)}{R(T,\,V)}=\frac{\tfrac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;+\ln (K)+n\ln (V)}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}}{\int_{{{T}&#039;}}^{\infty }\tfrac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;+\ln (K)+n\ln (V)}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
===Maximum Likelihood Estimation Method===&lt;br /&gt;
The complete IPL-lognormal log-likelihood function is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{1}{{{\sigma }_{{{T}&#039;}}}{{T}_{i}}}\varphi \left( \frac{\ln \left( {{T}_{i}} \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right) \right] \text{ }+\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\ln \left[ 1-\Phi \left( \frac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right) \right] +\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Li}^{\prime \prime }=\frac{\ln T_{Li}^{\prime \prime }+\ln K+n\ln {{V}_{i}}}{{{\sigma }_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Ri}^{\prime \prime }=\frac{\ln T_{Ri}^{\prime \prime }+\ln K+n\ln {{V}_{i}}}{{{\sigma }_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
* &amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{i}^{th}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; is the standard deviation of the natural logarithms of the times-to-failure (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; is the IPL parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the second IPL parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{i}^{th}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{i}^{th}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{i}^{th}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{i}^{th}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for &amp;lt;math&amp;gt;{{\hat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\hat {K}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\hat {n}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}=0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial K}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt;, where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   \frac{\partial \Lambda }{\partial K}= &amp;amp; -\frac{1}{K\cdot \sigma _{{{T}&#039;}}^{2}}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}(\ln ({{T}_{i}})+\ln (K)+n\ln ({{V}_{i}})) \ -\frac{1}{K\cdot {{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{\varphi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right)} \overset{FI}{\mathop{+\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\phi (z_{Ri}^{\prime \prime })-\phi (z_{Li}^{\prime \prime })}{K\sigma _{T}^{\prime }(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))} \\ &lt;br /&gt;
  \frac{\partial \Lambda }{\partial n}= &amp;amp; -\frac{1}{\sigma _{{{T}&#039;}}^{2}}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln ({{V}_{i}})\left[ \ln ({{T}_{i}})+\ln (K)+n\ln ({{V}_{i}}) \right] -\frac{1}{{{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\ln ({{V}_{i}})\frac{\varphi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right)} +\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\ln {{V}_{i}}\left( \phi (z_{Ri}^{\prime \prime })-\phi (z_{Li}^{\prime \prime }) \right)}{\sigma _{T}^{\prime }(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))} \\ &lt;br /&gt;
  \frac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}= &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left( \frac{{{\left( \ln ({{T}_{i}})+\ln (K)+n\ln ({{V}_{i}}) \right)}^{2}}}{\sigma _{{{T}&#039;}}^{3}}-\frac{1}{{{\sigma }_{{{T}&#039;}}}} \right) \ +\frac{1}{{{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{\left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right)\,\varphi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln (K)+n\ln ({{V}_{i}})}{{{\sigma }_{{{T}&#039;}}}} \right)} \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{z_{Ri}^{\prime \prime }\phi (z_{Ri}^{\prime \prime })-z_{Li}^{\prime \prime }\phi (z_{Li}^{\prime \prime })}{\sigma _{T}^{\prime }(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\varphi \left( x \right)=\frac{1}{\sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( x \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
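For the special case of exact failures only (no suspensions or intervals), the log-likelihood above is maximized by ordinary least squares of &amp;lt;math&amp;gt;\ln (T)\,\!&amp;lt;/math&amp;gt; on &amp;lt;math&amp;gt;\ln (V)\,\!&amp;lt;/math&amp;gt;, since &amp;lt;math&amp;gt;\ln (T)\,\!&amp;lt;/math&amp;gt; is normal with mean &amp;lt;math&amp;gt;-\ln (K)-n\ln (V)\,\!&amp;lt;/math&amp;gt; and common standard deviation. A sketch with hypothetical data:

```python
import numpy as np

# Hypothetical complete (exact-failure) data at two stress levels
stress = np.array([2.0] * 4 + [4.0] * 4)
times = np.array([310.0, 350.0, 400.0, 450.0, 60.0, 70.0, 85.0, 95.0])

y, x = np.log(times), np.log(stress)
slope, intercept = np.polyfit(x, y, 1)  # ln T ~ intercept + slope * ln V
n_hat = -slope                          # mean of ln T is -ln K - n ln V
K_hat = np.exp(-intercept)
resid = y - (intercept + slope * x)
sigma_hat = np.sqrt(np.mean(resid ** 2))  # ML estimate (no df correction)
```

With censored or interval data this shortcut no longer applies, and the full score equations must be solved numerically.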
&lt;br /&gt;
=IPL and the Coffin-Manson Relationship=&lt;br /&gt;
In accelerated life testing analysis, thermal cycling is commonly treated as a low-cycle fatigue problem, using the inverse power law relationship. Coffin and Manson suggested that the number of cycles-to-failure of a metal subjected to thermal cycling is given by Nelson [[Appendix_E:_References|[28]]]:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;N=\frac{C}{{{\left( \Delta T \right)}^{\gamma }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of cycles to failure.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is a constant, characteristic of the metal.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\gamma \,\!&amp;lt;/math&amp;gt; is another constant, also characteristic of the metal.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\Delta T\,\!&amp;lt;/math&amp;gt; is the range of the thermal cycle.&lt;br /&gt;
&lt;br /&gt;
This relationship is essentially the inverse power law relationship, with the stress &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; replaced by the stress range &amp;lt;math&amp;gt;\Delta V\,\!&amp;lt;/math&amp;gt;. This is an attempt to simplify the analysis of a time-varying stress test by using a constant stress model. It is a very commonly used methodology for thermal cycling and mechanical fatigue tests. However, this simplification makes the following assumptions and carries the corresponding shortcomings. First, the acceleration effects due to the stress rate of change are ignored; in other words, it is assumed that the failures are accelerated by the stress difference and not by how rapidly this difference occurs. Second, the acceleration effects due to stress relaxation and creep are ignored.&lt;br /&gt;
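Taking logarithms of the Coffin-Manson relationship makes it linear, so its two constants can be estimated from cycles-to-failure observed at two thermal ranges. A minimal sketch with hypothetical numbers (the cycle counts and ranges below are illustrative only):

```python
from math import log

# Hypothetical cycles-to-failure observed at two thermal ranges (deg C)
dT1, N1 = 80.0, 12000.0
dT2, N2 = 120.0, 4500.0

# N = C / (dT)^gamma  =>  ln N = ln C - gamma * ln(dT)
gamma_hat = (log(N1) - log(N2)) / (log(dT2) - log(dT1))
C_hat = N1 * dT1 ** gamma_hat

# Extrapolated cycles-to-failure at a milder use-level range of 40 deg C
N_use = C_hat / 40.0 ** gamma_hat
```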
&lt;br /&gt;
&#039;&#039;&#039;Example&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This example illustrates the use of the Coffin-Manson relationship. It is a very simple test that can be repeated at any time, and the reader is encouraged to perform it.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Product:&#039;&#039;&#039;	         ACME Paper Clip Model 4456&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Reliability Target:&#039;&#039;&#039;	 99% at a 90% confidence after 30 cycles of 45º&lt;br /&gt;
&lt;br /&gt;
After consulting with our paper-clip engineers, the acceleration stress was determined to be the angle to which the clips are bent. Two bend stresses of 90º and 180º were used. A sample of six paper clips was tested to failure at both 90º and 180º bends, and the following data were obtained.&lt;br /&gt;
&lt;br /&gt;
[[Image:chp8degrees2cyclesTbl.png|center|300px|]]&lt;br /&gt;
&lt;br /&gt;
The test was performed as shown in the next figures (a side-view of the paper-clip is shown).&lt;br /&gt;
      &lt;br /&gt;
[[Image:90degrees.png|center|300px|]]&lt;br /&gt;
&lt;br /&gt;
Using the IPL-lognormal model, determine whether the reliability target was met.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
By using the IPL relationship to analyze the data, we are actually using a constant stress model to analyze a cycling process, so caution must be exercised when performing the test. The rate of change of the bend angle must be constant and equal at both the 90º and 180º stress levels, and also equal to the rate of change of the angle at the use-level 45º bend. Rate effects influence the life of the paper clip; by keeping the rate constant and equal at all stress levels, we can eliminate these rate effects from the analysis. Otherwise, the analysis will not be valid. &lt;br /&gt;
&lt;br /&gt;
The data were entered and analyzed using ReliaSoft&#039;s ALTA.&lt;br /&gt;
&lt;br /&gt;
[[Image:90-180time.png|center|675px|]]&lt;br /&gt;
 &lt;br /&gt;
The parameters of the IPL-lognormal model were estimated to be:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \sigma =\ &amp;amp; 0.198533 \\ &lt;br /&gt;
 K=\ &amp;amp; 0.000012 \\ &lt;br /&gt;
 n=\ &amp;amp; 1.856808  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using the QCP, the 90% lower 1-sided confidence bound on reliability after 30 cycles for a 45º bend was estimated to be &amp;lt;math&amp;gt;99.6%\,\!&amp;lt;/math&amp;gt;, as shown below.&lt;br /&gt;
&lt;br /&gt;
[[Image:stdprobqcp.png|center|500px|]]&lt;br /&gt;
&lt;br /&gt;
This meets the target reliability of 99%.&lt;br /&gt;
&lt;br /&gt;
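The point estimate behind this result can be double-checked by hand. The sketch below recomputes the IPL-lognormal reliability at 30 cycles and a 45º bend from the parameter estimates above, using only the Python standard library; it reproduces the point estimate, not the 90% lower confidence bound reported by the QCP.

```python
from math import log, sqrt, erf

# IPL-lognormal estimates from the ALTA analysis above
sigma, K, n = 0.198533, 0.000012, 1.856808
V, t = 45.0, 30.0          # use-level bend angle and mission cycles

# Mean of ln(T) under the IPL life-stress model: mu' = -ln(K) - n*ln(V)
mu_log = -log(K) - n * log(V)
z = (log(t) - mu_log) / sigma

# Standard normal CDF via erf; reliability is R = 1 - Phi(z)
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
R = 1.0 - Phi(z)
```

As expected, the point estimate comfortably exceeds the 99% target (the QCP's 99.6% figure is the more conservative lower confidence bound).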
=IPL Confidence Bounds=&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==Approximate Confidence Bounds on IPL-Exponential==&lt;br /&gt;
===Confidence Bounds on the Mean Life===&lt;br /&gt;
From the inverse power law relationship, the mean life for the exponential distribution is given by setting &amp;lt;math&amp;gt;m=L(V)\,\!&amp;lt;/math&amp;gt;. The upper &amp;lt;math&amp;gt;({{m}_{U}})\,\!&amp;lt;/math&amp;gt; and lower &amp;lt;math&amp;gt;({{m}_{L}})\,\!&amp;lt;/math&amp;gt; bounds on the ML estimate of the mean life are estimated by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{U}}=\widehat{m}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{L}}=\widehat{m}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is defined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt; is the confidence level, then &amp;lt;math&amp;gt;\alpha =\tfrac{1-\delta }{2}\,\!&amp;lt;/math&amp;gt; for the two-sided bounds, and &amp;lt;math&amp;gt;\alpha =1-\delta \,\!&amp;lt;/math&amp;gt; for the one-sided bounds. The variance of &amp;lt;math&amp;gt;\widehat{m}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{m})= &amp;amp; {{\left( \frac{\partial m}{\partial K} \right)}^{2}}Var(\widehat{K})+{{\left( \frac{\partial m}{\partial n} \right)}^{2}}Var(\widehat{n}) +2\left( \frac{\partial m}{\partial K} \right)\left( \frac{\partial m}{\partial n} \right)Cov(\widehat{K},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Var(\widehat{m})=\frac{1}{{{\widehat{K}}^{2}}{{V}^{2\widehat{n}}}}\left[ \frac{1}{{{\widehat{K}}^{2}}}Var(\widehat{K})+{{\left[ \ln (V) \right]}^{2}}Var(\widehat{n})+\frac{2\ln (V)}{\widehat{K}}Cov(\widehat{K},\widehat{n}) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
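The quantile K_α defined above can be computed with the inverse normal CDF in the Python standard library. This sketch shows both the two-sided and one-sided cases for a 90% confidence level:

```python
from statistics import NormalDist

def k_alpha(delta, two_sided=True):
    """K_alpha such that alpha = 1 - Phi(K_alpha), where
    alpha = (1 - delta)/2 for two-sided bounds and
    alpha = 1 - delta for one-sided bounds."""
    alpha = (1 - delta) / 2 if two_sided else 1 - delta
    return NormalDist().inv_cdf(1 - alpha)

Ka_two = k_alpha(0.90)                    # two-sided 90% bounds
Ka_one = k_alpha(0.90, two_sided=False)   # one-sided 90% bound
```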
The variances and covariance of &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are estimated from the Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{K},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{n})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{K}) &amp;amp; Cov(\widehat{K},\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{n},\widehat{K}) &amp;amp; Var(\widehat{n})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{K}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial K\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial K} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{n}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
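Putting these pieces together, the sketch below inverts a 2×2 Fisher matrix and propagates the resulting variances into bounds on the mean life. All numerical values (the Fisher matrix entries, the estimates of K and n, and the stress V) are invented for illustration only:

```python
from math import exp, log, sqrt

# Hypothetical local Fisher matrix for (K, n): the negative second
# partials of the log-likelihood, evaluated at the ML estimates.
F = [[1.0e10, -3.0e5],
     [-3.0e5, 100.0]]

# Invert the 2x2 matrix to get the variance-covariance matrix
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
var_K = F[1][1] / det
var_n = F[0][0] / det
cov_Kn = -F[0][1] / det

K_hat, n_hat, V = 1.0e-4, 1.5, 300.0
m_hat = 1.0 / (K_hat * V**n_hat)          # IPL-exponential mean life

# Delta-method variance of m, as derived above
var_m = (1.0 / (K_hat**2 * V**(2 * n_hat))) * (
    var_K / K_hat**2 + log(V)**2 * var_n + 2 * log(V) / K_hat * cov_Kn
)

Ka = 1.645  # two-sided 90% bounds
m_U = m_hat * exp(Ka * sqrt(var_m) / m_hat)
m_L = m_hat * exp(-Ka * sqrt(var_m) / m_hat)
```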
===Confidence Bounds on Reliability===&lt;br /&gt;
The bounds on reliability at a given time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, are estimated by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{U}}}}} \\ &lt;br /&gt;
 &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{L}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time (ML estimate of time) for a given reliability are estimated by first solving the reliability function with respect to time:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{T}=-\widehat{m}\cdot \ln (R)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding confidence bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; -{{m}_{U}}\cdot \ln (R) \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; -{{m}_{L}}\cdot \ln (R)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
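The time-bounds step above can be sketched in a few lines. The mean life and its variance are invented values for illustration:

```python
from math import exp, log, sqrt

# Hypothetical ML estimate of the mean life and its variance
m_hat, var_m = 1000.0, 4.0e4
Ka = 1.645  # two-sided 90% bounds

m_U = m_hat * exp(Ka * sqrt(var_m) / m_hat)
m_L = m_hat * exp(-Ka * sqrt(var_m) / m_hat)

# Time for a target reliability R, and its confidence bounds
R = 0.90
T_hat = -m_hat * log(R)
T_U = -m_U * log(R)
T_L = -m_L * log(R)
```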
==Approximate Confidence Bounds on IPL-Weibull==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
Using the same approach as previously discussed (&amp;lt;math&amp;gt;\widehat{\beta }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{K}\,\!&amp;lt;/math&amp;gt; are positive parameters): &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\beta }_{U}}= &amp;amp; \widehat{\beta }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{\beta }_{L}}= &amp;amp; \widehat{\beta }\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{K}_{U}}= &amp;amp; \widehat{K}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{K})}}{\widehat{K}}}} \\ &lt;br /&gt;
 &amp;amp; {{K}_{L}}= &amp;amp; \widehat{K}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{K})}}{\widehat{K}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{n}_{U}}= &amp;amp; \widehat{n}+{{K}_{\alpha }}\sqrt{Var(\widehat{n})} \\ &lt;br /&gt;
 &amp;amp; {{n}_{L}}= &amp;amp; \widehat{n}-{{K}_{\alpha }}\sqrt{Var(\widehat{n})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;K,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{\beta },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{K},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{n})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{\beta }) &amp;amp; Cov(\widehat{\beta },\widehat{K}) &amp;amp; Cov(\widehat{\beta },\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{K},\widehat{\beta }) &amp;amp; Var(\widehat{K}) &amp;amp; Cov(\widehat{K},\widehat{n})  \\&lt;br /&gt;
   Cov(\widehat{n},\widehat{\beta }) &amp;amp; Cov(\widehat{n},\widehat{K}) &amp;amp; Var(\widehat{n})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\beta }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial K} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial K\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{K}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial K\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial K} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{n}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Reliability===&lt;br /&gt;
The reliability function (ML estimate) for the IPL-Weibull model is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{\left( \widehat{K}{{V}^{\widehat{n}}}T \right)}^{\widehat{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{e}^{\ln \left[ {{\left( \widehat{K}{{V}^{\widehat{n}}}T \right)}^{\widehat{\beta }}} \right]}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Setting: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\ln \left[ {{\left( \widehat{K}{{V}^{\widehat{n}}}T \right)}^{\widehat{\beta }}} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\widehat{\beta }\left[ \ln (T)+\ln (\widehat{K})+\widehat{n}\ln (V) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{e}^{\widehat{u}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to find the upper and lower bounds on &amp;lt;math&amp;gt;\widehat{u}\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial K} \right)}^{2}}Var(\widehat{K}) +{{\left( \frac{\partial \widehat{u}}{\partial n} \right)}^{2}}Var(\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial K} \right)Cov(\widehat{\beta },\widehat{K})\\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{\beta },\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial K} \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{K},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{u})= &amp;amp; {{\left( \frac{\widehat{u}}{\widehat{\beta }} \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\widehat{\beta }}{\widehat{K}} \right)}^{2}}Var(\widehat{K}) +{{\widehat{\beta }}^{2}}{{\left[ \ln (V) \right]}^{2}}Var(\widehat{n}) +\frac{2\widehat{u}}{\widehat{K}}Cov(\widehat{\beta },\widehat{K})+2\widehat{u}\ln (V)Cov(\widehat{\beta },\widehat{n})+\frac{2{{\widehat{\beta }}^{2}}\ln (V)}{\widehat{K}}Cov(\widehat{K},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{L}} \right)}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{U}} \right)}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
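The reliability-bounds recipe above can be sketched as follows. The parameter values and Var(û) are invented for illustration; note that the lower bound on û yields the upper bound on reliability, and vice versa:

```python
from math import log, exp, sqrt

# Hypothetical IPL-Weibull estimates and an assumed delta-method Var(u)
beta, K, n = 1.8, 5.0e-6, 1.5
V, T = 200.0, 30.0
var_u = 0.09

u = beta * (log(T) + log(K) + n * log(V))
Ka = 1.645  # two-sided 90% bounds
u_U, u_L = u + Ka * sqrt(var_u), u - Ka * sqrt(var_u)

R_hat = exp(-exp(u))   # point estimate of reliability
R_U = exp(-exp(u_L))   # lower u bound -> upper reliability bound
R_L = exp(-exp(u_U))   # upper u bound -> lower reliability bound
```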
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time for a given reliability (ML estimate of time) are estimated by first solving the reliability function with respect to time:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \ln (R) &amp;amp;=\ -{{\left( \widehat{K}{{V}^{\widehat{n}}}\widehat{T} \right)}^{\widehat{\beta }}} \\ &lt;br /&gt;
  \ln (-\ln (R)) &amp;amp;=\  \widehat{\beta }\left[ \ln (\widehat{T})+\ln (\widehat{K})+\widehat{n}\ln (V) \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\frac{1}{\widehat{\beta }}\ln (-\ln (R))-\ln (\widehat{K})-\widehat{n}\ln (V)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\widehat{u}=\ln \widehat{T}.\,\!&amp;lt;/math&amp;gt; The upper and lower bounds on &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{u}_{U}}= &amp;amp; \widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})} \\ &lt;br /&gt;
 &amp;amp; {{u}_{L}}= &amp;amp; \widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial K} \right)}^{2}}Var(\widehat{K}) +{{\left( \frac{\partial \widehat{u}}{\partial n} \right)}^{2}}Var(\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial K} \right)Cov(\widehat{\beta },\widehat{K}) \\ &lt;br /&gt;
 &amp;amp;  +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{\beta },\widehat{n}) +2\left( \frac{\partial \widehat{u}}{\partial K} \right)\left( \frac{\partial \widehat{u}}{\partial n} \right)Cov(\widehat{K},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{u})= &amp;amp; \frac{1}{{{\widehat{\beta }}^{4}}}{{\left[ \ln (-\ln (R)) \right]}^{2}}Var(\widehat{\beta })+\frac{1}{{{\widehat{K}}^{2}}}Var(\widehat{K}) +{{\left[ \ln (V) \right]}^{2}}Var(\widehat{n}) +\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}\widehat{K}}Cov(\widehat{\beta },\widehat{K}) \\ &lt;br /&gt;
 &amp;amp;  +\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}}\ln (V)Cov(\widehat{\beta },\widehat{n}) +\frac{2\ln (V)}{\widehat{K}}Cov(\widehat{K},\widehat{n})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on time are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{{{u}_{U}}}} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{{{u}_{L}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Approximate Confidence Bounds on IPL-Lognormal==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
Since the standard deviation, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\widehat{K}\,\!&amp;lt;/math&amp;gt; are positive parameters, &amp;lt;math&amp;gt;\ln ({{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\ln (\widehat{K})\,\!&amp;lt;/math&amp;gt; are treated as normally distributed, and the bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{\sigma }_{U}}=\ &amp;amp; {{\widehat{\sigma }}_{{{T}&#039;}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}} &amp;amp;\text{ (Upper bound)} \\ &lt;br /&gt;
 {{\sigma }_{L}}=\ &amp;amp; \frac{{{\widehat{\sigma }}_{{{T}&#039;}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}} &amp;amp;\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{K}_{U}}=\ &amp;amp; \widehat{K}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{K})}}{\widehat{K}}}} &amp;amp;\text{ (Upper bound)} \\ &lt;br /&gt;
  {{K}_{L}}=\ &amp;amp; \frac{\widehat{K}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{K})}}{\widehat{K}}}}} &amp;amp;\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The lower and upper bounds on &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{n}_{U}}= &amp;amp; \widehat{n}+{{K}_{\alpha }}\sqrt{Var(\widehat{n})}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{n}_{L}}= &amp;amp; \widehat{n}-{{K}_{\alpha }}\sqrt{Var(\widehat{n})}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;K,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{K},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{n}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}),\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var({{\widehat{\sigma }}_{{{T}&#039;}}}) &amp;amp; Cov(\widehat{K},{{\widehat{\sigma }}_{{{T}&#039;}}}) &amp;amp; Cov(\widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}})  \\&lt;br /&gt;
   Cov({{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{K}) &amp;amp; Var(\widehat{K}) &amp;amp; Cov(\widehat{K},\widehat{n})  \\&lt;br /&gt;
   Cov({{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{n}) &amp;amp; Cov(\widehat{n},\widehat{K}) &amp;amp; Var\left( \widehat{n} \right)  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ F \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F=\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma _{{{T}&#039;}}^{2}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial K} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial K\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{K}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial K\partial n}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial n\partial K} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{n}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bounds on Reliability===&lt;br /&gt;
The reliability of the lognormal distribution is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({T}&#039;,V;K,n,{{\sigma }_{{{T}&#039;}}})=\int_{{{T}&#039;}}^{\infty }\frac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t+\ln (\widehat{K})+\widehat{n}\ln (V)}{{{\widehat{\sigma }}_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\widehat{z}(t,V;K,n,{{\sigma }_{T}})=\tfrac{t+\ln (\widehat{K})+\widehat{n}\ln (V)}{{{\widehat{\sigma }}_{{{T}&#039;}}}},\,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;\tfrac{d\widehat{z}}{dt}=\tfrac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}}.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;t={T}&#039;\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{z}=\tfrac{{T}&#039;+\ln (\widehat{K})+\widehat{n}\ln (V)}{{{\widehat{\sigma }}_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;, and for &amp;lt;math&amp;gt;t=\infty ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{z}=\infty .\,\!&amp;lt;/math&amp;gt; The above equation then becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(\widehat{z})=\int_{\widehat{z}({T}&#039;,V)}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The bounds on &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{z}_{U}}= &amp;amp; \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ &lt;br /&gt;
 &amp;amp; {{z}_{L}}= &amp;amp; \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var(\widehat{z})= &amp;amp; \left( \frac{\partial \widehat{z}}{\partial K} \right)_{\widehat{K}}^{2}Var(\widehat{K})+\left( \frac{\partial \widehat{z}}{\partial n} \right)_{\widehat{n}}^{2}Var(\widehat{n})+\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)_{{{\widehat{\sigma }}_{{{T}&#039;}}}}^{2}Var({{\widehat{\sigma }}_{T}}) +2{{\left( \frac{\partial \widehat{z}}{\partial K} \right)}_{\widehat{K}}}{{\left( \frac{\partial \widehat{z}}{\partial n} \right)}_{\widehat{n}}}Cov\left( \widehat{K},\widehat{n} \right) \\ &lt;br /&gt;
 &amp;amp;  +2{{\left( \frac{\partial \widehat{z}}{\partial K} \right)}_{\widehat{K}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{K},{{\widehat{\sigma }}_{T}} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial n} \right)}_{\widehat{n}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{n},{{\widehat{\sigma }}_{T}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{z})= &amp;amp; \frac{1}{\widehat{\sigma }_{{{T}&#039;}}^{2}}[\frac{1}{{{K}^{2}}}Var(\widehat{K})+{{\left[ \ln (V) \right]}^{2}}Var(\widehat{n})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2\ln (V)}{K}Cov\left( \widehat{K},\widehat{n} \right)-\frac{2\widehat{z}}{K}Cov\left( \widehat{K},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)-2\widehat{z}\ln (V)Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
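The same pattern applies for the lognormal. A sketch with invented parameter values and an assumed Var(ẑ) (the lower bound on ẑ gives the upper bound on reliability):

```python
from math import log, sqrt, erf

# Hypothetical IPL-lognormal estimates and an assumed Var(z)
sigma, K, n = 0.2, 1.2e-5, 1.86
V, T = 45.0, 30.0
var_z = 0.25

z = (log(T) + log(K) + n * log(V)) / sigma
Ka = 1.645  # two-sided 90% bounds
z_U, z_L = z + Ka * sqrt(var_z), z - Ka * sqrt(var_z)

# Standard normal CDF via erf
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
R_hat = 1.0 - Phi(z)
R_U = 1.0 - Phi(z_L)   # lower z bound -> upper reliability bound
R_L = 1.0 - Phi(z_U)   # upper z bound -> lower reliability bound
```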
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time for a given lognormal percentile (unreliability) are estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{T}&#039;(V;\widehat{K},\widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}})=-\ln (\widehat{K})-\widehat{n}\ln (V)+z\cdot {{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {T}&#039;(V;\widehat{K},\widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}})=\  \ln (T) \\&lt;br /&gt;
 \\&lt;br /&gt;
 &amp;amp; z=\  {{\Phi }^{-1}}\left[ F({T}&#039;) \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;)}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to calculate the variance of &amp;lt;math&amp;gt;{T}&#039;(V;\widehat{K},\widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}}):\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var({T}&#039;)= &amp;amp; {{\left( \frac{\partial {T}&#039;}{\partial K} \right)}^{2}}Var(\widehat{K})+{{\left( \frac{\partial {T}&#039;}{\partial n} \right)}^{2}}Var(\widehat{n})+{{\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2\left( \frac{\partial {T}&#039;}{\partial K} \right)\left( \frac{\partial {T}&#039;}{\partial n} \right)Cov\left( \widehat{K},\widehat{n} \right) \\ &lt;br /&gt;
 &amp;amp;  +2\left( \frac{\partial {T}&#039;}{\partial K} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{K},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2\left( \frac{\partial {T}&#039;}{\partial n} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var({T}&#039;)= &amp;amp; \frac{1}{{{K}^{2}}}Var(\widehat{K})+{{\left[ \ln (V) \right]}^{2}}Var(\widehat{n})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2\ln (V)}{K}Cov\left( \widehat{K},\widehat{n} \right) \\ &lt;br /&gt;
 &amp;amp;  -\frac{2\widehat{z}}{K}Cov\left( \widehat{K},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) -2\widehat{z}\ln (V)Cov\left( \widehat{n},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; T_{U}^{\prime }= &amp;amp; \ln {{T}_{U}}={T}&#039;+{{K}_{\alpha }}\sqrt{Var({T}&#039;)} \\ &lt;br /&gt;
 &amp;amp; T_{L}^{\prime }= &amp;amp; \ln {{T}_{L}}={T}&#039;-{{K}_{\alpha }}\sqrt{Var({T}&#039;)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for &amp;lt;math&amp;gt;{{T}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{T}_{L}}\,\!&amp;lt;/math&amp;gt; yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{T_{U}^{\prime }}}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{T_{L}^{\prime }}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Eyring_Relationship&amp;diff=64928</id>
		<title>Eyring Relationship</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Eyring_Relationship&amp;diff=64928"/>
		<updated>2017-02-08T21:07:06Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Conditional Reliability Function */ changed R(t,T,V) to R((t|T),V)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK|5}}&lt;br /&gt;
&lt;br /&gt;
The Eyring relationship was formulated from quantum mechanics principles, as discussed in Glasstone et al. [[Appendix_E:_References|[9]]], and is most often used when thermal stress (temperature) is the acceleration variable. However, the Eyring relationship is also often used for stress variables other than temperature, such as humidity. The relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V)=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; represents a quantifiable life measure, such as mean life, characteristic life, median life, &amp;lt;math&amp;gt;B(x)\,\!&amp;lt;/math&amp;gt; life, etc.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; represents the stress level (&#039;&#039;&#039;temperature values are in absolute units: kelvin or degrees Rankine&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is one of the model parameters to be determined.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is another model parameter to be determined.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA7.1.png|center|400px|Graphical look at the Eyring relationship (linear scale), at different life characteristics and with a Weibull life distribution.]]&lt;br /&gt;
&lt;br /&gt;
The Eyring relationship is similar to the Arrhenius relationship. This similarity is more apparent if it is rewritten in the following way: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L(V)=\ &amp;amp; \frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}} =\ &amp;amp; \frac{{{e}^{-A}}}{V}{{e}^{\tfrac{B}{V}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V)=\frac{1}{V}Const.\cdot {{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Arrhenius relationship is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V)=C\cdot {{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Comparing the above equation to the Arrhenius relationship, it can be seen that the only difference between the two relationships is the &amp;lt;math&amp;gt;\tfrac{1}{V}\,\!&amp;lt;/math&amp;gt; term. In general, both relationships yield very similar results. Like the Arrhenius relationship, the Eyring relationship is plotted on log-reciprocal paper.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA7.2.png|center|400px|Eyring relationship plotted on Arrhenius paper.]]&lt;br /&gt;
&lt;br /&gt;
===Acceleration Factor===&lt;br /&gt;
For the Eyring model the acceleration factor is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{A}_{F}}=\frac{{{L}_{USE}}}{{{L}_{Accelerated}}}=\frac{\tfrac{1}{{{V}_{u}}}\text{ }{{e}^{-\left( A-\tfrac{B}{{{V}_{u}}} \right)}}}{\tfrac{1}{{{V}_{A}}}\text{ }{{e}^{-\left( A-\tfrac{B}{{{V}_{A}}} \right)}}}=\frac{\text{ }{{e}^{\tfrac{B}{{{V}_{u}}}}}}{\text{ }{{e}^{\tfrac{B}{{{V}_{A}}}}}}=\frac{{{V}_{A}}}{{{V}_{u}}}{{e}^{B\left( \tfrac{1}{{{V}_{u}}}-\tfrac{1}{{{V}_{A}}} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
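Because the parameter A cancels, the Eyring acceleration factor depends only on B and the two stress levels. A sketch with invented values (B, and use/accelerated temperatures in kelvin):

```python
from math import exp

# Hypothetical Eyring parameter and stress levels (kelvin)
B = 4000.0
V_use, V_acc = 300.0, 350.0

# A cancels out of the ratio L(V_use)/L(V_acc)
AF = (V_acc / V_use) * exp(B * (1.0 / V_use - 1.0 / V_acc))
```

Raising the accelerated stress above the use stress always gives AF > 1 when B > 0.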
=Eyring-Exponential=&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the 1-parameter exponential distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\lambda \cdot {{e}^{-\lambda \cdot t}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The failure rate of the 1-parameter exponential distribution (presented in detail [[Distributions Used in Accelerated Testing#The Exponential Distribution|here]]) is the reciprocal of the mean life:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda =\frac{1}{m}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
thus:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\frac{1}{m}\cdot {{e}^{-\tfrac{t}{m}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Eyring-exponential model &#039;&#039;pdf&#039;&#039; can then be obtained by setting &amp;lt;math&amp;gt;m=L(V)\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;m=L(V)=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and substituting for &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; in the exponential &#039;&#039;pdf&#039;&#039; equation:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V)=V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}{{e}^{-V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}\cdot t}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Eyring-Exponential Statistical Properties Summary==&lt;br /&gt;
====Mean or MTTF====&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T},\,\!&amp;lt;/math&amp;gt; or Mean Time To Failure (MTTF) for the Eyring-exponential is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \overline{T}= &amp;amp; \int_{0}^{\infty }t\cdot f(t,V)dt=\int_{0}^{\infty }t\cdot V{{e}^{\left( A-\tfrac{B}{V} \right)}}{{e}^{-tV{{e}^{\left( A-\tfrac{B}{V} \right)}}}}dt =\   \frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Median====&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
for the Eyring-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=0.693\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Mode====&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
for the Eyring-exponential model is &amp;lt;math&amp;gt;\tilde{T}=0.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Standard Deviation====&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, for the Eyring-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Eyring-Exponential Reliability Function====&lt;br /&gt;
The Eyring-exponential reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)={{e}^{-T\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function is the complement of the Eyring-exponential cumulative distribution function or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=1-Q(T,V)=1-\int_{0}^{T}f(T,V)dT\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=1-\int_{0}^{T}V{{e}^{\left( A-\tfrac{B}{V} \right)}}{{e}^{-T\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}}}dT={{e}^{-T\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Conditional Reliability====&lt;br /&gt;
The conditional reliability function for the Eyring-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V)=\frac{R(T+t,V)}{R(T,V)}=\frac{{{e}^{-\lambda (T+t)}}}{{{e}^{-\lambda T}}}={{e}^{-t\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Reliable Life====&lt;br /&gt;
For the Eyring-exponential model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({{t}_{R}},V)={{e}^{-{{t}_{R}}\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln [R({{t}_{R}},V)]=-{{t}_{R}}\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=-\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\ln [R({{t}_{R}},V)]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
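The reliability and reliable-life expressions above invert each other exactly, which is easy to confirm numerically. A minimal sketch; the values A = -10, B = 1000 and V = 300 are hypothetical:

```python
import math

def eyring_life(V, A, B):
    """Mean life m = L(V) under the Eyring model."""
    return (1.0 / V) * math.exp(-(A - B / V))

def eyring_exp_reliability(t, V, A, B):
    """R(t, V) = exp(-t / L(V))."""
    return math.exp(-t / eyring_life(V, A, B))

def eyring_exp_reliable_life(R_goal, V, A, B):
    """Mission duration meeting a reliability goal: t_R = -L(V) * ln(R_goal)."""
    return -eyring_life(V, A, B) * math.log(R_goal)
```

Setting the goal to 0.5 returns the median, ln(2)·L(V), matching the 0.693·L(V) expression above.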
==Parameter Estimation==&lt;br /&gt;
====Maximum Likelihood Estimation Method====&lt;br /&gt;
The complete Eyring-exponential log-likelihood function is composed of three summation portions (exact times-to-failure, suspensions and interval data):&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\ln (L)=\Lambda = &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ {{V}_{i}}\cdot {{e}^{\left( A-\tfrac{B}{{{V}_{i}}} \right)}}{{e}^{-{{V}_{i}}\cdot {{e}^{\left( A-\tfrac{B}{{{V}_{i}}} \right)}}\cdot {{T}_{i}}}} \right]-\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\cdot {{V}_{i}}\cdot {{e}^{\left( A-\tfrac{B}{{{V}_{i}}} \right)}}\cdot T_{i}^{\prime }+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-T_{Li}^{\prime \prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-T_{Ri}^{\prime \prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is the Eyring parameter (unknown, the first of two parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the second Eyring parameter (unknown, the second of two parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;\widehat{A}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{B}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial A}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0\,\!&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{\partial \Lambda }{\partial A}= &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left( 1-{{V}_{i}}\cdot {{e}^{\left( A-\tfrac{B}{{{V}_{i}}} \right)}}{{T}_{i}} \right)-\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{V}_{i}}\cdot {{e}^{\left( A-\tfrac{B}{{{V}_{i}}} \right)}}T_{i}^{\prime }-\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\frac{\left( T_{Li}^{\prime \prime }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime }R_{Ri}^{\prime \prime } \right){{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}}}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{\partial \Lambda }{\partial B}= &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left[ {{e}^{\left( A-\tfrac{B}{{{V}_{i}}} \right)}}{{T}_{i}}-\frac{1}{{{V}_{i}}} \right]+\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\cdot {{e}^{\left( A-\tfrac{B}{{{V}_{i}}} \right)}}T_{i}^{\prime }+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\frac{\left( T_{Li}^{\prime \prime }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime }R_{Ri}^{\prime \prime } \right){{e}^{A-\tfrac{B}{{{V}_{i}}}}}}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
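One way to sanity-check the two partial derivatives is to compare them against a numerical differentiation of the log-likelihood. The sketch below covers only the exact-failure portion (suspension and interval terms omitted); the grouped data set and the parameter values are hypothetical:

```python
import math

# Grouped exact failure data: (N_i, V_i, T_i); values are hypothetical.
DATA = [(3, 300.0, 1500.0), (2, 350.0, 800.0), (4, 400.0, 350.0)]

def loglik(A, B, data=DATA):
    """Exact-failure portion of the Eyring-exponential log-likelihood."""
    ll = 0.0
    for N, V, T in data:
        rate = V * math.exp(A - B / V)  # failure rate 1 / L(V_i)
        ll += N * (math.log(rate) - rate * T)
    return ll

def dLdA(A, B, data=DATA):
    """Analytic partial with respect to A (exact-failure terms only)."""
    return sum(N * (1.0 - V * math.exp(A - B / V) * T) for N, V, T in data)

def dLdB(A, B, data=DATA):
    """Analytic partial with respect to B (exact-failure terms only)."""
    return sum(N * (math.exp(A - B / V) * T - 1.0 / V) for N, V, T in data)
```

Central finite differences of `loglik` should agree with `dLdA` and `dLdB` to several digits at any trial point.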
&lt;br /&gt;
=Eyring-Weibull=&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Eyring Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; for the 2-parameter Weibull distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scale parameter (or characteristic life) of the Weibull distribution is &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt;. The Eyring-Weibull model &#039;&#039;pdf&#039;&#039; can then be obtained by setting &amp;lt;math&amp;gt;\eta =L(V)\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\eta =L(V)=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta }=V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting for &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; into the Weibull &#039;&#039;pdf&#039;&#039; yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V)=\beta \cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}{{\left( t\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta -1}}{{e}^{-{{\left( t\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
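The substitution can be checked numerically: integrating this pdf from 0 to T must equal the unreliability 1 - exp(-(T·V·e^(A-B/V))^β) given by the reliability function later in this section. A minimal sketch with hypothetical parameter values:

```python
import math

def eyring_weibull_pdf(t, V, A, B, beta):
    """Eyring-Weibull pdf with 1/eta = V * exp(A - B/V); parameters hypothetical."""
    k = V * math.exp(A - B / V)   # 1 / eta
    x = t * k                     # t / eta
    return beta * k * x ** (beta - 1.0) * math.exp(-x ** beta)

def eyring_weibull_reliability(t, V, A, B, beta):
    """R(t, V) = exp(-(t * V * exp(A - B/V))**beta)."""
    return math.exp(-(t * V * math.exp(A - B / V)) ** beta)
```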
==Eyring-Weibull Statistical Properties Summary==&lt;br /&gt;
====Mean or MTTF====&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T}\,\!&amp;lt;/math&amp;gt;, or Mean Time To Failure (MTTF) for the Eyring-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\cdot \Gamma \left( \frac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Gamma \left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt; is the gamma function evaluated at the value of &amp;lt;math&amp;gt;\left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
====Median====&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
for the Eyring-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}{{\left( \ln 2 \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Mode====&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
for the Eyring-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}{{\left( 1-\frac{1}{\beta } \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Standard Deviation====&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}},\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
for the Eyring-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\cdot \sqrt{\Gamma \left( \frac{2}{\beta }+1 \right)-{{\left( \Gamma \left( \frac{1}{\beta }+1 \right) \right)}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Eyring-Weibull Reliability Function====&lt;br /&gt;
The Eyring-Weibull reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)={{e}^{-{{\left( V\cdot T\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Conditional Reliability Function====&lt;br /&gt;
The Eyring-Weibull conditional reliability function at a specified stress level is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V)=\frac{R(T+t,V)}{R(T,V)}=\frac{{{e}^{-{{\left( \left( T+t \right)\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta }}}}}{{{e}^{-{{\left( V\cdot T\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V)={{e}^{-\left[ {{\left( \left( T+t \right)\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta }}-{{\left( V\cdot T\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta }} \right]}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Reliable Life====&lt;br /&gt;
For the Eyring-Weibull model, the reliable life, &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt;, of a unit for a specified reliability and starting the mission at age zero is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}{{\left\{ -\ln \left[ R\left( {{t}_{R}},V \right) \right] \right\}}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
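Both the conditional reliability identity and the reliable life expression are straightforward to verify numerically. A sketch with hypothetical parameter values:

```python
import math

def eyring_weibull_reliability(t, V, A, B, beta):
    """R(t, V) = exp(-(t * V * exp(A - B/V))**beta); parameter values hypothetical."""
    return math.exp(-(t * V * math.exp(A - B / V)) ** beta)

def conditional_reliability(t, T, V, A, B, beta):
    """R(t | T) = R(T + t, V) / R(T, V)."""
    return (eyring_weibull_reliability(T + t, V, A, B, beta)
            / eyring_weibull_reliability(T, V, A, B, beta))

def reliable_life(R_goal, V, A, B, beta):
    """t_R = (1 / (V * exp(A - B/V))) * (-ln R_goal)**(1/beta)."""
    return (1.0 / (V * math.exp(A - B / V))) * (-math.log(R_goal)) ** (1.0 / beta)
```

The conditional form agrees with the single-exponent expression above, and the reliable life maps back onto the reliability goal exactly.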
====Eyring-Weibull Failure Rate Function====&lt;br /&gt;
The Eyring-Weibull failure rate function, &amp;lt;math&amp;gt;\lambda (T)\,\!&amp;lt;/math&amp;gt;, is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda \left( T,V \right)=\frac{f\left( T,V \right)}{R\left( T,V \right)}=\beta {{\left( T\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
====Maximum Likelihood Estimation Method====&lt;br /&gt;
The Eyring-Weibull log-likelihood function is composed of three summation portions (exact times-to-failure, suspensions and interval data):&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\ln (L)=\Lambda = &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \beta \cdot {{V}_{i}}\cdot {{e}^{A-\tfrac{B}{{{V}_{i}}}}}{{\left( {{T}_{i}}{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta -1}}{{e}^{-{{\left( {{T}_{i}}{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}}} \right]-\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( {{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}}T_{i}^{\prime } \right)}^{\beta }}+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-{{\left( T_{Li}^{\prime \prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-{{\left( T_{Ri}^{\prime \prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is the Eyring parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the second Eyring parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \beta }=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial A}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{\partial \Lambda }{\partial A}= &amp;amp; \beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}-\beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}{{\left( {{T}_{i}}{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}-\beta \underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }{{\left( T_{i}^{\prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}-\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\frac{\beta V_{i}^{\beta }{{e}^{A\beta -\tfrac{B\beta }{{{V}_{i}}}}}\left[ {{(T_{Li}^{\prime \prime })}^{\beta }}R_{Li}^{\prime \prime }-{{(T_{Ri}^{\prime \prime })}^{\beta }}R_{Ri}^{\prime \prime } \right]}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{\partial \Lambda }{\partial B}= &amp;amp; -\beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\frac{1}{{{V}_{i}}}+\beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\frac{1}{{{V}_{i}}}{{\left( {{T}_{i}}{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}+\beta \underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }\frac{1}{{{V}_{i}}}{{\left( T_{i}^{\prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\frac{\beta V_{i}^{(\beta -1)}{{e}^{A\beta -\tfrac{B\beta }{{{V}_{i}}}}}\left[ {{(T_{Li}^{\prime \prime })}^{\beta }}R_{Li}^{\prime \prime }-{{(T_{Ri}^{\prime \prime })}^{\beta }}R_{Ri}^{\prime \prime } \right]}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{\partial \Lambda }{\partial \beta }= &amp;amp; \frac{1}{\beta }\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}+\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left( {{T}_{i}}{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)-\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}{{\left( {{T}_{i}}{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}\ln \left( {{T}_{i}}{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)\\&lt;br /&gt;
 &amp;amp; -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( T_{i}^{\prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}\ln \left( T_{i}^{\prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)-\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\frac{R_{Li}^{\prime \prime }{{\left( T_{Li}^{\prime \prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}\ln \left( T_{Li}^{\prime \prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)-R_{Ri}^{\prime \prime }{{\left( T_{Ri}^{\prime \prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}\ln \left( T_{Ri}^{\prime \prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
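For the exact-failure portion, the beta partial reduces to a sum of N_i·[1/beta + ln(x_i) - x_i^beta·ln(x_i)] with x_i = T_i·V_i·e^(A-B/V_i), which can be checked against a numerical derivative of the log-likelihood. Data and parameter values below are hypothetical:

```python
import math

# Grouped exact failure data: (N_i, V_i, T_i); values are hypothetical.
DATA = [(3, 300.0, 1500.0), (2, 350.0, 800.0)]

def loglik(beta, A, B, data=DATA):
    """Exact-failure portion of the Eyring-Weibull log-likelihood."""
    ll = 0.0
    for N, V, T in data:
        x = T * V * math.exp(A - B / V)  # T_i / eta_i
        ll += N * (math.log(beta) + math.log(V) + A - B / V
                   + (beta - 1.0) * math.log(x) - x ** beta)
    return ll

def dLdbeta(beta, A, B, data=DATA):
    """Analytic beta partial (exact-failure terms only)."""
    total = 0.0
    for N, V, T in data:
        x = T * V * math.exp(A - B / V)
        total += N * (1.0 / beta + math.log(x) - x ** beta * math.log(x))
    return total
```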
&lt;br /&gt;
===Eyring-Weibull Example===&lt;br /&gt;
{{:Eyring_Example}}&lt;br /&gt;
&lt;br /&gt;
=Eyring-Lognormal=&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the lognormal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\overline{{{T}&#039;}}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{T}&#039;=\ln (T) &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
T =\text{times-to-failure}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and&lt;br /&gt;
*&amp;lt;math&amp;gt;\overline{{{T}&#039;}}=\,\!&amp;lt;/math&amp;gt; mean of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\,\!&amp;lt;/math&amp;gt; standard deviation of the natural logarithms of the times-to-failure. &lt;br /&gt;
&lt;br /&gt;
The Eyring-lognormal model can be obtained by first setting &amp;lt;math&amp;gt;\breve{T}=L(V)\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=L(V)=\frac{1}{V}{{e}^{-(A-\tfrac{B}{V})}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{e}^{{{\overline{T}}^{\prime }}}}=\frac{1}{V}{{e}^{-(A-\tfrac{B}{V})}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\overline{T}}^{\prime }}=-\ln (V)-A+\frac{B}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting this into the lognormal &#039;&#039;pdf&#039;&#039; yields the Eyring-lognormal model &#039;&#039;pdf&#039;&#039;:  &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T,V)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;+\ln (V)+A-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
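Because this is an ordinary lognormal pdf whose log-mean is mu = -ln(V) - A + B/V, it can be cross-checked against the normal distribution in Python's standard library. Parameter values below are hypothetical:

```python
import math
from statistics import NormalDist

def eyring_lognormal_pdf(t, V, A, B, sigma):
    """Eyring-lognormal pdf; the mean of the log-times is mu = -ln(V) - A + B/V.
    sigma is the standard deviation of the log-times."""
    mu = -math.log(V) - A + B / V
    z = (math.log(t) - mu) / sigma
    return math.exp(-0.5 * z * z) / (t * sigma * math.sqrt(2.0 * math.pi))
```

The identity f_T(t) = f_N(ln t) / t relating a lognormal pdf to its underlying normal pdf holds exactly here.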
==Eyring-Lognormal Statistical Properties Summary==&lt;br /&gt;
&lt;br /&gt;
====The Mean====&lt;br /&gt;
The mean life of the Eyring-lognormal model (mean of the times-to-failure), &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt;, is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\bar{T}={{e}^{\bar{{T}&#039;}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}={{e}^{-\ln (V)-A+\tfrac{B}{V}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
The mean of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\bar{T}}^{^{\prime }}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{T}}^{\prime }}=\ln \left( {\bar{T}} \right)-\frac{1}{2}\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====The Median====&lt;br /&gt;
The median of the Eyring-lognormal model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}={{e}^{{{\overline{T}}^{\prime }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====The Standard Deviation====&lt;br /&gt;
The standard deviation of the Eyring-lognormal model (standard deviation of the times-to-failure), &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=\sqrt{\left( {{e}^{2\bar{{T}&#039;}+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)}=\sqrt{\left( {{e}^{2\left( -\ln (V)-A+\tfrac{B}{V} \right)+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The standard deviation of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\sqrt{\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
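The two pairs of conversions above (linear-space moments from log-space moments and back) are exact inverses of one another, which the following sketch confirms with a round trip; the starting values are hypothetical:

```python
import math

def log_moments_from_linear(mean, sd):
    """(mu, sigma) of ln T from the mean and standard deviation of T."""
    var_ratio = (sd / mean) ** 2 + 1.0          # sigma_T^2 / T-bar^2 + 1
    sigma = math.sqrt(math.log(var_ratio))
    mu = math.log(mean) - 0.5 * math.log(var_ratio)
    return mu, sigma

def linear_moments_from_log(mu, sigma):
    """Mean and standard deviation of T from (mu, sigma) of ln T."""
    mean = math.exp(mu + 0.5 * sigma ** 2)
    sd = math.sqrt(math.exp(2.0 * mu + sigma ** 2) * (math.exp(sigma ** 2) - 1.0))
    return mean, sd
```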
====The Mode====&lt;br /&gt;
The mode of the Eyring-lognormal model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}={{e}^{{{\overline{T}}^{\prime }}-\sigma _{{{T}&#039;}}^{2}}}={{e}^{-\ln (V)-A+\tfrac{B}{V}-\sigma _{{{T}&#039;}}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Eyring-Lognormal Reliability Function====&lt;br /&gt;
&lt;br /&gt;
The reliability for a mission of time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, starting at age 0, for the Eyring-lognormal model is determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=\int_{T}^{\infty }f(t,V)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=\int_{{{T}^{^{\prime }}}}^{\infty }\frac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t+\ln (V)+A-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no closed-form solution for the lognormal reliability function; solutions can be obtained from standard normal tables. Since the application solves for the reliability automatically, manual solution methods are not discussed here.&lt;br /&gt;
&lt;br /&gt;
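In place of standard normal tables, the standard normal cdf available in Python's standard library gives the same result; parameter values below are hypothetical:

```python
import math
from statistics import NormalDist

def eyring_lognormal_reliability(T, V, A, B, sigma):
    """R(T, V) = 1 - Phi((ln T + ln V + A - B/V) / sigma), using the standard
    normal cdf instead of tables."""
    z = (math.log(T) + math.log(V) + A - B / V) / sigma
    return 1.0 - NormalDist().cdf(z)
```

At the median life T = (1/V)·e^(-(A-B/V)), the standardized value z is zero and the reliability is exactly 0.5.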
====Reliable Life====&lt;br /&gt;
For the Eyring-lognormal model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;T_{R}^{\prime }=-\ln (V)-A+\frac{B}{V}+z\cdot {{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z={{\Phi }^{-1}}\left[ F\left( T_{R}^{\prime },V \right) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;,V)}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;{T}&#039;=\ln (T)\,\!&amp;lt;/math&amp;gt;, the reliable life, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}={{e}^{T_{R}^{\prime }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
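The steps above (find z from the unreliability, form T_R', exponentiate) can be sketched and round-tripped against the reliability function as follows; parameter values are hypothetical:

```python
import math
from statistics import NormalDist

def eyring_lognormal_reliable_life(R_goal, V, A, B, sigma):
    """t_R = exp(mu + z * sigma), with mu = -ln(V) - A + B/V and
    z = Phi^{-1}(1 - R_goal), i.e., the inverse cdf at the unreliability."""
    mu = -math.log(V) - A + B / V
    z = NormalDist().inv_cdf(1.0 - R_goal)
    return math.exp(mu + z * sigma)

def eyring_lognormal_reliability(T, V, A, B, sigma):
    """R(T, V) = 1 - Phi((ln T + ln V + A - B/V) / sigma)."""
    z = (math.log(T) + math.log(V) + A - B / V) / sigma
    return 1.0 - NormalDist().cdf(z)
```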
====Eyring-Lognormal Failure Rate====&lt;br /&gt;
The Eyring-lognormal failure rate is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (T,V)=\frac{f(T,V)}{R(T,V)}=\frac{\tfrac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;+\ln (V)+A-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}}{\int_{{{T}&#039;}}^{\infty }\tfrac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;+\ln (V)+A-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
====Maximum Likelihood Estimation Method====&lt;br /&gt;
The complete Eyring-lognormal log-likelihood function is composed of three summation portions (exact times-to-failure, suspensions and interval data):&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\ln (L)=\Lambda = &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{1}{{{\sigma }_{{{T}&#039;}}}{{T}_{i}}}\phi \left( \frac{\ln \left( {{T}_{i}} \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right]+\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\ln \left[ 1-\Phi \left( \frac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right]+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Li}^{\prime \prime }=\frac{\ln T_{Li}^{\prime \prime }+\ln {{V}_{i}}+A-\tfrac{B}{{{V}_{i}}}}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Ri}^{\prime \prime }=\frac{\ln T_{Ri}^{\prime \prime }+\ln {{V}_{i}}+A-\tfrac{B}{{{V}_{i}}}}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; is the standard deviation of the natural logarithm of the times-to-failure (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is the Eyring parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the second Eyring parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
The solution (parameter estimates) will be found by solving for &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial A}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \frac{\partial \Lambda }{\partial A}= &amp;amp; -\frac{1}{\sigma _{{{T}&#039;}}^{2}}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left( \ln ({{T}_{i}})+\ln ({{V}_{i}})+A-\frac{B}{{{V}_{i}}} \right)-\frac{1}{{{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{\phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\frac{\phi (z_{Ri}^{\prime \prime })-\phi (z_{Li}^{\prime \prime })}{{{\sigma }_{{{T}&#039;}}}\left( \Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }) \right)}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \frac{\partial \Lambda }{\partial B}= &amp;amp; \frac{1}{\sigma _{{{T}&#039;}}^{2}}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\frac{1}{{{V}_{i}}}\left( \ln ({{T}_{i}})+\ln ({{V}_{i}})+A-\frac{B}{{{V}_{i}}} \right)+\frac{1}{{{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{1}{{{V}_{i}}}\frac{\phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}-\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\frac{\phi (z_{Ri}^{\prime \prime })-\phi (z_{Li}^{\prime \prime })}{{{\sigma }_{{{T}&#039;}}}{{V}_{i}}\left( \Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }) \right)} \\ &lt;br /&gt;
  \frac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}= &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left( \frac{{{\left( \ln ({{T}_{i}})+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}} \right)}^{2}}}{\sigma _{{{T}&#039;}}^{3}}-\frac{1}{{{\sigma }_{{{T}&#039;}}}} \right)+\frac{1}{{{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{\left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)\phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}-\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\frac{z_{Ri}^{\prime \prime }\phi (z_{Ri}^{\prime \prime })-z_{Li}^{\prime \prime }\phi (z_{Li}^{\prime \prime })}{{{\sigma }_{{{T}&#039;}}}\left( \Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }) \right)}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\phi \left( x \right)=\frac{1}{\sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( x \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generalized Eyring Relationship=&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Generalized_Eyring_Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{:Generalized Eyring Relationship}}&lt;br /&gt;
&lt;br /&gt;
=Eyring Confidence Bounds=&lt;br /&gt;
==Approximate Confidence Bounds for the Eyring-Exponential==&lt;br /&gt;
===Confidence Bounds on Mean Life===&lt;br /&gt;
The mean life for the Eyring relationship is given by setting &amp;lt;math&amp;gt;m=L(V)\,\!&amp;lt;/math&amp;gt;. The upper &amp;lt;math&amp;gt;({{m}_{U}})\,\!&amp;lt;/math&amp;gt; and lower &amp;lt;math&amp;gt;({{m}_{L}})\,\!&amp;lt;/math&amp;gt; bounds on the mean life (ML estimate of the mean life) are estimated by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{U}}=\widehat{m}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{L}}=\widehat{m}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is defined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt; is the confidence level, then &amp;lt;math&amp;gt;\alpha =\tfrac{1-\delta }{2}\,\!&amp;lt;/math&amp;gt; for the two-sided bounds, and &amp;lt;math&amp;gt;\alpha =1-\delta \,\!&amp;lt;/math&amp;gt; for the one-sided bounds. The variance of &amp;lt;math&amp;gt;\widehat{m}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
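As a quick numerical sketch (not part of the original text), &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is just the standard normal quantile at &amp;lt;math&amp;gt;1-\alpha \,\!&amp;lt;/math&amp;gt;; the function name below is illustrative and uses only the Python standard library.&lt;br /&gt;

```python
from statistics import NormalDist  # standard library, Python 3.8+

def k_alpha(delta, two_sided=True):
    """Standard normal quantile K_alpha defined by alpha = 1 - Phi(K_alpha).

    delta is the confidence level; alpha = (1 - delta)/2 for two-sided
    bounds and alpha = 1 - delta for one-sided bounds.
    """
    alpha = (1.0 - delta) / 2.0 if two_sided else 1.0 - delta
    return NormalDist().inv_cdf(1.0 - alpha)

# 90% two-sided confidence corresponds to alpha = 0.05
print(round(k_alpha(0.90), 4))  # 1.6449
```

Note that a 90% two-sided bound and a 95% one-sided bound share the same &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt;, since both give &amp;lt;math&amp;gt;\alpha =0.05\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;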
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{m})= &amp;amp; {{\left( \frac{\partial m}{\partial A} \right)}^{2}}Var(\widehat{A})+{{\left( \frac{\partial m}{\partial B} \right)}^{2}}Var(\widehat{B}) +2\left( \frac{\partial m}{\partial A} \right)\left( \frac{\partial m}{\partial B} \right)Cov(\widehat{A},\widehat{B})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Var(\widehat{m})=\frac{1}{{{V}^{2}}}{{e}^{-2\left( \widehat{A}-\tfrac{\widehat{B}}{V} \right)}}\left[ Var(\widehat{A})+\frac{1}{{{V}^{2}}}Var(\widehat{B})-\frac{2}{V}Cov(\widehat{A},\widehat{B}) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariance of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{A}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{B})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{A}) &amp;amp; Cov(\widehat{A},\widehat{B})  \\&lt;br /&gt;
   Cov(\widehat{B},\widehat{A}) &amp;amp; Var(\widehat{B})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{A}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial B}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
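The whole chain above (invert the Fisher matrix, propagate the variances to &amp;lt;math&amp;gt;\widehat{m}\,\!&amp;lt;/math&amp;gt; by the delta method, then form the bounds) can be sketched numerically. All inputs below (&amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; the stress, and the Fisher matrix entries) are hypothetical values chosen only for illustration.&lt;br /&gt;

```python
import math
from statistics import NormalDist

# Hypothetical inputs for illustration only: parameter estimates, a use
# stress V (absolute units), and a 2x2 local Fisher information matrix.
A_hat, B_hat, V = -10.0, 1500.0, 300.0
fisher = [[40.0, 0.09], [0.09, 0.0004]]  # [[-d2L/dA2, -d2L/dAdB], ...]

# Invert the 2x2 Fisher matrix to get variances and the covariance.
det = fisher[0][0] * fisher[1][1] - fisher[0][1] * fisher[1][0]
var_A = fisher[1][1] / det
var_B = fisher[0][0] / det
cov_AB = -fisher[0][1] / det

# Mean life and its delta-method variance:
# m = (1/V) exp(-(A - B/V)), so dm/dA = -m and dm/dB = m/V.
m_hat = (1.0 / V) * math.exp(-(A_hat - B_hat / V))
var_m = m_hat ** 2 * (var_A + var_B / V ** 2 - 2.0 * cov_AB / V)

# 90% two-sided bounds on the mean life.
K = NormalDist().inv_cdf(0.95)
m_U = m_hat * math.exp(K * math.sqrt(var_m) / m_hat)
m_L = m_hat * math.exp(-K * math.sqrt(var_m) / m_hat)
```

Because the bounds are multiplicative in &amp;lt;math&amp;gt;\widehat{m}\,\!&amp;lt;/math&amp;gt;, they satisfy &amp;lt;math&amp;gt;{{m}_{U}}\cdot {{m}_{L}}={{\widehat{m}}^{2}}\,\!&amp;lt;/math&amp;gt; and both stay positive.&lt;br /&gt;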
&lt;br /&gt;
===Confidence Bounds on Reliability===&lt;br /&gt;
The bounds on reliability at a given time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, are estimated by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{U}}}}} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{L}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time (ML estimate of time) for a given reliability are estimated by first solving the reliability function with respect to time:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{T}=-\widehat{m}\cdot \ln (R)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding confidence bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; -{{m}_{U}}\cdot \ln (R) \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; -{{m}_{L}}\cdot \ln (R)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Approximate Confidence Bounds for the Eyring-Weibull==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
From the asymptotically normal property of the maximum likelihood estimators, and since &amp;lt;math&amp;gt;\widehat{\beta }\,\!&amp;lt;/math&amp;gt; is a positive parameter, &amp;lt;math&amp;gt;\ln (\widehat{\beta })\,\!&amp;lt;/math&amp;gt; can then be treated as normally distributed. After performing this transformation, the bounds on the parameters are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\beta }_{U}}= &amp;amp; \widehat{\beta }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{\beta }_{L}}= &amp;amp; \widehat{\beta }\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
also:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{A}_{U}}= &amp;amp; \widehat{A}+{{K}_{\alpha }}\sqrt{Var(\widehat{A})} \\ &lt;br /&gt;
 &amp;amp; {{A}_{L}}= &amp;amp; \widehat{A}-{{K}_{\alpha }}\sqrt{Var(\widehat{A})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{B}_{U}}= &amp;amp; \widehat{B}+{{K}_{\alpha }}\sqrt{Var(\widehat{B})} \\ &lt;br /&gt;
 &amp;amp; {{B}_{L}}= &amp;amp; \widehat{B}-{{K}_{\alpha }}\sqrt{Var(\widehat{B})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; are estimated from the Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{\beta },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{\beta }) &amp;amp; Cov(\widehat{\beta },\widehat{A}) &amp;amp; Cov(\widehat{\beta },\widehat{B})  \\&lt;br /&gt;
   Cov(\widehat{A},\widehat{\beta }) &amp;amp; Var(\widehat{A}) &amp;amp; Cov(\widehat{A},\widehat{B})  \\&lt;br /&gt;
   Cov(\widehat{B},\widehat{\beta }) &amp;amp; Cov(\widehat{B},\widehat{A}) &amp;amp; Var(\widehat{B})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\beta }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial B}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{A}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial B}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Reliability===&lt;br /&gt;
The reliability function for the Eyring-Weibull model (ML estimate) is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{\left( T\cdot V\cdot {{e}^{\left( \widehat{A}-\tfrac{\widehat{B}}{V} \right)}} \right)}^{\widehat{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{e}^{\ln \left[ {{\left( T\cdot V\cdot {{e}^{\left( \widehat{A}-\tfrac{\widehat{B}}{V} \right)}} \right)}^{\widehat{\beta }}} \right]}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\ln \left[ {{\left( T\cdot V\cdot {{e}^{\left( \widehat{A}-\tfrac{\widehat{B}}{V} \right)}} \right)}^{\widehat{\beta }}} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\widehat{\beta }\left[ \ln (T)+\ln (V)+\widehat{A}-\frac{\widehat{B}}{V} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{e}^{\widehat{u}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to find the upper and lower bounds on &amp;lt;math&amp;gt;\widehat{u}\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial A} \right)}^{2}}Var(\widehat{A}) +{{\left( \frac{\partial \widehat{u}}{\partial B} \right)}^{2}}Var(\widehat{B}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial A} \right)Cov(\widehat{\beta },\widehat{A}) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{\beta },\widehat{B}) +2\left( \frac{\partial \widehat{u}}{\partial A} \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{A},\widehat{B})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\widehat{u})= &amp;amp; {{\left( \frac{\widehat{u}}{\widehat{\beta }} \right)}^{2}}Var(\widehat{\beta })+{{\widehat{\beta }}^{2}}Var(\widehat{A}) +{{\left( \frac{\widehat{\beta }}{V} \right)}^{2}}Var(\widehat{B}) +2\widehat{u}\cdot Cov(\widehat{\beta },\widehat{A})-\frac{2\widehat{u}}{V}Cov(\widehat{\beta },\widehat{B}) -\frac{2{{\widehat{\beta }}^{2}}}{V}Cov(\widehat{A},\widehat{B})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{L}} \right)}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{U}} \right)}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
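The &amp;lt;math&amp;gt;\widehat{u}\,\!&amp;lt;/math&amp;gt; transformation above can be exercised numerically. The values below (&amp;lt;math&amp;gt;\widehat{\beta },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Var(\widehat{u})\,\!&amp;lt;/math&amp;gt;) are hypothetical; in practice &amp;lt;math&amp;gt;Var(\widehat{u})\,\!&amp;lt;/math&amp;gt; comes from the variance/covariance matrix as shown above.&lt;br /&gt;

```python
import math
from statistics import NormalDist

# Hypothetical estimates for illustration only.
beta_hat, A_hat, B_hat = 1.5, -9.0, 1200.0
T, V = 500.0, 350.0   # time and stress at which reliability is bounded
var_u = 0.04          # Var(u_hat), assumed precomputed from the Fisher matrix

# u = beta * (ln T + ln V + A - B/V)
u_hat = beta_hat * (math.log(T) + math.log(V) + A_hat - B_hat / V)
K = NormalDist().inv_cdf(0.95)       # 90% two-sided bounds
u_U = u_hat + K * math.sqrt(var_u)
u_L = u_hat - K * math.sqrt(var_u)

# R = exp(-exp(u)); a lower u gives the upper bound on reliability.
R_hat = math.exp(-math.exp(u_hat))
R_U = math.exp(-math.exp(u_L))
R_L = math.exp(-math.exp(u_U))
```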
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time (ML estimate of time) for a given reliability are estimated by first solving the reliability function with respect to time:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \ln (R)&amp;amp;=\  -{{\left( \widehat{T}\cdot V\cdot {{e}^{\left( \widehat{A}-\tfrac{\widehat{B}}{V} \right)}} \right)}^{\widehat{\beta }}} \\ &lt;br /&gt;
  \ln (-\ln (R))&amp;amp;=\  \widehat{\beta }\left( \ln \widehat{T}+\ln V+\widehat{A}-\frac{\widehat{B}}{V} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\frac{1}{\widehat{\beta }}\ln (-\ln (R))-\ln V-\widehat{A}+\frac{\widehat{B}}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\widehat{u}=\ln (\widehat{T})\,\!&amp;lt;/math&amp;gt;. The upper and lower bounds on &amp;lt;math&amp;gt;\widehat{u}\,\!&amp;lt;/math&amp;gt; are then estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial A} \right)}^{2}}Var(\widehat{A}) +{{\left( \frac{\partial \widehat{u}}{\partial B} \right)}^{2}}Var(\widehat{B}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial A} \right)Cov(\widehat{\beta },\widehat{A}) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{\beta },\widehat{B}) +2\left( \frac{\partial \widehat{u}}{\partial A} \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{A},\widehat{B})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{u})= \frac{1}{{{\widehat{\beta }}^{4}}}{{\left[ \ln (-\ln (R)) \right]}^{2}}Var(\widehat{\beta }) +Var(\widehat{A})+\frac{1}{{{V}^{2}}}Var(\widehat{B}) +\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}}Cov(\widehat{\beta },\widehat{A})-\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}V}Cov(\widehat{\beta },\widehat{B}) -\frac{2}{V}Cov(\widehat{A},\widehat{B})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on time are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{{{u}_{U}}}} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{{{u}_{L}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Approximate Confidence Bounds for the Eyring-Lognormal==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
The lower and upper bounds on &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{A}_{U}}= &amp;amp; \widehat{A}+{{K}_{\alpha }}\sqrt{Var(\widehat{A})}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{A}_{L}}= &amp;amp; \widehat{A}-{{K}_{\alpha }}\sqrt{Var(\widehat{A})}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{B}_{U}}= &amp;amp; \widehat{B}+{{K}_{\alpha }}\sqrt{Var(\widehat{B})}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{B}_{L}}= &amp;amp; \widehat{B}-{{K}_{\alpha }}\sqrt{Var(\widehat{B})}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since the standard deviation, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}},\,\!&amp;lt;/math&amp;gt; is a positive parameter, &amp;lt;math&amp;gt;\ln ({{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; is treated as normally distributed, and the bounds are estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{\sigma }_{U}} &amp;amp;=\  {{\widehat{\sigma }}_{{{T}&#039;}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}\text{ (Upper bound)} \\ &lt;br /&gt;
 {{\sigma }_{L}} &amp;amp;=\  \frac{{{\widehat{\sigma }}_{{{T}&#039;}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;A,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left( \begin{matrix}&lt;br /&gt;
   Var\left( {{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{A} \right) &amp;amp; Var\left( \widehat{A} \right) &amp;amp; Cov\left( \widehat{A},\widehat{B} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{B} \right) &amp;amp; Cov\left( \widehat{B},\widehat{A} \right) &amp;amp; Var\left( \widehat{B} \right)  \\&lt;br /&gt;
\end{matrix} \right)={{[F]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F=\left( \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma _{{{T}&#039;}}^{2}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial B}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{A}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial B}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bounds on Reliability===&lt;br /&gt;
The reliability of the lognormal distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({T}&#039;,V;A,B,{{\sigma }_{{{T}&#039;}}})=\int_{{{T}&#039;}}^{\infty }\frac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t+\ln (V)+\widehat{A}-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\widehat{z}(t,V;A,B,{{\sigma }_{{{T}&#039;}}})=\tfrac{t+\ln (V)+\widehat{A}-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}},\,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;\tfrac{d\widehat{z}}{dt}=\tfrac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}}.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;t={T}&#039;\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{z}=\tfrac{{T}&#039;+\ln (V)+\widehat{A}-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;, and for &amp;lt;math&amp;gt;t=\infty ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{z}=\infty .\,\!&amp;lt;/math&amp;gt; The above equation then becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(\widehat{z})=\int_{\widehat{z}({T}&#039;,V)}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The bounds on &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{z}_{U}}= &amp;amp; \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ &lt;br /&gt;
 &amp;amp; {{z}_{L}}= &amp;amp; \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\widehat{z})= &amp;amp; \left( \frac{\partial \widehat{z}}{\partial A} \right)_{\widehat{A}}^{2}Var(\widehat{A})+\left( \frac{\partial \widehat{z}}{\partial B} \right)_{\widehat{B}}^{2}Var(\widehat{B})+\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)_{{{\widehat{\sigma }}_{{{T}&#039;}}}}^{2}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2{{\left( \frac{\partial \widehat{z}}{\partial A} \right)}_{\widehat{A}}}{{\left( \frac{\partial \widehat{z}}{\partial B} \right)}_{\widehat{B}}}Cov\left( \widehat{A},\widehat{B} \right) \\ &lt;br /&gt;
 &amp;amp; +2{{\left( \frac{\partial \widehat{z}}{\partial A} \right)}_{\widehat{A}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial B} \right)}_{\widehat{B}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{z})=  \frac{1}{\widehat{\sigma }_{{{T}&#039;}}^{2}}[Var(\widehat{A})+\frac{1}{{{V}^{2}}}Var(\widehat{B})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) -\frac{2}{V}Cov\left( \widehat{A},\widehat{B} \right)-2\widehat{z}Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)+\frac{2\widehat{z}}{V}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
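Numerically, the reliability bounds above are just the standard normal survival function evaluated at the bounded &amp;lt;math&amp;gt;\widehat{z}\,\!&amp;lt;/math&amp;gt;. The inputs below (&amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Var(\widehat{z})\,\!&amp;lt;/math&amp;gt;) are hypothetical illustration values; in practice &amp;lt;math&amp;gt;Var(\widehat{z})\,\!&amp;lt;/math&amp;gt; comes from the local Fisher matrix as shown above.&lt;br /&gt;

```python
import math
from statistics import NormalDist

# Hypothetical estimates for illustration only.
A_hat, B_hat, sigma_hat = -10.0, 1200.0, 0.5
T, V = 2000.0, 320.0
var_z = 0.09   # Var(z_hat), assumed precomputed

# z = (ln T + ln V + A - B/V) / sigma_T'
z_hat = (math.log(T) + math.log(V) + A_hat - B_hat / V) / sigma_hat
K = NormalDist().inv_cdf(0.95)       # 90% two-sided bounds
z_U = z_hat + K * math.sqrt(var_z)
z_L = z_hat - K * math.sqrt(var_z)

def sf(z):
    """Standard normal survival function, i.e., R(z) = 1 - Phi(z)."""
    return 1.0 - NormalDist().cdf(z)

R_hat = sf(z_hat)
R_U = sf(z_L)   # lower z gives the upper bound on reliability
R_L = sf(z_U)
```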
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds around time for a given lognormal percentile (unreliability) are estimated by first solving the reliability equation with respect to time as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{T}&#039;(V;\widehat{A},\widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}})=-\ln (V)-\widehat{A}+\frac{\widehat{B}}{V}+z\cdot {{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {T}&#039;(V;\widehat{A},\widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}}) &amp;amp;=\  \ln (T) \\ &lt;br /&gt;
  z &amp;amp;=\  {{\Phi }^{-1}}\left[ F({T}&#039;) \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;)}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to calculate the variance of &amp;lt;math&amp;gt;{T}&#039;(V;\widehat{A},\widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}}):\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var({T}&#039;)= &amp;amp; {{\left( \frac{\partial {T}&#039;}{\partial A} \right)}^{2}}Var(\widehat{A})+{{\left( \frac{\partial {T}&#039;}{\partial B} \right)}^{2}}Var(\widehat{B})+{{\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2\left( \frac{\partial {T}&#039;}{\partial A} \right)\left( \frac{\partial {T}&#039;}{\partial B} \right)Cov\left( \widehat{A},\widehat{B} \right) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial {T}&#039;}{\partial A} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2\left( \frac{\partial {T}&#039;}{\partial B} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var({T}&#039;)= Var(\widehat{A})+\frac{1}{{{V}^{2}}}Var(\widehat{B})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) -\frac{2}{V}Cov\left( \widehat{A},\widehat{B} \right) -2\widehat{z}Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +\frac{2\widehat{z}}{V}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; T_{U}^{\prime }= &amp;amp; \ln {{T}_{U}}={T}&#039;+{{K}_{\alpha }}\sqrt{Var({T}&#039;)} \\ &lt;br /&gt;
 &amp;amp; T_{L}^{\prime }= &amp;amp; \ln {{T}_{L}}={T}&#039;-{{K}_{\alpha }}\sqrt{Var({T}&#039;)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for &amp;lt;math&amp;gt;{{T}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{T}_{L}}\,\!&amp;lt;/math&amp;gt; yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{T_{U}^{\prime }}}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{T_{L}^{\prime }}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Eyring_Relationship&amp;diff=64927</id>
		<title>Eyring Relationship</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Eyring_Relationship&amp;diff=64927"/>
		<updated>2017-02-08T21:06:20Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Conditional Reliability */ changed R(t,T,V) to R((t|T),V)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK|5}}&lt;br /&gt;
&lt;br /&gt;
The Eyring relationship was formulated from quantum mechanics principles, as discussed in Glasstone et al. [[Appendix_E:_References|[9]]], and is most often used when thermal stress (temperature) is the acceleration variable. However, the Eyring relationship is also often used for stress variables other than temperature, such as humidity. The relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V)=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; represents a quantifiable life measure, such as mean life, characteristic life, median life, &amp;lt;math&amp;gt;B(x)\,\!&amp;lt;/math&amp;gt; life, etc.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; represents the stress level (&#039;&#039;&#039;temperature values are in absolute units: kelvin or degrees Rankine&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is one of the model parameters to be determined.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is another model parameter to be determined.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA7.1.png|center|400px|Graphical look at the Eyring relationship (linear scale), at different life characteristics and with a Weibull life distribution.]]&lt;br /&gt;
&lt;br /&gt;
The Eyring relationship is similar to the Arrhenius relationship. This similarity is more apparent if it is rewritten in the following way: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   L(V)=\ &amp;amp; \frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}} =\ &amp;amp; \frac{{{e}^{-A}}}{V}{{e}^{\tfrac{B}{V}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V)=\frac{1}{V}Const.\cdot {{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Arrhenius relationship is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V)=C\cdot {{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Comparing the above equation to the Arrhenius relationship, it can be seen that the only difference between the two relationships is the &amp;lt;math&amp;gt;\tfrac{1}{V}\,\!&amp;lt;/math&amp;gt; term. In general, both relationships yield very similar results. Like the Arrhenius relationship, the Eyring relationship is plotted on log-reciprocal paper.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA7.2.png|center|400px|Eyring relationship plotted on Arrhenius paper.]]&lt;br /&gt;
&lt;br /&gt;
===Acceleration Factor===&lt;br /&gt;
For the Eyring model the acceleration factor is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{A}_{F}}=\frac{{{L}_{USE}}}{{{L}_{Accelerated}}}=\frac{\tfrac{1}{{{V}_{u}}}\text{ }{{e}^{-\left( A-\tfrac{B}{{{V}_{u}}} \right)}}}{\tfrac{1}{{{V}_{A}}}\text{ }{{e}^{-\left( A-\tfrac{B}{{{V}_{A}}} \right)}}}=\frac{\text{ }{{e}^{\tfrac{B}{{{V}_{u}}}}}}{\text{ }{{e}^{\tfrac{B}{{{V}_{A}}}}}}=\frac{{{V}_{A}}}{{{V}_{u}}}{{e}^{B\left( \tfrac{1}{{{V}_{u}}}-\tfrac{1}{{{V}_{A}}} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
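The acceleration factor above can be evaluated directly; note that the parameter &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; cancels in the ratio. As a minimal numeric sketch (the values of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and the two temperatures below are illustrative assumptions, not data from this chapter):

```python
import math

def eyring_acceleration_factor(B: float, V_use: float, V_accel: float) -> float:
    """Eyring acceleration factor A_F = (V_A / V_u) * exp(B * (1/V_u - 1/V_A)).

    Stress levels (e.g., temperature) must be in absolute units (kelvin or Rankine).
    The parameter A cancels in the ratio L_use / L_accelerated, so only B is needed.
    """
    return (V_accel / V_use) * math.exp(B * (1.0 / V_use - 1.0 / V_accel))

# Illustrative values: B = 4000 K, use level 323 K, accelerated level 373 K.
af = eyring_acceleration_factor(B=4000.0, V_use=323.0, V_accel=373.0)
```

Because the accelerated stress is higher than the use stress, the factor is greater than 1, and it equals the ratio of the two Eyring life values for any choice of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.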
=Eyring-Exponential=&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the 1-parameter exponential distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\lambda \cdot {{e}^{-\lambda \cdot t}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the 1-parameter exponential distribution (presented in detail [[Distributions Used in Accelerated Testing#The Exponential Distribution|here]]), the failure rate is the reciprocal of the mean life, &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda =\frac{1}{m}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
thus:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\frac{1}{m}\cdot {{e}^{-\tfrac{t}{m}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Eyring-exponential model &#039;&#039;pdf&#039;&#039; can then be obtained by setting &amp;lt;math&amp;gt;m=L(V)\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;m=L(V)=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and substituting for &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; in the exponential &#039;&#039;pdf&#039;&#039; equation:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V)=V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}{{e}^{-V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}\cdot t}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
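In other words, the Eyring-exponential &#039;&#039;pdf&#039;&#039; is an ordinary exponential &#039;&#039;pdf&#039;&#039; whose failure rate is &amp;lt;math&amp;gt;\lambda =V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;. A small numeric sketch (the values of A, B and V are assumptions chosen only for illustration):

```python
import math

def eyring_exp_pdf(t: float, V: float, A: float, B: float) -> float:
    """Eyring-exponential pdf: f(t, V) = lam * exp(-lam * t),
    with failure rate lam = V * exp(A - B/V) = 1 / L(V)."""
    lam = V * math.exp(A - B / V)
    return lam * math.exp(-lam * t)
```

At &amp;lt;math&amp;gt;t=0\,\!&amp;lt;/math&amp;gt; the &#039;&#039;pdf&#039;&#039; equals the failure rate, and the mean life it implies is &amp;lt;math&amp;gt;m=\tfrac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;.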
==Eyring-Exponential Statistical Properties Summary==&lt;br /&gt;
====Mean or MTTF====&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T},\,\!&amp;lt;/math&amp;gt; or Mean Time To Failure (MTTF) for the Eyring-exponential is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \overline{T}= &amp;amp; \int_{0}^{\infty }t\cdot f(t,V)dt=\int_{0}^{\infty }t\cdot V{{e}^{\left( A-\tfrac{B}{V} \right)}}{{e}^{-tV{{e}^{\left( A-\tfrac{B}{V} \right)}}}}dt =\   \frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Median====&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
for the Eyring-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=0.693\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Mode====&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
for the Eyring-exponential model is &amp;lt;math&amp;gt;\tilde{T}=0.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Standard Deviation====&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, for the Eyring-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Eyring-Exponential Reliability Function====&lt;br /&gt;
The Eyring-exponential reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)={{e}^{-T\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function is the complement of the Eyring-exponential cumulative distribution function or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=1-Q(T,V)=1-\int_{0}^{T}f(T,V)dT\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=1-\int_{0}^{T}V{{e}^{\left( A-\tfrac{B}{V} \right)}}{{e}^{-T\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}}}dT={{e}^{-T\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Conditional Reliability====&lt;br /&gt;
The conditional reliability function for the Eyring-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V)=\frac{R(T+t,V)}{R(T,V)}=\frac{{{e}^{-\lambda (T+t)}}}{{{e}^{-\lambda T}}}={{e}^{-t\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
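Note that, as expected for an exponential model, the conditional reliability does not depend on the accumulated age &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; (the memoryless property). A sketch under assumed parameter values:

```python
import math

def eyring_exp_cond_reliability(t: float, T: float, V: float, A: float, B: float) -> float:
    """Conditional reliability R((t|T), V) = R(T + t, V) / R(T, V)
    for the Eyring-exponential model, with lam = V * exp(A - B/V)."""
    lam = V * math.exp(A - B / V)
    return math.exp(-lam * (T + t)) / math.exp(-lam * T)
```

For any two starting ages the result is the same, and equals the fresh-unit reliability &amp;lt;math&amp;gt;{{e}^{-\lambda t}}\,\!&amp;lt;/math&amp;gt;.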
====Reliable Life====&lt;br /&gt;
For the Eyring-exponential model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({{t}_{R}},V)={{e}^{-{{t}_{R}}\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln [R({{t}_{R}},V)]=-{{t}_{R}}\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=-\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\ln [R({{t}_{R}},V)]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
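The reliable-life expression can be checked by substituting the result back into the reliability function. A minimal sketch, with assumed parameter values:

```python
import math

def eyring_exp_reliable_life(R_goal: float, V: float, A: float, B: float) -> float:
    """Mission duration t_R at which reliability falls to R_goal (0 < R_goal < 1):
    t_R = -(1/V) * exp(-(A - B/V)) * ln(R_goal)."""
    return -(1.0 / V) * math.exp(-(A - B / V)) * math.log(R_goal)

def eyring_exp_reliability(t: float, V: float, A: float, B: float) -> float:
    """R(t, V) = exp(-t * V * exp(A - B/V))."""
    return math.exp(-t * V * math.exp(A - B / V))
```

Evaluating the reliability function at &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt; recovers the reliability goal.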
==Parameter Estimation==&lt;br /&gt;
====Maximum Likelihood Estimation Method====&lt;br /&gt;
The complete exponential log-likelihood function of the Eyring model is composed of three summation portions (exact times-to-failure, suspensions and intervals):&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ {{V}_{i}}\cdot {{e}^{\left( A-\tfrac{B}{{{V}_{i}}} \right)}}{{e}^{-{{V}_{i}}\cdot {{e}^{\left( A-\tfrac{B}{{{V}_{i}}} \right)}}\cdot {{T}_{i}}}} \right] -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\cdot {{V}_{i}}\cdot {{e}^{\left( A-\tfrac{B}{{{V}_{i}}} \right)}}\cdot T_{i}^{\prime }+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-T_{Li}^{\prime \prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-T_{Ri}^{\prime \prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is the Eyring parameter (unknown, the first of two parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the second Eyring parameter (unknown, the second of two parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;\widehat{A}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{B}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial A}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0\,\!&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial A}= &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left( 1-{{V}_{i}}\cdot {{e}^{\left( A-\tfrac{B}{{{V}_{i}}} \right)}}{{T}_{i}} \right)-\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{V}_{i}}\cdot {{e}^{\left( A-\tfrac{B}{{{V}_{i}}} \right)}}T_{i}^{\prime } \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\left( T_{Li}^{\prime \prime }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime }R_{Ri}^{\prime \prime } \right){{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}}}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial B}= &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left[ {{e}^{\left( A-\tfrac{B}{{{V}_{i}}} \right)}}{{T}_{i}}-\frac{1}{{{V}_{i}}} \right]+\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\cdot {{e}^{\left( A-\tfrac{B}{{{V}_{i}}} \right)}}T_{i}^{\prime } \overset{FI}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\left( T_{Li}^{\prime \prime }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime }R_{Ri}^{\prime \prime } \right){{e}^{A-\tfrac{B}{{{V}_{i}}}}}}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Eyring-Weibull=&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Eyring Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the 2-parameter Weibull distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scale parameter (or characteristic life) of the Weibull distribution is &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt;. The Eyring-Weibull model &#039;&#039;pdf&#039;&#039; can then be obtained by setting &amp;lt;math&amp;gt;\eta =L(V)\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\eta =L(V)=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta }=V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting for &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; into the Weibull &#039;&#039;pdf&#039;&#039; yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V)=\beta \cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}{{\left( t\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta -1}}{{e}^{-{{\left( t\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
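The substitution can be verified numerically: the Eyring-Weibull &#039;&#039;pdf&#039;&#039; is just the standard Weibull &#039;&#039;pdf&#039;&#039; evaluated with &amp;lt;math&amp;gt;\eta =\tfrac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;. A sketch with illustrative parameter values:

```python
import math

def eyring_weibull_pdf(t: float, V: float, A: float, B: float, beta: float) -> float:
    """Eyring-Weibull pdf, written in terms of u = t/eta = t * V * exp(A - B/V)."""
    rate = V * math.exp(A - B / V)   # this is 1/eta
    u = t * rate
    return beta * rate * u ** (beta - 1.0) * math.exp(-(u ** beta))
```

Comparing against the 2-parameter Weibull &#039;&#039;pdf&#039;&#039; with the corresponding &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; gives identical values.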
==Eyring-Weibull Statistical Properties Summary==&lt;br /&gt;
====Mean or MTTF====&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T}\,\!&amp;lt;/math&amp;gt;, or Mean Time To Failure (MTTF) for the Eyring-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\cdot \Gamma \left( \frac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Gamma \left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt; is the gamma function evaluated at the value of &amp;lt;math&amp;gt;\left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
====Median====&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T}\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
for the Eyring-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}{{\left( \ln 2 \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
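The median formula can be checked against the reliability function: by construction, reliability at the median is exactly 0.5. A sketch under assumed parameter values:

```python
import math

def eyring_weibull_median(V: float, A: float, B: float, beta: float) -> float:
    """Median life: eta * (ln 2)**(1/beta), with eta = (1/V) * exp(-(A - B/V))."""
    eta = (1.0 / V) * math.exp(-(A - B / V))
    return eta * math.log(2.0) ** (1.0 / beta)

def eyring_weibull_reliability(t: float, V: float, A: float, B: float, beta: float) -> float:
    """R(t, V) = exp(-(t * V * exp(A - B/V))**beta)."""
    return math.exp(-((t * V * math.exp(A - B / V)) ** beta))
```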
====Mode====&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
for the Eyring-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}{{\left( 1-\frac{1}{\beta } \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Standard Deviation====&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}},\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
for the Eyring-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\cdot \sqrt{\Gamma \left( \frac{2}{\beta }+1 \right)-{{\left( \Gamma \left( \frac{1}{\beta }+1 \right) \right)}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Eyring-Weibull Reliability Function====&lt;br /&gt;
The Eyring-Weibull reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)={{e}^{-{{\left( V\cdot T\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Conditional Reliability Function====&lt;br /&gt;
The Eyring-Weibull conditional reliability function at a specified stress level is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,t,V)=\frac{R(T+t,V)}{R(T,V)}=\frac{{{e}^{-{{\left( \left( T+t \right)\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta }}}}}{{{e}^{-{{\left( V\cdot T\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,t,V)={{e}^{-\left[ {{\left( \left( T+t \right)\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta }}-{{\left( V\cdot T\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta }} \right]}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Reliable Life====&lt;br /&gt;
For the Eyring-Weibull model, the reliable life, &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt;, of a unit for a specified reliability and starting the mission at age zero is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=\frac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}{{\left\{ -\ln \left[ R\left( {{T}_{R}},V \right) \right] \right\}}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Eyring-Weibull Failure Rate Function====&lt;br /&gt;
The Eyring-Weibull failure rate function, &amp;lt;math&amp;gt;\lambda (T)\,\!&amp;lt;/math&amp;gt;, is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda \left( T,V \right)=\frac{f\left( T,V \right)}{R\left( T,V \right)}=\beta \cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}{{\left( T\cdot V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}} \right)}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
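Since &amp;lt;math&amp;gt;\tfrac{1}{\eta }=V\cdot {{e}^{\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;, the hazard is the familiar Weibull form &amp;lt;math&amp;gt;\tfrac{\beta }{\eta }{{\left( \tfrac{T}{\eta } \right)}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;. The ratio &amp;lt;math&amp;gt;f/R\,\!&amp;lt;/math&amp;gt; can be verified numerically, as sketched below with assumed parameter values:

```python
import math

def _rate(V: float, A: float, B: float) -> float:
    # 1/eta for the Eyring-Weibull model
    return V * math.exp(A - B / V)

def eyring_weibull_pdf(t, V, A, B, beta):
    u = t * _rate(V, A, B)
    return beta * _rate(V, A, B) * u ** (beta - 1.0) * math.exp(-(u ** beta))

def eyring_weibull_reliability(t, V, A, B, beta):
    return math.exp(-((t * _rate(V, A, B)) ** beta))

def eyring_weibull_failure_rate(t, V, A, B, beta):
    """Hazard lambda(t, V) = f/R = beta * (1/eta) * (t/eta)**(beta - 1)."""
    u = t * _rate(V, A, B)
    return beta * _rate(V, A, B) * u ** (beta - 1.0)
```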
==Parameter Estimation==&lt;br /&gt;
====Maximum Likelihood Estimation Method====&lt;br /&gt;
The Eyring-Weibull log-likelihood function is composed of three summation portions (exact times-to-failure, suspensions and intervals):&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \beta \cdot {{V}_{i}}\cdot {{e}^{A-\tfrac{B}{{{V}_{i}}}}}{{\left( {{T}_{i}}{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta -1}}{{e}^{-{{\left( {{T}_{i}}{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}}} \right] -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( {{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}}T_{i}^{\prime } \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-{{\left( T_{Li}^{\prime \prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-{{\left( T_{Ri}^{\prime \prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is the Eyring parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the second Eyring parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \beta }=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial A}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
&amp;amp; \frac{\partial \Lambda }{\partial A}= &amp;amp; \beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}-\beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}{{\left( {{T}_{i}}{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }} -\beta \underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }{{\left( T_{i}^{\prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }} \overset{FI}{\mathop{-\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\beta V_{i}^{\beta }{{e}^{A\beta -\tfrac{B\beta }{{{V}_{i}}}}}\left[ {{(T_{Li}^{\prime \prime })}^{\beta }}R_{Li}^{\prime \prime }-{{(T_{Ri}^{\prime \prime })}^{\beta }}R_{Ri}^{\prime \prime } \right]}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial B}= &amp;amp; -\beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\frac{1}{{{V}_{i}}}+\beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\frac{1}{{{V}_{i}}}{{\left( {{T}_{i}}{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }} +\beta \underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }\frac{1}{{{V}_{i}}}{{\left( T_{i}^{\prime }{{V}_{i}}{{e}^{A-\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }} +\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\beta V_{i}^{(\beta -1)}{{e}^{A\beta -\tfrac{B\beta }{{{V}_{i}}}}}\left[ {{(T_{Li}^{\prime \prime })}^{\beta }}R_{Li}^{\prime \prime }-{{(T_{Ri}^{\prime \prime })}^{\beta }}R_{Ri}^{\prime \prime } \right]}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{\partial \Lambda}{\partial \beta}= &amp;amp; \frac{1}{\beta}\sum_{i=1}^{F_e} N_i+\sum_{i=1}^{F_e} N_i \ln\left(T_iV_i e^{A-\tfrac{B}{V_i}}\right)&lt;br /&gt;
-\sum_{i=1}^{F_e} N_i\left(T_iV_i e^{A-\tfrac{B}{V_i}}\right)^\beta \ln\left(T_iV_i e^{A-\tfrac{B}{V_i}}\right)\\&lt;br /&gt;
&amp;amp; -\sum_{i=1}^S N_i^&#039;\left(T_i^&#039;V_i e^{A-\tfrac{B}{V_i}}\right)^\beta \ln\left(T_i^&#039;V_i e^{A-\tfrac{B}{V_i}}\right)&lt;br /&gt;
-\sum_{i=1}^{FI} N_i^{&#039;&#039;}\frac{R_{Li}^{&#039;&#039;}\left(T_{Li}^{&#039;&#039;}V_i e^{A-\tfrac{B}{V_i}}\right)^{\beta }\ln\left(T_{Li}^{&#039;&#039;}V_i e^{A-\tfrac{B}{V_i}}\right)-R_{Ri}^{&#039;&#039;}\left(T_{Ri}^{&#039;&#039;}V_i e^{A-\tfrac{B}{V_i}}\right)^{\beta }\ln\left(T_{Ri}^{&#039;&#039;}V_i e^{A-\tfrac{B}{V_i}}\right)}{R_{Li}^{&#039;&#039;}-R_{Ri}^{&#039;&#039;}}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Eyring-Weibull Example===&lt;br /&gt;
{{:Eyring_Example}}&lt;br /&gt;
&lt;br /&gt;
=Eyring-Lognormal=&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the lognormal distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\overline{{{T}&#039;}}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{T}&#039;=\ln (T) &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
T =\text{times-to-failure}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and&lt;br /&gt;
*&amp;lt;math&amp;gt;\overline{{{T}&#039;}}=\,\!&amp;lt;/math&amp;gt; mean of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\,\!&amp;lt;/math&amp;gt; standard deviation of the natural logarithms of the times-to-failure. &lt;br /&gt;
&lt;br /&gt;
The Eyring-lognormal model can be obtained first by setting &amp;lt;math&amp;gt;\breve{T}=L(V)\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=L(V)=\frac{1}{V}{{e}^{-(A-\tfrac{B}{V})}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{e}^{{{\overline{T}}^{\prime }}}}=\frac{1}{V}{{e}^{-(A-\tfrac{B}{V})}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\overline{T}}^{\prime }}=-\ln (V)-A+\frac{B}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting this into the lognormal &#039;&#039;pdf&#039;&#039; yields the Eyring-lognormal model &#039;&#039;pdf&#039;&#039;:  &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T,V)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;+\ln (V)+A-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Eyring-Lognormal Statistical Properties Summary==&lt;br /&gt;
&lt;br /&gt;
====The Mean====&lt;br /&gt;
The mean life of the Eyring-lognormal model (mean of the times-to-failure), &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt;, is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
	   \bar{T}=\ {{e}^{\bar{{T}&#039;}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}} =\  {{e}^{-\ln (V)-A+\tfrac{B}{V}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}  &lt;br /&gt;
	\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
The mean of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\bar{T}}^{^{\prime }}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{T}}^{\prime }}=\ln \left( {\bar{T}} \right)-\frac{1}{2}\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====The Median====&lt;br /&gt;
The median of the Eyring-lognormal model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}={{e}^{{{\overline{T}}^{\prime }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====The Standard Deviation====&lt;br /&gt;
The standard deviation of the Eyring-lognormal model (standard deviation of the times-to-failure), &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\sigma }_{T}}= &amp;amp; \sqrt{\left( {{e}^{2\bar{{T}&#039;}+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)} =\  \sqrt{\left( {{e}^{2\left( -\ln (V)-A+\tfrac{B}{V} \right)+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The standard deviation of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\sqrt{\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====The Mode====&lt;br /&gt;
The mode of the Eyring-lognormal model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \tilde{T}= &amp;amp; {{e}^{{{\overline{T}}^{\prime }}-\sigma _{{{T}&#039;}}^{2}}} =\  {{e}^{-\ln (V)-A+\tfrac{B}{V}-\sigma _{{{T}&#039;}}^{2}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Eyring-Lognormal Reliability Function====&lt;br /&gt;
&lt;br /&gt;
The reliability for a mission of time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, starting at age 0, for the Eyring-lognormal model is determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=\int_{T}^{\infty }f(t,V)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=\int_{{{T}^{^{\prime }}}}^{\infty }\frac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t+\ln (V)+A-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no closed-form solution for the lognormal reliability function; solutions can be obtained using standard normal tables. Since the application automatically solves for the reliability, manual solution methods are not discussed here.&lt;br /&gt;
&lt;br /&gt;
====Reliable Life====&lt;br /&gt;
For the Eyring-lognormal model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;T_{R}^{\prime }=-\ln (V)-A+\frac{B}{V}+z\cdot {{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z={{\Phi }^{-1}}\left[ F\left( T_{R}^{\prime },V \right) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;,V)}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;{T}&#039;=\ln (T)\,\!&amp;lt;/math&amp;gt;, the reliable life, &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}={{e}^{T_{R}^{\prime }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
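The two steps above (solve for &amp;lt;math&amp;gt;T_{R}^{\prime }\,\!&amp;lt;/math&amp;gt;, then exponentiate) can be sketched with the standard library&#039;s inverse normal CDF. The parameter values in the check are illustrative assumptions:

```python
from math import exp, log
from statistics import NormalDist  # Python 3.8+: inverse standard normal CDF

def eyring_lognormal_reliable_life(R_goal: float, V: float, A: float, B: float,
                                   sigma_Tprime: float) -> float:
    """t_R = exp(T_R'), with T_R' = -ln(V) - A + B/V + z * sigma_T'
    and z = Phi^{-1}(1 - R_goal), the unreliability quantile."""
    z = NormalDist().inv_cdf(1.0 - R_goal)
    T_R_prime = -log(V) - A + B / V + z * sigma_Tprime
    return exp(T_R_prime)
```

For a 50% reliability goal, &amp;lt;math&amp;gt;z=0\,\!&amp;lt;/math&amp;gt; and the reliable life reduces to the median &amp;lt;math&amp;gt;\tfrac{1}{V}{{e}^{-\left( A-\tfrac{B}{V} \right)}}\,\!&amp;lt;/math&amp;gt;; higher reliability goals give shorter reliable lives.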
====Eyring-Lognormal Failure Rate====&lt;br /&gt;
The Eyring-lognormal failure rate is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (T,V)=\frac{f(T,V)}{R(T,V)}=\frac{\tfrac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;+\ln (V)+A-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}}{\int_{{{T}&#039;}}^{\infty }\tfrac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;+\ln (V)+A-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
====Maximum Likelihood Estimation Method====&lt;br /&gt;
The complete Eyring-lognormal log-likelihood function is composed of two summation portions:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{1}{{{\sigma }_{{{T}&#039;}}}{{T}_{i}}}\phi \left( \frac{\ln \left( {{T}_{i}} \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right] \text{ }+\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\ln \left[ 1-\Phi \left( \frac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right] +\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Li}^{\prime \prime }=\frac{\ln T_{Li}^{\prime \prime }+\ln {{V}_{i}}+A-\tfrac{B}{{{V}_{i}}}}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Ri}^{\prime \prime }=\frac{\ln T_{Ri}^{\prime \prime }+\ln {{V}_{i}}+A-\tfrac{B}{{{V}_{i}}}}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; is the standard deviation of the natural logarithm of the times-to-failure (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is the Eyring parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the second Eyring parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
The solution (parameter estimates) will be found by solving for &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial A}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial A}= &amp;amp; -\frac{1}{\sigma _{{{T}&#039;}}^{2}}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}(\ln ({{T}_{i}})+\ln ({{V}_{i}})+A-\frac{B}{{{V}_{i}}}) -\frac{1}{{{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{\phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)} \overset{FI}{\mathop{+\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\varphi (z_{Ri}^{\prime \prime })-\varphi (z_{Li}^{\prime \prime })}{\sigma _{T}^{\prime }(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial B}=  \frac{1}{\sigma _{{{T}&#039;}}^{2}}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\frac{1}{{{V}_{i}}}(\ln ({{T}_{i}})+\ln ({{V}_{i}})+A-\frac{B}{{{V}_{i}}}) +\frac{1}{{{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{1}{{{V}_{i}}}\frac{\phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)} \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\varphi (z_{Ri}^{\prime \prime })-\varphi (z_{Li}^{\prime \prime })}{\sigma _{T}^{\prime }{{V}_{i}}(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))} \\ &lt;br /&gt;
 &amp;amp; \frac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}= \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left( \frac{{{\left( \ln ({{T}_{i}})+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}} \right)}^{2}}}{\sigma _{{{T}&#039;}}^{3}}-\frac{1}{{{\sigma }_{{{T}&#039;}}}} \right) +\frac{1}{{{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{\left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)\phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)+\ln ({{V}_{i}})+A-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)} \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{z_{Ri}^{\prime \prime }\varphi (z_{Ri}^{\prime \prime })-z_{Li}^{\prime \prime }\varphi (z_{Li}^{\prime \prime })}{\sigma _{T}^{\prime }(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\phi \left( x \right)=\frac{1}{\sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( x \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
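Before handing the log-likelihood to a numerical optimizer, it helps to be able to evaluate it for candidate parameter values. The sketch below covers the exact-failure and suspension summations (the interval term follows the same pattern); the grouped data and parameter values are made up for illustration.&lt;br /&gt;
&lt;br /&gt;
```python
from math import log
from statistics import NormalDist

_std = NormalDist()

def eyring_lognormal_loglik(failures, suspensions, A, B, sigma_Tp):
    """Log-likelihood for grouped exact failures and right suspensions.

    failures, suspensions: iterables of (N_i, T_i, V_i) tuples.
    """
    def z(t, v):
        return (log(t) + log(v) + A - B / v) / sigma_Tp

    LL = 0.0
    for n, t, v in failures:        # exact times-to-failure term
        LL += n * log(_std.pdf(z(t, v)) / (sigma_Tp * t))
    for n, t, v in suspensions:     # right-censored (suspension) term
        LL += n * log(1.0 - _std.cdf(z(t, v)))
    return LL

# Hypothetical grouped data: (count, time, stress)
fail = [(3, 800.0, 350.0), (2, 1200.0, 350.0), (4, 300.0, 400.0)]
susp = [(5, 1500.0, 350.0)]
LL = eyring_lognormal_loglik(fail, susp, A=-12.0, B=1500.0, sigma_Tp=0.5)
```
&lt;br /&gt;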
=Generalized Eyring Relationship=&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Generalized_Eyring_Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{:Generalized Eyring Relationship}}&lt;br /&gt;
&lt;br /&gt;
=Eyring Confidence Bounds=&lt;br /&gt;
==Approximate Confidence Bounds for the Eyring-Exponential==&lt;br /&gt;
===Confidence Bounds on Mean Life===&lt;br /&gt;
The mean life for the Eyring relationship is given by setting &amp;lt;math&amp;gt;m=L(V)\,\!&amp;lt;/math&amp;gt;. The upper &amp;lt;math&amp;gt;({{m}_{U}})\,\!&amp;lt;/math&amp;gt; and lower &amp;lt;math&amp;gt;({{m}_{L}})\,\!&amp;lt;/math&amp;gt; bounds on the mean life (ML estimate of the mean life) are estimated by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{U}}=\widehat{m}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{m}_{L}}=\widehat{m}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is defined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt; is the confidence level, then &amp;lt;math&amp;gt;\alpha =\tfrac{1-\delta }{2}\,\!&amp;lt;/math&amp;gt; for the two-sided bounds, and &amp;lt;math&amp;gt;\alpha =1-\delta \,\!&amp;lt;/math&amp;gt; for the one-sided bounds. The variance of &amp;lt;math&amp;gt;\widehat{m}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{m})= &amp;amp; {{\left( \frac{\partial m}{\partial A} \right)}^{2}}Var(\widehat{A})+{{\left( \frac{\partial m}{\partial B} \right)}^{2}}Var(\widehat{B}) +2\left( \frac{\partial m}{\partial A} \right)\left( \frac{\partial m}{\partial B} \right)Cov(\widehat{A},\widehat{B})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Var(\widehat{m})=\frac{1}{{{V}^{2}}}{{e}^{-2\left( \widehat{A}-\tfrac{\widehat{B}}{V} \right)}}\left[ Var(\widehat{A})+\frac{1}{{{V}^{2}}}Var(\widehat{B})-\frac{2}{V}Cov(\widehat{A},\widehat{B}) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariance of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{A}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{B})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{A}) &amp;amp; Cov(\widehat{A},\widehat{B})  \\&lt;br /&gt;
   Cov(\widehat{B},\widehat{A}) &amp;amp; Var(\widehat{B})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{A}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial B}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
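The steps above (critical value, delta-method variance, log-transformed bounds) can be combined in a short sketch. The variances and covariance below stand in for Fisher-matrix output and, like the parameter estimates, are hypothetical.&lt;br /&gt;
&lt;br /&gt;
```python
from math import exp, sqrt
from statistics import NormalDist

def k_alpha(delta, two_sided=True):
    """Standard normal critical value defined by alpha = 1 - Phi(K_alpha)."""
    alpha = (1.0 - delta) / 2.0 if two_sided else 1.0 - delta
    return NormalDist().inv_cdf(1.0 - alpha)

def mean_life_bounds(A, B, V, varA, varB, covAB, K):
    """Delta-method bounds on the Eyring mean life m = (1/V) exp(-(A - B/V))."""
    m = (1.0 / V) * exp(-(A - B / V))
    dm_dA, dm_dB = -m, m / V                      # partials of m w.r.t. A and B
    var_m = dm_dA**2 * varA + dm_dB**2 * varB + 2.0 * dm_dA * dm_dB * covAB
    half = K * sqrt(var_m) / m
    return m * exp(-half), m * exp(half)          # (m_L, m_U)

# Hypothetical ML estimates and Fisher-matrix variances/covariance
K = k_alpha(0.90)                                 # two-sided 90% bounds
m_L, m_U = mean_life_bounds(A=-12.0, B=1500.0, V=350.0,
                            varA=0.04, varB=900.0, covAB=-4.0, K=K)
```
&lt;br /&gt;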
&lt;br /&gt;
===Confidence Bounds on Reliability===&lt;br /&gt;
The bounds on reliability at a given time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, are estimated by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{U}}}}} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{L}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time (ML estimate of time) for a given reliability are estimated by first solving the reliability function with respect to time:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{T}=-\widehat{m}\cdot \ln (R)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding confidence bounds are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; -{{m}_{U}}\cdot \ln (R) \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; -{{m}_{L}}\cdot \ln (R)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Approximate Confidence Bounds for the Eyring-Weibull==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
From the asymptotically normal property of the maximum likelihood estimators, and since &amp;lt;math&amp;gt;\widehat{\beta }\,\!&amp;lt;/math&amp;gt; is a positive parameter, &amp;lt;math&amp;gt;\ln (\widehat{\beta })\,\!&amp;lt;/math&amp;gt; can then be treated as normally distributed. After performing this transformation, the bounds on the parameters are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\beta }_{U}}= &amp;amp; \widehat{\beta }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{\beta }_{L}}= &amp;amp; \widehat{\beta }\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
also:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{A}_{U}}= &amp;amp; \widehat{A}+{{K}_{\alpha }}\sqrt{Var(\widehat{A})} \\ &lt;br /&gt;
 &amp;amp; {{A}_{L}}= &amp;amp; \widehat{A}-{{K}_{\alpha }}\sqrt{Var(\widehat{A})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{B}_{U}}= &amp;amp; \widehat{B}+{{K}_{\alpha }}\sqrt{Var(\widehat{B})} \\ &lt;br /&gt;
 &amp;amp; {{B}_{L}}= &amp;amp; \widehat{B}-{{K}_{\alpha }}\sqrt{Var(\widehat{B})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; are estimated from the Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{\beta },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{\beta }) &amp;amp; Cov(\widehat{\beta },\widehat{A}) &amp;amp; Cov(\widehat{\beta },\widehat{B})  \\&lt;br /&gt;
   Cov(\widehat{A},\widehat{\beta }) &amp;amp; Var(\widehat{A}) &amp;amp; Cov(\widehat{A},\widehat{B})  \\&lt;br /&gt;
   Cov(\widehat{B},\widehat{\beta }) &amp;amp; Cov(\widehat{B},\widehat{A}) &amp;amp; Var(\widehat{B})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\beta }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial B}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{A}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial B}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
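Numerically, the covariance matrix is obtained by inverting the local Fisher matrix. A minimal Gauss-Jordan inversion is sketched below; the 3x3 Fisher matrix shown for (beta, A, B) is hypothetical.&lt;br /&gt;
&lt;br /&gt;
```python
def invert(matrix):
    """Invert a small square matrix by Gauss-Jordan elimination with partial pivoting."""
    n = len(matrix)
    # Augment the matrix with the identity
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(matrix)]
    for col in range(n):
        # Choose the largest pivot in this column for numerical stability
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

# Hypothetical local Fisher matrix (negative second partials of Lambda)
F = [[50.0, 2.0, -0.5],
     [2.0, 25.0, -3.0],
     [-0.5, -3.0, 10.0]]
cov = invert(F)   # cov[0][0] = Var(beta_hat), cov[1][2] = Cov(A_hat, B_hat), ...
```
&lt;br /&gt;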
&lt;br /&gt;
===Confidence Bounds on Reliability===&lt;br /&gt;
The reliability function for the Eyring-Weibull model (ML estimate) is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{\left( T\cdot V\cdot {{e}^{\left( \widehat{A}-\tfrac{\widehat{B}}{V} \right)}} \right)}^{\widehat{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{e}^{\ln \left[ {{\left( T\cdot V\cdot {{e}^{\left( \widehat{A}-\tfrac{\widehat{B}}{V} \right)}} \right)}^{\widehat{\beta }}} \right]}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Setting:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\ln \left[ {{\left( T\cdot V\cdot {{e}^{\left( \widehat{A}-\tfrac{\widehat{B}}{V} \right)}} \right)}^{\widehat{\beta }}} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\widehat{\beta }\left[ \ln (T)+\ln (V)+\widehat{A}-\frac{\widehat{B}}{V} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{e}^{\widehat{u}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to find the upper and lower bounds on &amp;lt;math&amp;gt;\widehat{u}\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial A} \right)}^{2}}Var(\widehat{A}) +{{\left( \frac{\partial \widehat{u}}{\partial B} \right)}^{2}}Var(\widehat{B}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial A} \right)Cov(\widehat{\beta },\widehat{A}) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{\beta },\widehat{B}) +2\left( \frac{\partial \widehat{u}}{\partial A} \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{A},\widehat{B})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\widehat{u})= &amp;amp; {{\left( \frac{\widehat{u}}{\widehat{\beta }} \right)}^{2}}Var(\widehat{\beta })+{{\widehat{\beta }}^{2}}Var(\widehat{A}) +{{\left( \frac{\widehat{\beta }}{V} \right)}^{2}}Var(\widehat{B}) +2\widehat{u}\cdot Cov(\widehat{\beta },\widehat{A})-\frac{2\widehat{u}}{V}Cov(\widehat{\beta },\widehat{B}) -\frac{2{{\widehat{\beta }}^{2}}}{V}Cov(\widehat{A},\widehat{B})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{L}} \right)}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; {{e}^{-{{e}^{\left( {{u}_{U}} \right)}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
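The u-transform bounds above can be sketched as follows; the partial derivatives implement the variance expansion just given, and all estimates and covariances are hypothetical stand-ins for fitted values.&lt;br /&gt;
&lt;br /&gt;
```python
from math import exp, log, sqrt

def weibull_reliability_bounds(T, V, beta, A, B, cov, K_alpha):
    """Bounds on R(T, V) for the Eyring-Weibull model via u = ln(-ln R).

    cov: dict with keys 'bb', 'aa', 'BB', 'ba', 'bB', 'aB' for the
    variances/covariances of (beta, A, B).
    """
    u = beta * (log(T) + log(V) + A - B / V)
    du_dbeta, du_dA, du_dB = u / beta, beta, -beta / V   # partials of u
    var_u = (du_dbeta**2 * cov['bb'] + du_dA**2 * cov['aa'] + du_dB**2 * cov['BB']
             + 2.0 * du_dbeta * du_dA * cov['ba']
             + 2.0 * du_dbeta * du_dB * cov['bB']
             + 2.0 * du_dA * du_dB * cov['aB'])
    u_L = u - K_alpha * sqrt(var_u)
    u_U = u + K_alpha * sqrt(var_u)
    return exp(-exp(u_U)), exp(-exp(u_L))    # (R_L, R_U)

# Hypothetical estimates and covariances
cov = {'bb': 0.02, 'aa': 0.04, 'BB': 900.0, 'ba': 0.001, 'bB': -0.1, 'aB': -4.0}
R_L, R_U = weibull_reliability_bounds(T=500.0, V=350.0, beta=1.5,
                                      A=-8.0, B=1500.0, cov=cov, K_alpha=1.6449)
```
&lt;br /&gt;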
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds on time (ML estimate of time) for a given reliability are estimated by first solving the reliability function with respect to time:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \ln (R)&amp;amp;=\  -{{\left( \widehat{T}\cdot V\cdot {{e}^{\left( \widehat{A}-\tfrac{\widehat{B}}{V} \right)}} \right)}^{\widehat{\beta }}} \\ &lt;br /&gt;
  \ln (-\ln (R))&amp;amp;=\  \widehat{\beta }\left( \ln \widehat{T}+\ln V+\widehat{A}-\frac{\widehat{B}}{V} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\frac{1}{\widehat{\beta }}\ln (-\ln (R))-\ln V-\widehat{A}+\frac{\widehat{B}}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\widehat{u}=\ln (\widehat{T})\,\!&amp;lt;/math&amp;gt;. The upper and lower bounds on &amp;lt;math&amp;gt;\widehat{u}\,\!&amp;lt;/math&amp;gt; are then estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial A} \right)}^{2}}Var(\widehat{A}) +{{\left( \frac{\partial \widehat{u}}{\partial B} \right)}^{2}}Var(\widehat{B}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial A} \right)Cov(\widehat{\beta },\widehat{A}) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{\beta },\widehat{B}) +2\left( \frac{\partial \widehat{u}}{\partial A} \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{A},\widehat{B})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{u})= \frac{1}{{{\widehat{\beta }}^{4}}}{{\left[ \ln (-\ln (R)) \right]}^{2}}Var(\widehat{\beta }) +Var(\widehat{A})+\frac{1}{{{V}^{2}}}Var(\widehat{B}) +\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}}Cov(\widehat{\beta },\widehat{A})-\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}V}Cov(\widehat{\beta },\widehat{B}) -\frac{2}{V}Cov(\widehat{A},\widehat{B})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on time are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{{{u}_{U}}}} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{{{u}_{L}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Approximate Confidence Bounds for the Eyring-Lognormal==&lt;br /&gt;
===Bounds on the Parameters===&lt;br /&gt;
The lower and upper bounds on &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{A}_{U}}= &amp;amp; \widehat{A}+{{K}_{\alpha }}\sqrt{Var(\widehat{A})}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{A}_{L}}= &amp;amp; \widehat{A}-{{K}_{\alpha }}\sqrt{Var(\widehat{A})}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{B}_{U}}= &amp;amp; \widehat{B}+{{K}_{\alpha }}\sqrt{Var(\widehat{B})}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{B}_{L}}= &amp;amp; \widehat{B}-{{K}_{\alpha }}\sqrt{Var(\widehat{B})}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since the standard deviation, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}},\,\!&amp;lt;/math&amp;gt; is a positive parameter, &amp;lt;math&amp;gt;\ln ({{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; is treated as normally distributed, and the bounds are estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {{\sigma }_{U}} &amp;amp;=\  {{\widehat{\sigma }}_{{{T}&#039;}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}\text{ (Upper bound)} \\ &lt;br /&gt;
 {{\sigma }_{L}} &amp;amp;=\  \frac{{{\widehat{\sigma }}_{{{T}&#039;}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;A,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{A},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left( \begin{matrix}&lt;br /&gt;
   Var\left( {{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{A} \right) &amp;amp; Var\left( \widehat{A} \right) &amp;amp; Cov\left( \widehat{A},\widehat{B} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{B} \right) &amp;amp; Cov\left( \widehat{B},\widehat{A} \right) &amp;amp; Var\left( \widehat{B} \right)  \\&lt;br /&gt;
\end{matrix} \right)={{[F]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F=\left( \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma _{{{T}&#039;}}^{2}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial B}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{A}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial B}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial A} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bounds on Reliability===&lt;br /&gt;
The reliability of the lognormal distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({T}&#039;,V;A,B,{{\sigma }_{{{T}&#039;}}})=\int_{{{T}&#039;}}^{\infty }\frac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t+\ln (V)+\widehat{A}-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let  &amp;lt;math&amp;gt;\widehat{z}(t,V;A,B,{{\sigma }_{T}})=\tfrac{t+\ln (V)+\widehat{A}-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}},\,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;\tfrac{d\widehat{z}}{dt}=\tfrac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}}.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;t={T}&#039;\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{z}=\tfrac{{T}&#039;+\ln (V)+\widehat{A}-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;, and for &amp;lt;math&amp;gt;t=\infty ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{z}=\infty .\,\!&amp;lt;/math&amp;gt; The above equation then becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(\widehat{z})=\int_{\widehat{z}({T}&#039;,V)}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The bounds on &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{z}_{U}}= &amp;amp; \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ &lt;br /&gt;
 &amp;amp; {{z}_{L}}= &amp;amp; \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\widehat{z})= &amp;amp; \left( \frac{\partial \widehat{z}}{\partial A} \right)_{\widehat{A}}^{2}Var(\widehat{A})+\left( \frac{\partial \widehat{z}}{\partial B} \right)_{\widehat{B}}^{2}Var(\widehat{B})+\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)_{{{\widehat{\sigma }}_{{{T}&#039;}}}}^{2}Var({{\widehat{\sigma }}_{T}}) +2{{\left( \frac{\partial \widehat{z}}{\partial A} \right)}_{\widehat{A}}}{{\left( \frac{\partial \widehat{z}}{\partial B} \right)}_{\widehat{B}}}Cov\left( \widehat{A},\widehat{B} \right) \\ &lt;br /&gt;
 &amp;amp; +2{{\left( \frac{\partial \widehat{z}}{\partial A} \right)}_{\widehat{A}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{A},{{\widehat{\sigma }}_{T}} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial B} \right)}_{\widehat{B}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{B},{{\widehat{\sigma }}_{T}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{z})=  \frac{1}{\widehat{\sigma }_{{{T}&#039;}}^{2}}[Var(\widehat{A})+\frac{1}{{{V}^{2}}}Var(\widehat{B})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) -\frac{2}{V}Cov\left( \widehat{A},\widehat{B} \right)-2\widehat{z}Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)+\frac{2\widehat{z}}{V}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
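These z-based bounds can be sketched in the same way; the partial derivatives implement the variance expansion above, and the estimates and covariances shown are hypothetical.&lt;br /&gt;
&lt;br /&gt;
```python
from math import log, sqrt
from statistics import NormalDist

_std = NormalDist()

def lognormal_reliability_bounds(T, V, A, B, sigma, cov, K_alpha):
    """Bounds on R(T, V) for the Eyring-lognormal model via bounds on z.

    cov: dict with keys 'aa', 'BB', 'ss', 'aB', 'as', 'Bs' for the
    variances/covariances of (A, B, sigma_T').
    """
    z = (log(T) + log(V) + A - B / V) / sigma
    dz_dA, dz_dB, dz_ds = 1.0 / sigma, -1.0 / (V * sigma), -z / sigma  # partials of z
    var_z = (dz_dA**2 * cov['aa'] + dz_dB**2 * cov['BB'] + dz_ds**2 * cov['ss']
             + 2.0 * dz_dA * dz_dB * cov['aB']
             + 2.0 * dz_dA * dz_ds * cov['as']
             + 2.0 * dz_dB * dz_ds * cov['Bs'])
    z_L = z - K_alpha * sqrt(var_z)
    z_U = z + K_alpha * sqrt(var_z)
    return 1.0 - _std.cdf(z_U), 1.0 - _std.cdf(z_L)   # (R_L, R_U)

# Hypothetical estimates and covariances
cov = {'aa': 0.04, 'BB': 900.0, 'ss': 0.01, 'aB': -4.0, 'as': 0.001, 'Bs': -0.1}
R_L, R_U = lognormal_reliability_bounds(T=500.0, V=350.0, A=-8.0, B=1500.0,
                                        sigma=0.5, cov=cov, K_alpha=1.6449)
```
&lt;br /&gt;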
&lt;br /&gt;
===Confidence Bounds on Time===&lt;br /&gt;
The bounds around time for a given lognormal percentile (unreliability) are estimated by first solving the reliability equation with respect to time as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{T}&#039;(V;\widehat{A},\widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}})=-\ln (V)-\widehat{A}+\frac{\widehat{B}}{V}+z\cdot {{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {T}&#039;(V;\widehat{A},\widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}}) &amp;amp;=\  \ln (T) \\ &lt;br /&gt;
  z &amp;amp;=\  {{\Phi }^{-1}}\left[ F({T}&#039;) \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;)}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to calculate the variance of &amp;lt;math&amp;gt;{T}&#039;(V;\widehat{A},\widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}}):\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var({T}&#039;)= &amp;amp; {{\left( \frac{\partial {T}&#039;}{\partial A} \right)}^{2}}Var(\widehat{A})+{{\left( \frac{\partial {T}&#039;}{\partial B} \right)}^{2}}Var(\widehat{B})+{{\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2\left( \frac{\partial {T}&#039;}{\partial A} \right)\left( \frac{\partial {T}&#039;}{\partial B} \right)Cov\left( \widehat{A},\widehat{B} \right) \\ &lt;br /&gt;
 &amp;amp; +2\left( \frac{\partial {T}&#039;}{\partial A} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2\left( \frac{\partial {T}&#039;}{\partial B} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var({T}&#039;)= Var(\widehat{A})+\frac{1}{{{V}^{2}}}Var(\widehat{B})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) -\frac{2}{V}Cov\left( \widehat{A},\widehat{B} \right) -2\widehat{z}Cov\left( \widehat{A},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +\frac{2\widehat{z}}{V}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; T_{U}^{\prime }= &amp;amp; \ln {{T}_{U}}={T}&#039;+{{K}_{\alpha }}\sqrt{Var({T}&#039;)} \\ &lt;br /&gt;
 &amp;amp; T_{L}^{\prime }= &amp;amp; \ln {{T}_{L}}={T}&#039;-{{K}_{\alpha }}\sqrt{Var({T}&#039;)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for &amp;lt;math&amp;gt;{{T}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{T}_{L}}\,\!&amp;lt;/math&amp;gt; yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{T_{U}^{\prime }}}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{T_{L}^{\prime }}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Arrhenius_Relationship&amp;diff=64926</id>
		<title>Arrhenius Relationship</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Arrhenius_Relationship&amp;diff=64926"/>
		<updated>2017-02-08T21:03:48Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Conditional Reliability Function */ changed R(t,T,V) to R((t|T),V)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK|4}}&lt;br /&gt;
The Arrhenius life-stress model (or relationship) is probably the most common life-stress relationship utilized in accelerated life testing. It has been widely used when the stimulus or acceleration variable (or stress) is thermal (i.e., temperature). It is derived from the Arrhenius reaction rate equation proposed by the Swedish physical chemist Svante Arrhenius in 1887. &lt;br /&gt;
&lt;br /&gt;
===Formulation===&lt;br /&gt;
The Arrhenius reaction rate equation is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T)=A{{e}^{-\tfrac{{{E}_{a}}}{k\cdot T}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the speed of reaction.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is an unknown nonthermal constant.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{E}_{a}}\,\!&amp;lt;/math&amp;gt; is the activation energy &amp;lt;math&amp;gt;(\text{eV})\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; is Boltzmann&#039;s constant &amp;lt;math&amp;gt;(8.617385\times {{10}^{-5}}\text{eV}{{\text{K}}^{-1}})\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is the absolute temperature &amp;lt;math&amp;gt;(\text{K})\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The activation energy is the energy that a molecule must have to participate in the reaction. In other words, the activation energy is a measure of the effect that temperature has on the reaction.&lt;br /&gt;
&lt;br /&gt;
The Arrhenius life-stress model is formulated by assuming that life is proportional to the inverse reaction rate of the process, thus the Arrhenius life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V)=C{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; represents a quantifiable life measure, such as mean life, characteristic life, median life, or &amp;lt;math&amp;gt;B(x)\,\!&amp;lt;/math&amp;gt; life, etc.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; represents the stress level (formulated for temperature, with &#039;&#039;&#039;temperature values in absolute units, degrees Kelvin or degrees Rankine&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is one of the model parameters to be determined, &amp;lt;math&amp;gt;(C&amp;gt;0)\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is another model parameter to be determined.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.1.png|center|400px|Graphical look at the Arrhenius life-stress relationship (linear scale) for different life characteristics, assuming a Weibull distribution.]]&lt;br /&gt;
&lt;br /&gt;
Since the Arrhenius relationship is a physics-based model derived for temperature dependence, it is used for temperature-accelerated tests. For the same reason, temperature values must be in absolute units (Kelvin or Rankine) so that the exponent of the Arrhenius equation remains dimensionless.&lt;br /&gt;
&lt;br /&gt;
===Life Stress Plots===&lt;br /&gt;
The Arrhenius relationship can be linearized and plotted on a Life vs. Stress plot, also called the Arrhenius plot. The relationship is linearized by taking the natural logarithm of both sides in the Arrhenius equation or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln (L(V))=\ln (C)+\frac{B}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.2.png|center|400px|Arrhenius plot for Weibull life distribution.]]&lt;br /&gt;
&lt;br /&gt;
In the linearized Arrhenius equation, &amp;lt;math&amp;gt;\ln (C)\,\!&amp;lt;/math&amp;gt; is the intercept of the line and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the slope of the line. Note that the inverse of the stress, and not the stress itself, is the variable. In the above figure, life is plotted versus stress rather than versus the inverse stress because the linearized Arrhenius equation was plotted on a reciprocal scale. On such a scale, the slope &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; appears to be negative even though it has a positive value. This is because &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the slope with respect to the reciprocal of the stress, and the reciprocal of the stress decreases as the stress increases (&amp;lt;math&amp;gt;\tfrac{1}{V}\,\!&amp;lt;/math&amp;gt; is decreasing as &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; is increasing). The two different axes are shown in the next figure.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.3.png|center|400px|An illustration of both reciprocal and non-reciprocal scales.]]&lt;br /&gt;
&lt;br /&gt;
The Arrhenius relationship is plotted on a reciprocal scale for practical reasons. For example, in the above figure it is more convenient to locate the life corresponding to a stress level of 370K than to first take the reciprocal of 370K (0.0027) and then locate the corresponding life.
The shaded areas shown in the above figure are the &#039;&#039;pdfs&#039;&#039; imposed at each test stress level. From such imposed &#039;&#039;pdfs&#039;&#039; one can see the range of the life at each test stress level, as well as the scatter in life. The next figure illustrates a case in which there is significant scatter in life at each of the test stress levels.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.4.png|center|400px|An example of scatter in life at each test stress level.]]&lt;br /&gt;
&lt;br /&gt;
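As a sketch of how the linearized relationship can be used in practice, the slope B and intercept ln(C) can be estimated by ordinary least squares on the transformed variable 1/V. The life-temperature data below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical data: (temperature in K, observed characteristic life in hours).
# Fit the linearized Arrhenius model ln(L) = ln(C) + B/V by least squares
# on the transformed variable x = 1/V.
import math

data = [(393.0, 4000.0), (408.0, 2200.0), (423.0, 1300.0)]  # assumed values

x = [1.0 / V for V, _ in data]      # reciprocal stress
y = [math.log(L) for _, L in data]  # log life

n = len(data)
xbar = sum(x) / n
ybar = sum(y) / n
B = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)   # slope = B
lnC = ybar - B * xbar                    # intercept = ln(C)
C = math.exp(lnC)

print(B, C)  # here B > 0: life decreases as temperature increases
```

With real test data one would normally use maximum likelihood estimation (described later in this chapter) rather than least squares on a single life measure, but the regression view makes the roles of the slope and intercept concrete.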
===Activation Energy and the Parameter &#039;&#039;B&#039;&#039; ===&lt;br /&gt;
Depending on the application (and where the stress is exclusively thermal), the parameter &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be replaced by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;B=\frac{{{E}_{a}}}{k}=\frac{\text{activation energy}}{\text{Boltzman}{{\text{n}}^{\prime }}\text{s constant}}=\frac{\text{activation energy}}{8.617385\times {{10}^{-5}}\text{eV}{{\text{K}}^{-1}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that in this formulation, the activation energy &amp;lt;math&amp;gt;{{E}_{a}}\,\!&amp;lt;/math&amp;gt; must be known a priori. If the activation energy is known, then there is only one model parameter remaining, &amp;lt;math&amp;gt;C.\,\!&amp;lt;/math&amp;gt; Because this is rarely the case in real-life situations, all subsequent formulations will assume that the activation energy is unknown and treat &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; as one of the model parameters. &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; has the same properties as the activation energy. In other words, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is a measure of the effect that the stress (i.e., temperature) has on the life. The larger the value of &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; the higher the dependency of the life on the specific stress. The parameter &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; may also take negative values; in that case, life increases with increasing stress. An example of this would be plasma-filled bulbs, where low temperature is a higher stress on the bulbs than high temperature.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.5.png|center|400px|Behavior of the parameter &#039;&#039;B&#039;&#039;.]]&lt;br /&gt;
&lt;br /&gt;
===Acceleration Factor===&lt;br /&gt;
Most practitioners use the term acceleration factor to refer to the ratio of the life (or acceleration characteristic) between the use level and a higher test stress level or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{A}_{F}}=\frac{{{L}_{USE}}}{{{L}_{Accelerated}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Arrhenius model this factor is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{A}_{F}}=\frac{{{L}_{USE}}}{{{L}_{Accelerated}}}=\frac{C\text{ }{{e}^{\tfrac{B}{{{V}_{u}}}}}}{C\text{ }{{e}^{\tfrac{B}{{{V}_{A}}}}}}=\frac{\text{ }{{e}^{\tfrac{B}{{{V}_{u}}}}}}{\text{ }{{e}^{\tfrac{B}{{{V}_{A}}}}}}={{e}^{\left( \tfrac{B}{{{V}_{u}}}-\tfrac{B}{{{V}_{A}}} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is assumed to be known a priori (using an activation energy), the assumed activation energy alone dictates this acceleration factor!&lt;br /&gt;
&lt;br /&gt;
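The point about the assumed activation energy dictating the acceleration factor can be sketched numerically. The activation energy, use temperature and test temperature below are hypothetical values chosen only for illustration:

```python
# A sketch of the Arrhenius acceleration factor for an assumed
# activation energy of 0.7 eV (hypothetical), a use temperature of
# 323 K and an accelerated test temperature of 373 K.
import math

k = 8.617385e-5      # Boltzmann's constant, eV/K
Ea = 0.7             # assumed activation energy, eV
V_use = 323.0        # use-level temperature, K (absolute units)
V_acc = 373.0        # accelerated temperature, K

B = Ea / k                             # B = Ea / k
AF = math.exp(B / V_use - B / V_acc)   # AF = exp(B/Vu - B/VA)

print(AF)  # roughly 29 for these assumed values
```

Note that no test data enter this calculation at all: once the activation energy is fixed, the acceleration factor follows directly, which is why assuming an activation energy a priori should be done with care.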
=Arrhenius-Exponential=&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the 1-parameter exponential distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
f(t)=\lambda {{e}^{-\lambda t}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The failure rate of the 1-parameter exponential distribution (presented in detail [[Distributions used in Accelerated Testing#The Exponential Distribution|here]]) is the reciprocal of the mean life, &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda =\frac{1}{m}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
thus:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\frac{1}{m}{{e}^{-\tfrac{t}{m}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Arrhenius-exponential model &#039;&#039;pdf&#039;&#039; can then be obtained by setting &amp;lt;math&amp;gt;m=L(V)\,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;m=L(V)=C{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting for &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; yields a &#039;&#039;pdf&#039;&#039; that is both a function of time and stress or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V)=\frac{1}{C{{e}^{\tfrac{B}{V}}}}\cdot {{e}^{-\tfrac{1}{C{{e}^{\tfrac{B}{V}}}}\cdot t}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Arrhenius-Exponential Statistical Properties Summary==&lt;br /&gt;
====Mean or MTTF====&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T},\,\!&amp;lt;/math&amp;gt; or Mean Time To Failure (MTTF), of the Arrhenius-exponential is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   \overline{T}=\int_{0}^{\infty }t\cdot f(t,V)dt=\int_{0}^{\infty }t\cdot \frac{1}{C{{e}^{\tfrac{B}{V}}}}{{e}^{-\tfrac{t}{C{{e}^{\tfrac{B}{V}}}}}}dt =\  C{{e}^{\tfrac{B}{V}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Median====&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T}\,\!&amp;lt;/math&amp;gt; of the Arrhenius-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=0.693\cdot C{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Mode====&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; of the Arrhenius-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Standard Deviation====&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, of the Arrhenius-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=C{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Arrhenius-Exponential Reliability Function====&lt;br /&gt;
The Arrhenius-exponential reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)={{e}^{-\tfrac{T}{C{{e}^{\tfrac{B}{V}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function is the complement of the Arrhenius-exponential cumulative distribution function or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=1-Q(T,V)=1-\int_{0}^{T}f(T,V)dT\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=1-\int_{0}^{T}\frac{1}{C{{e}^{\tfrac{B}{V}}}}{{e}^{-\tfrac{T}{C{{e}^{\tfrac{B}{V}}}}}}dT={{e}^{-\tfrac{T}{C{{e}^{\tfrac{B}{V}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Conditional Reliability====&lt;br /&gt;
The Arrhenius-exponential conditional reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V)=\frac{R(T+t,V)}{R(T,V)}=\frac{{{e}^{-\lambda (T+t)}}}{{{e}^{-\lambda T}}}={{e}^{-\tfrac{t}{C{{e}^{\tfrac{B}{V}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Reliable Life====&lt;br /&gt;
For the Arrhenius-exponential model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({{t}_{R}},V)={{e}^{-\tfrac{{{t}_{R}}}{C{{e}^{\tfrac{B}{V}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln [R({{t}_{R}},V)]=-\frac{{{t}_{R}}}{C{{e}^{\tfrac{B}{V}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=-C{{e}^{\tfrac{B}{V}}}\ln [R({{t}_{R}},V)]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
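The reliable life expression above can be evaluated directly. The parameter values below are hypothetical, chosen only to show the calculation and its self-consistency:

```python
# A sketch of the Arrhenius-exponential reliable life
# t_R = -C * exp(B/V) * ln(R), using assumed parameter values.
import math

B, C = 4000.0, 2.5   # assumed Arrhenius parameters
V = 353.0            # stress (temperature, K)
R_goal = 0.90        # desired reliability goal

m = C * math.exp(B / V)        # mean life at this stress level
t_R = -m * math.log(R_goal)    # mission duration meeting the goal

# Check: evaluating the reliability function at t_R recovers the goal.
R_check = math.exp(-t_R / m)
print(t_R, R_check)
```

Because the exponential distribution has a constant failure rate, the reliable life scales linearly with the mean life at the given stress.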
==Parameter Estimation==&lt;br /&gt;
====Maximum Likelihood Estimation Method====&lt;br /&gt;
The log-likelihood function for the exponential distribution is as shown next:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \lambda {{e}^{-\lambda {{T}_{i}}}} \right]-\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\lambda T_{i}^{\prime } \\ &lt;br /&gt;
 &amp;amp; \overset{FI}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-\lambda T_{Li}^{\prime \prime }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-\lambda T_{Ri}^{\prime \prime }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is the failure rate parameter (unknown).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
Substituting the Arrhenius-exponential model into the log-likelihood function yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \Lambda = \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{1}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}}{{e}^{-\tfrac{1}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}}{{T}_{i}}}} \right] -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{1}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}}T_{i}^{\prime }+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-\tfrac{T_{Li}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-\tfrac{T_{Ri}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;\widehat{B}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial C}=0\,\!&amp;lt;/math&amp;gt;, where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial B}= &amp;amp; \frac{1}{C}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left( \frac{{{T}_{i}}}{{{V}_{i}}{{e}^{\tfrac{B}{{{V}_{i}}}}}}-\frac{C}{{{V}_{i}}} \right)+\frac{1}{C}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{T_{i}^{\prime }}{{{V}_{i}}{{e}^{\tfrac{B}{{{V}_{i}}}}}} \overset{FI}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{T_{Li}^{\prime \prime }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime }R_{Ri}^{\prime \prime }}{(R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime })C{{V}_{i}}{{e}^{\tfrac{B}{{{V}_{i}}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial C}= &amp;amp; \frac{1}{C}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left( \frac{{{T}_{i}}}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}}-1 \right)+\frac{1}{{{C}^{2}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{T_{i}^{\prime }}{{{e}^{\tfrac{B}{{{V}_{i}}}}}} \overset{FI}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{T_{Li}^{\prime \prime }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime }R_{Ri}^{\prime \prime }}{(R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }){{C}^{2}}{{e}^{\tfrac{B}{{{V}_{i}}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Arrhenius-Weibull=&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Arrhenius Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; for the 2-parameter Weibull distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scale parameter (or characteristic life) of the Weibull distribution is &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The Arrhenius-Weibull model &#039;&#039;pdf&#039;&#039; can then be obtained by setting &amp;lt;math&amp;gt;\eta =L(V)\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\eta =L(V)=C\cdot {{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and substituting for &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; in the 2-parameter Weibull distribution equation:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V)=\frac{\beta }{C\cdot {{e}^{\tfrac{B}{V}}}}{{\left( \frac{t}{C\cdot {{e}^{\tfrac{B}{V}}}} \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{t}{C\cdot {{e}^{\tfrac{B}{V}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An illustration of the &#039;&#039;pdf&#039;&#039;  for different stresses is shown in the next figure.  As expected, the &#039;&#039;pdf&#039;&#039; at lower stress levels is more stretched to the right, with a higher scale parameter, while its shape remains the same (the shape parameter is approximately 3). This behavior is observed when the parameter &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; of the Arrhenius model is positive.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.6.png|center|400px|Behavior of the probability density function at different stresses and with the parameters held constant.]]&lt;br /&gt;
&lt;br /&gt;
The advantage of using the Weibull distribution as the life distribution lies in its flexibility to assume different shapes. The Weibull distribution is presented in greater detail in [[The Weibull Distribution]].&lt;br /&gt;
&lt;br /&gt;
==Arrhenius-Weibull Statistical Properties Summary==&lt;br /&gt;
====Mean or MTTF====&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T}\,\!&amp;lt;/math&amp;gt; (also called &amp;lt;math&amp;gt;MTTF\,\!&amp;lt;/math&amp;gt; by some authors), of the Arrhenius-Weibull relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=C\cdot {{e}^{\tfrac{B}{V}}}\cdot \Gamma \left( \frac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Gamma \left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt; is the gamma function evaluated at the value of &amp;lt;math&amp;gt;\left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Median====&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; for the Arrhenius-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=C\cdot {{e}^{\tfrac{B}{V}}}{{\left( \ln 2 \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Mode====&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; for the Arrhenius-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=C\cdot {{e}^{\tfrac{B}{V}}}{{\left( 1-\frac{1}{\beta } \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Standard Deviation====&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}},\,\!&amp;lt;/math&amp;gt; for the Arrhenius-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=C\cdot {{e}^{\tfrac{B}{V}}}\cdot \sqrt{\Gamma \left( \frac{2}{\beta }+1 \right)-{{\left( \Gamma \left( \frac{1}{\beta }+1 \right) \right)}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
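The summary statistics above can be evaluated for a given stress level with the standard library gamma function. The parameter values below are hypothetical, used only to illustrate the formulas:

```python
# A sketch evaluating the Arrhenius-Weibull summary statistics, with
# assumed parameter values: beta = 2, B = 4000, C = 2.5, V = 353 K.
import math

beta, B, C, V = 2.0, 4000.0, 2.5, 353.0
eta = C * math.exp(B / V)   # scale parameter (characteristic life) at this stress

mean   = eta * math.gamma(1.0 / beta + 1.0)
median = eta * math.log(2.0) ** (1.0 / beta)
mode   = eta * (1.0 - 1.0 / beta) ** (1.0 / beta)
sd     = eta * math.sqrt(math.gamma(2.0 / beta + 1.0)
                         - math.gamma(1.0 / beta + 1.0) ** 2)
print(mean, median, mode, sd)
```

For a shape parameter greater than 1, as here, the mode is below the median, which in turn is below the mean, reflecting the right skew of the Weibull distribution.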
====Arrhenius-Weibull Reliability Function====&lt;br /&gt;
The Arrhenius-Weibull reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)={{e}^{-{{\left( \tfrac{T}{C\cdot {{e}^{\tfrac{B}{V}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the parameter &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is positive, then the reliability increases as stress decreases.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.7.png|center|500px|Behavior of the reliability function at different stress and constant parameter values.]]&lt;br /&gt;
&lt;br /&gt;
The behavior of the reliability function of the Weibull distribution for different values of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was illustrated [[Distributions used in Accelerated Testing#The Weibull Distribution|here]]. In the case of the Arrhenius-Weibull model, however, the reliability is a function of stress also. A 3D plot such as the ones shown in the next figure is now needed to illustrate the effects of both the stress and &amp;lt;math&amp;gt;\beta .\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.8.png|center|800px|Reliability function for &amp;lt;math&amp;gt;\beta&amp;lt;1 \,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\beta=1 \,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\beta&amp;gt;1 \,\!&amp;lt;/math&amp;gt;.]]&lt;br /&gt;
&lt;br /&gt;
====Conditional Reliability Function====&lt;br /&gt;
The Arrhenius-Weibull conditional reliability function at a specified stress level is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V)=\frac{R(T+t,V)}{R(T,V)}=\frac{{{e}^{-{{\left( \tfrac{T+t}{\eta } \right)}^{\beta }}}}}{{{e}^{-{{\left( \tfrac{T}{\eta } \right)}^{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V)={{e}^{-\left[ {{\left( \tfrac{T+t}{C\cdot {{e}^{\tfrac{B}{V}}}} \right)}^{\beta }}-{{\left( \tfrac{T}{C\cdot {{e}^{\tfrac{B}{V}}}} \right)}^{\beta }} \right]}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Reliable Life====&lt;br /&gt;
For the Arrhenius-Weibull relationship, the reliable life, &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt;, of a unit for a specified reliability and starting the mission at age zero is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=C\cdot {{e}^{\tfrac{B}{V}}}{{\left\{ -\ln \left[ R\left( {{t}_{R}},V \right) \right] \right\}}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the life for which the unit will function successfully with a reliability of &amp;lt;math&amp;gt;R({{t}_{R}})\,\!&amp;lt;/math&amp;gt;. If &amp;lt;math&amp;gt;R({{t}_{R}})=0.50\,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;{{t}_{R}}=\breve{T}\,\!&amp;lt;/math&amp;gt;, the median life, or the life by which half of the units will survive.&lt;br /&gt;
&lt;br /&gt;
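The reduction of the reliable life to the median life at a reliability of 0.50 can be checked numerically. The parameter values below are hypothetical:

```python
# A sketch of the Arrhenius-Weibull reliable life
# t_R = C * exp(B/V) * (-ln R)^(1/beta), with assumed parameters.
import math

beta, B, C, V = 2.0, 4000.0, 2.5, 353.0
eta = C * math.exp(B / V)   # characteristic life at this stress

def reliable_life(R):
    # mission duration for which reliability equals R
    return eta * (-math.log(R)) ** (1.0 / beta)

t50 = reliable_life(0.50)
median = eta * math.log(2.0) ** (1.0 / beta)
print(t50, median)  # essentially equal, since -ln(0.5) = ln(2)
```

Raising the reliability goal shortens the supportable mission: the reliable life at R = 0.90 is smaller than the median life.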
====Arrhenius-Weibull Failure Rate Function====&lt;br /&gt;
The Arrhenius-Weibull failure rate function, &amp;lt;math&amp;gt;\lambda (T,V)\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda \left( T,V \right)=\frac{f\left( T,V \right)}{R\left( T,V \right)}=\frac{\beta }{C\cdot {{e}^{\tfrac{B}{V}}}}{{\left( \frac{T}{C\cdot {{e}^{\tfrac{B}{V}}}} \right)}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.9.png|center|800px|Failure rate function for &amp;lt;math&amp;gt;\beta&amp;lt;1 \,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\beta=1 \,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\beta&amp;gt;1 \,\!&amp;lt;/math&amp;gt;.]]&lt;br /&gt;
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
====Maximum Likelihood Estimation Method====&lt;br /&gt;
The Arrhenius-Weibull log-likelihood function is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \Lambda = &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{\beta }{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}}{{\left( \frac{{{T}_{i}}}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{{{T}_{i}}}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}}} \right] \ -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( \frac{T_{i}^{\prime }}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-{{\left( \tfrac{T_{Li}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-{{\left( \tfrac{T_{Ri}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the Arrhenius parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is the second Arrhenius parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for &amp;lt;math&amp;gt;\widehat{\beta },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \beta }=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial C}=0\,\!&amp;lt;/math&amp;gt;, where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \frac{\partial \Lambda }{\partial \beta }=\ &amp;amp; \frac{1}{\beta }\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}+\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\ln \left( \frac{{{T}_{i}}}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right) -\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}{{\left( \frac{{{T}_{i}}}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}\ln \left( \frac{{{T}_{i}}}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right) -\underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }{{\left( \frac{T_{i}^{\prime }}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}\ln \left( \frac{T_{i}^{\prime }}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right) \\ &lt;br /&gt;
 &amp;amp; \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{{{\left( \tfrac{T_{Li}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}\ln \left( \tfrac{T_{Li}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)R_{Li}^{\prime \prime }-{{\left( \tfrac{T_{Ri}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}\ln \left( \tfrac{T_{Ri}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)R_{Ri}^{\prime \prime }}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial B}= -\beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\frac{1}{{{V}_{i}}}+\beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\frac{1}{{{V}_{i}}}{{\left( \frac{{{T}_{i}}}{\widehat{C}{{e}^{\tfrac{\widehat{B}}{{{V}_{i}}}}}} \right)}^{\beta }}+\beta \underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }\frac{1}{{{V}_{i}}}{{\left( \frac{T_{i}^{\prime }}{\widehat{C}{{e}^{\tfrac{\widehat{B}}{{{V}_{i}}}}}} \right)}^{\beta }} +\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\beta }{{{V}_{i}}}\frac{{{(T_{Li}^{\prime \prime })}^{\beta }}R_{Li}^{\prime \prime }-{{(T_{Ri}^{\prime \prime })}^{\beta }}R_{Ri}^{\prime \prime }}{{{\left( C{{e}^{\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}\left( R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime } \right)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial C}= -\frac{\beta }{C}\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}+\frac{\beta }{C}\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}{{\left( \frac{{{T}_{i}}}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}+\frac{\beta }{C}\underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }{{\left( \frac{T_{i}^{\prime }}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }} +\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\beta }{C}\frac{{{(T_{Li}^{\prime \prime })}^{\beta }}R_{Li}^{\prime \prime }-{{(T_{Ri}^{\prime \prime })}^{\beta }}R_{Ri}^{\prime \prime }}{{{\left( C{{e}^{\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}\left( R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime } \right)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
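The three partial derivatives above are coupled and are solved numerically in practice. As a hedged sketch (synthetic data; the parameter values, stress levels, and sample sizes below are all assumptions), the identity ln T = ln C + B/V + (1/&beta;)&epsilon;, where &epsilon; follows the standard smallest-extreme-value distribution, yields simple moment-based starting values for that iteration via ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true, B_true, C_true = 2.0, 1500.0, 1.0    # assumed "true" values
stress = np.repeat([320.0, 360.0, 400.0], 40)   # three assumed stress levels (K)
# Weibull(beta, eta) draws with eta = C*exp(B/V); rng.weibull gives the unit-scale form
t = C_true * np.exp(B_true / stress) * rng.weibull(beta_true, stress.size)

# OLS of ln T on 1/V: slope ~ B, intercept ~ ln C - gamma/beta (Gumbel mean shift)
X = np.column_stack([np.ones_like(stress), 1.0 / stress])
coef, *_ = np.linalg.lstsq(X, np.log(t), rcond=None)
resid = np.log(t) - X @ coef
beta0 = np.pi / (np.sqrt(6.0) * resid.std(ddof=2))  # sd(ln T) = pi/(beta*sqrt(6))
B0 = coef[1]
C0 = np.exp(coef[0] + 0.5772156649 / beta0)         # undo the Euler-gamma shift
```

These are consistent but less efficient than the ML estimates; they would seed a Newton-type iteration on the three likelihood equations above.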
==Arrhenius-Weibull example==&lt;br /&gt;
{{:Arrhenius_Example}}&lt;br /&gt;
&lt;br /&gt;
=Arrhenius-Lognormal=&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the lognormal distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\bar{{{T}&#039;}}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{T}&#039;=\ln(T)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T=\,\!&amp;lt;/math&amp;gt; times-to-failure.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\bar{T}}^{\prime }}=\,\!&amp;lt;/math&amp;gt; mean of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\,\!&amp;lt;/math&amp;gt; standard deviation of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
The median of the lognormal distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}={{e}^{{{\overline{T}}^{\prime }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Arrhenius-lognormal model &#039;&#039;pdf&#039;&#039; can be obtained by first setting &amp;lt;math&amp;gt;\breve{T}=L(V)\,\!&amp;lt;/math&amp;gt;. Therefore: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=L(V)=C{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{e}^{{{\overline{T}}^{\prime }}}}=C{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\overline{T}}^{\prime }}=\ln (C)+\frac{B}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting the above equation into the lognormal &#039;&#039;pdf&#039;&#039; yields the Arrhenius-lognormal model &#039;&#039;pdf&#039;&#039; or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T,V)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (C)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that in the Arrhenius-lognormal &#039;&#039;pdf&#039;&#039;, it was assumed that the standard deviation of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}},\,\!&amp;lt;/math&amp;gt; is independent of stress. This assumption implies that the shape of the distribution does not change with stress ( &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; is the shape parameter of the lognormal distribution).&lt;br /&gt;
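The Arrhenius-lognormal &#039;&#039;pdf&#039;&#039; above can be evaluated directly. A minimal stdlib sketch (the function name and all numeric values are illustrative assumptions, not values from the text):

```python
import math

def arrhenius_lognormal_pdf(t, v, b, c, sigma_tp):
    """f(T,V) = 1/(T*sigma_T'*sqrt(2*pi)) * exp(-0.5*((ln T - ln C - B/V)/sigma_T')^2)."""
    z = (math.log(t) - math.log(c) - b / v) / sigma_tp
    return math.exp(-0.5 * z * z) / (t * sigma_tp * math.sqrt(2.0 * math.pi))

# Assumed illustrative values: B = 2000, C = 0.5, sigma_T' = 0.4, V = 350
val = arrhenius_lognormal_pdf(t=150.0, v=350.0, b=2000.0, c=0.5, sigma_tp=0.4)
```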
&lt;br /&gt;
==Arrhenius-Lognormal Statistical Properties Summary==&lt;br /&gt;
====The Mean====&lt;br /&gt;
*The mean life of the Arrhenius-lognormal model (mean of the times-to-failure), &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \bar{T}={{e}^{\bar{{T}&#039;}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}={{e}^{\ln (C)+\tfrac{B}{V}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The mean of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\bar{T}}^{^{\prime }}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{T}}^{\prime }}=\ln \left( {\bar{T}} \right)-\frac{1}{2}\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====The Standard Deviation====&lt;br /&gt;
*The standard deviation of the Arrhenius-lognormal model (standard deviation of the times-to-failure), &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\sigma }_{T}}=\sqrt{\left( {{e}^{2\bar{{T}&#039;}+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)}=\sqrt{\left( {{e}^{2\left( \ln (C)+\tfrac{B}{V} \right)+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The standard deviation of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\sqrt{\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
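The two conversion pairs above (linear-space mean and standard deviation from the log-space parameters, and back) can be round-trip checked in a few lines. A sketch with illustrative function names; under the Arrhenius model the log-space mean is ln C + B/V:

```python
import math

def lognormal_moments(mu_log, sigma_log):
    """Mean and standard deviation of T from the log-space parameters."""
    mean = math.exp(mu_log + 0.5 * sigma_log ** 2)
    var = math.exp(2.0 * mu_log + sigma_log ** 2) * (math.exp(sigma_log ** 2) - 1.0)
    return mean, math.sqrt(var)

def log_space_params(mean, sd):
    """Invert: recover (mu_log, sigma_log) from the mean and sd of T."""
    g = math.log(sd ** 2 / mean ** 2 + 1.0)
    return math.log(mean) - 0.5 * g, math.sqrt(g)
```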
&lt;br /&gt;
====The Mode====&lt;br /&gt;
*The mode of the Arrhenius-lognormal model is given by: &lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
	  &amp;amp; \tilde{T}=\ {{e}^{{{\overline{T}}^{\prime }}-\sigma _{{{T}&#039;}}^{2}}} =\  {{e}^{\ln (C)+\tfrac{B}{V}-\sigma _{{{T}&#039;}}^{2}}}  &lt;br /&gt;
	\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Arrhenius-Lognormal Reliability Function====&lt;br /&gt;
The reliability for a mission of time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, starting at age 0, for the Arrhenius-lognormal model is determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=\int_{T}^{\infty }f(t,V)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=\int_{{{T}^{^{\prime }}}}^{\infty }\frac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (C)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no closed-form solution for the lognormal reliability function. Solutions can be obtained from standard normal tables. Since the application automatically solves for the reliability, we will not discuss manual solution methods.&lt;br /&gt;
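In code, the integral reduces to the survival function of a standard normal. A stdlib sketch (all numeric values are assumptions for illustration):

```python
import math
from statistics import NormalDist

def arrhenius_lognormal_reliability(t, v, b, c, sigma_tp):
    """R(T,V) = 1 - Phi(z), with z = (ln T - ln C - B/V)/sigma_T'."""
    z = (math.log(t) - math.log(c) - b / v) / sigma_tp
    return 1.0 - NormalDist().cdf(z)

# At the median life C*exp(B/V), reliability is exactly 0.5
t_median = 0.5 * math.exp(2000.0 / 350.0)   # assumed B=2000, C=0.5, V=350
r_med = arrhenius_lognormal_reliability(t_median, 350.0, 2000.0, 0.5, 0.4)
```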
&lt;br /&gt;
====Reliable Life====&lt;br /&gt;
For the Arrhenius-lognormal model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;T_{R}^{\prime }=\ln (C)+\frac{B}{V}+z\cdot {{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z={{\Phi }^{-1}}\left[ F\left( T_{R}^{\prime },V \right) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;,V)}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;{T}&#039;=\ln (T)\,\!&amp;lt;/math&amp;gt; the reliable life, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}={{e}^{T_{R}^{\prime }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
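The reliable-life steps above can be sketched directly (the helper name and numeric values are assumptions): the quantile z is taken at the unreliability F = 1 - R, and the result is exponentiated back to real time.

```python
import math
from statistics import NormalDist

def reliable_life(r_goal, v, b, c, sigma_tp):
    """t_R = exp(ln C + B/V + z*sigma_T'), z = Phi^{-1}(1 - R)."""
    z = NormalDist().inv_cdf(1.0 - r_goal)   # quantile of the unreliability
    return math.exp(math.log(c) + b / v + z * sigma_tp)
```

For a 50% reliability goal, z = 0 and t_R recovers the median C·exp(B/V), as expected.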
&lt;br /&gt;
====Arrhenius-Lognormal Failure Rate====&lt;br /&gt;
The Arrhenius-lognormal failure rate is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (T,V)=\frac{f(T,V)}{R(T,V)}=\frac{\tfrac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (C)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}}{\int_{{{T}&#039;}}^{\infty }\tfrac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (C)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
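The failure rate is just the pdf over the reliability; a stdlib sketch with assumed parameter values:

```python
import math
from statistics import NormalDist

def arrhenius_lognormal_failure_rate(t, v, b, c, sigma_tp):
    """lambda(T,V) = f(T,V)/R(T,V) for the Arrhenius-lognormal model."""
    nd = NormalDist()
    z = (math.log(t) - math.log(c) - b / v) / sigma_tp
    f = nd.pdf(z) / (t * sigma_tp)   # lognormal pdf in T
    r = 1.0 - nd.cdf(z)              # reliability
    return f / r
```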
==Parameter Estimation==&lt;br /&gt;
====Maximum Likelihood Estimation Method====&lt;br /&gt;
&lt;br /&gt;
The lognormal log-likelihood function for the Arrhenius-lognormal model is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)=\Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{1}{{{\sigma }_{{{T}&#039;}}}{{T}_{i}}}\phi \left( \frac{\ln \left( {{T}_{i}} \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right] \text{ }+\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\ln \left[ 1-\Phi \left( \frac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right] +\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Li}^{\prime \prime }=\frac{\ln T_{Li}^{\prime \prime }-\ln C-\tfrac{B}{{{V}_{i}}}}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Ri}^{\prime \prime }=\frac{\ln T_{Ri}^{\prime \prime }-\ln C-\tfrac{B}{{{V}_{i}}}}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma}_{{T}&#039;}}\,\!&amp;lt;/math&amp;gt; is the standard deviation of the natural logarithm of the times-to-failure (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the Arrhenius parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is the second Arrhenius parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial C}=0\,\!&amp;lt;/math&amp;gt;, where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial B}= \frac{1}{\sigma _{{{T}&#039;}}^{2}}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\frac{1}{{{V}_{i}}}(\ln ({{T}_{i}})-\ln (C)-\frac{B}{{{V}_{i}}}) +\frac{1}{{{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{1}{{{V}_{i}}}\frac{\phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)} \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\varphi (z_{Ri}^{\prime \prime })-\varphi (z_{Li}^{\prime \prime })}{\sigma _{T}^{\prime }{{V}_{i}}(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial C}= \frac{1}{C\cdot \sigma _{{{T}&#039;}}^{2}}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}(\ln ({{T}_{i}})-\ln (C)-\frac{B}{{{V}_{i}}}) +\frac{1}{C\cdot {{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{\phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)} \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\varphi (z_{Ri}^{\prime \prime })-\varphi (z_{Li}^{\prime \prime })}{\sigma _{T}^{\prime }C(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \frac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}= \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left( \frac{{{\left( \ln ({{T}_{i}})-\ln (C)-\tfrac{B}{{{V}_{i}}} \right)}^{2}}}{\sigma _{{{T}&#039;}}^{3}}-\frac{1}{{{\sigma }_{{{T}&#039;}}}} \right) +\frac{1}{{{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{\left( \tfrac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)\phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)} \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{z_{Ri}^{\prime \prime }\varphi (z_{Ri}^{\prime \prime })-z_{Li}^{\prime \prime }\varphi (z_{Li}^{\prime \prime })}{\sigma _{T}^{\prime }(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\phi \left( x \right)=\frac{1}{\sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( x \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
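For complete (exact-failure) data, maximizing this likelihood reduces to ordinary least squares of ln T on 1/V, because ln T is normal with mean ln C + B/V and standard deviation &sigma;_T'. A numpy sketch on synthetic data (all numeric values are assumptions); censored and interval data require iterating on the partial derivatives above:

```python
import numpy as np

rng = np.random.default_rng(0)
B_true, C_true, sig_true = 2000.0, 0.5, 0.4       # assumed "true" values
stress = np.repeat([320.0, 350.0, 380.0], 30)     # three assumed stress levels (K)
log_t = np.log(C_true) + B_true / stress + sig_true * rng.standard_normal(stress.size)

# Exact-failure MLE == OLS of ln T on 1/V: intercept ~ ln C, slope ~ B
X = np.column_stack([np.ones_like(stress), 1.0 / stress])
coef, *_ = np.linalg.lstsq(X, log_t, rcond=None)
C_hat, B_hat = np.exp(coef[0]), coef[1]
sigma_hat = np.sqrt(np.mean((log_t - X @ coef) ** 2))   # ML (biased) sigma estimate
```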
=Arrhenius Confidence Bounds=&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Arrhenius_Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
==Approximate Confidence Bounds for the Arrhenius-Exponential==&lt;br /&gt;
There are different methods for computing confidence bounds. ALTA utilizes confidence bounds that are based on the asymptotic theory for maximum likelihood estimates, most commonly referred to as the Fisher matrix bounds.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
====Confidence Bounds on the Mean Life====&lt;br /&gt;
&lt;br /&gt;
The Arrhenius-exponential distribution is given by setting &amp;lt;math&amp;gt;m=L(V)\,\!&amp;lt;/math&amp;gt; in the exponential &#039;&#039;pdf&#039;&#039; equation. The upper &amp;lt;math&amp;gt;({{m}_{U}})\,\!&amp;lt;/math&amp;gt; and lower &amp;lt;math&amp;gt;({{m}_{L}})\,\!&amp;lt;/math&amp;gt; bounds on the mean life are then estimated by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 &amp;amp; {{m}_{U}}= \widehat{m}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}} \\ &lt;br /&gt;
 &amp;amp; {{m}_{L}}= \widehat{m}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is defined by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt; is the confidence level (e.g., &amp;lt;math&amp;gt;\delta =0.95\,\!&amp;lt;/math&amp;gt; for 95% confidence), then &amp;lt;math&amp;gt;\alpha =\tfrac{1-\delta }{2}\,\!&amp;lt;/math&amp;gt; for the two-sided bounds, and &amp;lt;math&amp;gt;\alpha =1-\delta \,\!&amp;lt;/math&amp;gt; for the one-sided bounds. The variance of &amp;lt;math&amp;gt;\widehat{m}\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
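The quantile K_&alpha; is available from Python's stdlib; a quick sketch (the helper name is an assumption):

```python
from statistics import NormalDist

def k_alpha(delta, two_sided=True):
    """K_alpha with alpha = (1-delta)/2 (two-sided) or alpha = 1-delta (one-sided)."""
    alpha = (1.0 - delta) / 2.0 if two_sided else 1.0 - delta
    return NormalDist().inv_cdf(1.0 - alpha)
```

For 95% confidence this gives the familiar 1.96 (two-sided) and 1.645 (one-sided).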
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{m})={{\left( \frac{\partial m}{\partial C} \right)}^{2}}Var(\widehat{C})+{{\left( \frac{\partial m}{\partial B} \right)}^{2}}Var(\widehat{B})+2\left( \frac{\partial m}{\partial C} \right)\left( \frac{\partial m}{\partial B} \right)Cov(\widehat{B},\widehat{C})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Var(\widehat{m})={{e}^{\tfrac{2\widehat{B}}{V}}}\left[ Var(\widehat{C})+\frac{{{\widehat{C}}^{2}}}{{{V}^{2}}}Var(\widehat{B})+\frac{2\widehat{C}}{V}Cov(\widehat{B},\widehat{C}) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariance of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{B}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{C})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{B}) &amp;amp; Cov(\widehat{B},\widehat{C})  \\&lt;br /&gt;
   Cov(\widehat{C},\widehat{B}) &amp;amp; Var(\widehat{C})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial C}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{C}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
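Putting the delta-method variance and the Fisher-matrix inversion together, a stdlib sketch for the mean-life bounds (the function name, the hand-inverted 2&times;2, and every numeric input are assumptions for illustration):

```python
import math

def mean_life_bounds(b_hat, c_hat, v, info, k=1.959963984540054):
    """Two-sided bounds on m = C*exp(B/V). `info` is the 2x2 observed Fisher
    information [[-d2L/dB2, -d2L/dBdC], [-d2L/dCdB, -d2L/dC2]]; its inverse
    supplies Var(B), Var(C), Cov(B,C). Default k is K_alpha for 95% two-sided."""
    (a, b_), (c_, d) = info
    det = a * d - b_ * c_
    var_B, var_C, cov_BC = d / det, a / det, -b_ / det
    m_hat = c_hat * math.exp(b_hat / v)
    var_m = math.exp(2.0 * b_hat / v) * (
        var_C + (c_hat / v) ** 2 * var_B + 2.0 * (c_hat / v) * cov_BC)
    w = math.exp(k * math.sqrt(var_m) / m_hat)
    return m_hat / w, m_hat * w
```

The log transform keeps the lower bound positive, which a plain m&#770; &plusmn; K&sqrt;Var would not guarantee.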
&lt;br /&gt;
====Confidence Bounds on Reliability====&lt;br /&gt;
The bounds on reliability for any given time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, are estimated by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}(T)= {{e}^{-\tfrac{T}{{{m}_{U}}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}(T)= {{e}^{-\tfrac{T}{{{m}_{L}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{m}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{m}_{L}}\,\!&amp;lt;/math&amp;gt; are estimated by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 &amp;amp; {{m}_{U}}= \widehat{m}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}} \\ &lt;br /&gt;
 &amp;amp; {{m}_{L}}= \widehat{m}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Confidence Bounds on Time====&lt;br /&gt;
The bounds on time (ML estimate of time) for a given reliability are estimated by first solving the reliability function with respect to time:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{T}=-\widehat{m}\cdot \ln (R)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding confidence bounds are then estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= -{{m}_{U}}\cdot \ln (R) \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= -{{m}_{L}}\cdot \ln (R)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{m}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{m}_{L}}\,\!&amp;lt;/math&amp;gt; are estimated by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 &amp;amp; {{m}_{U}}= \widehat{m}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}} \\ &lt;br /&gt;
 &amp;amp; {{m}_{L}}= \widehat{m}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Approximate Confidence Bounds for the Arrhenius-Weibull==&lt;br /&gt;
====Bounds on the Parameters====&lt;br /&gt;
From the asymptotically normal property of the maximum likelihood estimators, and since &amp;lt;math&amp;gt;\widehat{\beta }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; are positive parameters, &amp;lt;math&amp;gt;\ln (\widehat{\beta })\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\ln (\widehat{C})\,\!&amp;lt;/math&amp;gt; can be treated as normally distributed. After performing this transformation, the bounds on the parameters can be estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\beta }_{U}}= \widehat{\beta }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{\beta }_{L}}= \widehat{\beta }\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
also:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{B}_{U}}= \widehat{B}+{{K}_{\alpha }}\sqrt{Var(\widehat{B})} \\ &lt;br /&gt;
 &amp;amp; {{B}_{L}}= \widehat{B}-{{K}_{\alpha }}\sqrt{Var(\widehat{B})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{C}_{U}}= \widehat{C}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}} \\ &lt;br /&gt;
 &amp;amp; {{C}_{L}}= \widehat{C}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{\beta },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C})\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{\beta }) &amp;amp; Cov(\widehat{\beta },\widehat{B}) &amp;amp; Cov(\widehat{\beta },\widehat{C})  \\&lt;br /&gt;
   Cov(\widehat{B},\widehat{\beta }) &amp;amp; Var(\widehat{B}) &amp;amp; Cov(\widehat{B},\widehat{C})  \\&lt;br /&gt;
   Cov(\widehat{C},\widehat{\beta }) &amp;amp; Cov(\widehat{C},\widehat{B}) &amp;amp; Var(\widehat{C})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\beta }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial C}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial C}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{C}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
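The bound formulas above split into two patterns: a multiplicative (log-transformed) form for the positive parameters &beta; and C, and an additive form for B. A stdlib sketch (helper names are assumptions):

```python
import math

def log_transformed_bounds(est, var, k_alpha):
    """Bounds for a positive parameter (beta or C): est * exp(+/- K*sqrt(var)/est)."""
    w = math.exp(k_alpha * math.sqrt(var) / est)
    return est / w, est * w

def normal_bounds(est, var, k_alpha):
    """Bounds for B, which is not constrained positive: est +/- K*sqrt(var)."""
    h = k_alpha * math.sqrt(var)
    return est - h, est + h
```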
&lt;br /&gt;
====Confidence Bounds on Reliability====&lt;br /&gt;
The reliability function for the Arrhenius-Weibull model (ML estimate) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{\left( \tfrac{T}{\widehat{C}\cdot {{e}^{\tfrac{\widehat{B}}{V}}}} \right)}^{\widehat{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T)={{e}^{-{{e}^{\ln \left[ {{\left( \tfrac{T}{\widehat{C}\cdot {{e}^{\tfrac{\widehat{B}}{V}}}} \right)}^{\widehat{\beta }}} \right]}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Setting: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\ln \left[ {{\left( \frac{T}{\widehat{C}\cdot {{e}^{\tfrac{\widehat{B}}{V}}}} \right)}^{\widehat{\beta }}} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\widehat{\beta }\left[ \ln (T)-\ln (\widehat{C})-\frac{\widehat{B}}{V} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{e}^{\widehat{u}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to find the upper and lower bounds on &amp;lt;math&amp;gt;\widehat{u}\ \ :\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{u})= {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial B} \right)}^{2}}Var(\widehat{B})+{{\left( \frac{\partial \widehat{u}}{\partial C} \right)}^{2}}Var(\widehat{C}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{\beta },\widehat{B})+2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{\beta },\widehat{C}) +2\left( \frac{\partial \widehat{u}}{\partial B} \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{B},\widehat{C})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{u})= {{\left( \frac{\widehat{u}}{\widehat{\beta }} \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\widehat{\beta }}{V} \right)}^{2}}Var(\widehat{B})+{{\left( \frac{\widehat{\beta }}{\widehat{C}} \right)}^{2}}Var(\widehat{C}) -\frac{2\widehat{u}}{V}Cov(\widehat{\beta },\widehat{B})-\frac{2\widehat{u}}{\widehat{C}}Cov(\widehat{\beta },\widehat{C})+\frac{2{{\widehat{\beta }}^{2}}}{V\widehat{C}}Cov(\widehat{B},\widehat{C})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}(T,V)= {{e}^{-{{e}^{\left( {{u}_{L}} \right)}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}(T,V)= {{e}^{-{{e}^{\left( {{u}_{U}} \right)}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
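&lt;br /&gt;
The delta-method steps above can be sketched numerically. Every number below (the parameter estimates, the variance and covariance terms, and K_alpha) is an assumed illustrative value standing in for actual Fisher-matrix results, not a value from the text:&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Assumed illustrative estimates and (co)variances -- in practice these
# come from the local Fisher matrix, not from this text
beta_hat, B_hat, C_hat = 2.0, 4000.0, 0.5
var_b, var_B, var_C = 0.04, 4.0e4, 0.01
cov_bB, cov_bC, cov_BC = 5.0, 0.005, 1.0

T, V = 1000.0, 373.0      # time and absolute temperature (K)
K_alpha = 1.645           # one-sided 95% bound

u_hat = beta_hat * (math.log(T) - math.log(C_hat) - B_hat / V)

# Var(u) via the delta method, using du/dbeta = u/beta,
# du/dB = -beta/V and du/dC = -beta/C
var_u = ((u_hat / beta_hat) ** 2 * var_b
         + (beta_hat / V) ** 2 * var_B
         + (beta_hat / C_hat) ** 2 * var_C
         - 2.0 * u_hat / V * cov_bB
         - 2.0 * u_hat / C_hat * cov_bC
         + 2.0 * beta_hat ** 2 / (V * C_hat) * cov_BC)

u_L = u_hat - K_alpha * math.sqrt(var_u)
u_U = u_hat + K_alpha * math.sqrt(var_u)

# R = exp(-exp(u)) is decreasing in u, so u_L gives the upper bound on R
R_hat = math.exp(-math.exp(u_hat))
R_U = math.exp(-math.exp(u_L))
R_L = math.exp(-math.exp(u_U))
```
&lt;br /&gt;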
&lt;br /&gt;
====Confidence Bounds on Time====&lt;br /&gt;
The bounds on time for a given reliability are estimated by first solving the reliability function with respect to time:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   \ln (R)&amp;amp;=  -{{\left( \frac{\widehat{T}}{\widehat{C}\cdot {{e}^{\tfrac{\widehat{B}}{V}}}} \right)}^{\widehat{\beta }}} \\ &lt;br /&gt;
  \ln (-\ln (R))&amp;amp;=  \widehat{\beta }\left( \ln \widehat{T}-\ln \widehat{C}-\frac{\widehat{B}}{V} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\frac{1}{\widehat{\beta }}\ln (-\ln (R))+\ln \widehat{C}+\frac{\widehat{B}}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\widehat{u}=\ln \widehat{T}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial B} \right)}^{2}}Var(\widehat{B})+{{\left( \frac{\partial \widehat{u}}{\partial C} \right)}^{2}}Var(\widehat{C}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{\beta },\widehat{B})+2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{\beta },\widehat{C}) +2\left( \frac{\partial \widehat{u}}{\partial B} \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{B},\widehat{C})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{u})= &amp;amp; \frac{1}{{{\widehat{\beta }}^{4}}}{{\left[ \ln (-\ln (R)) \right]}^{2}}Var(\widehat{\beta })+\frac{1}{{{V}^{2}}}Var(\widehat{B})+\frac{1}{{{\widehat{C}}^{2}}}Var(\widehat{C})-\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}V}Cov(\widehat{\beta },\widehat{B})-\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}\widehat{C}}Cov(\widehat{\beta },\widehat{C}) +\frac{2}{V\widehat{C}}Cov(\widehat{B},\widehat{C})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on time can then be found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{{{u}_{U}}}} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{{{u}_{L}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
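&lt;br /&gt;
The same delta-method recipe gives the bounds on time for a target reliability. As before, the estimates and (co)variances below are assumed illustrative stand-ins for Fisher-matrix results:&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Assumed illustrative estimates and (co)variances; R is the target reliability
beta_hat, B_hat, C_hat = 2.0, 4000.0, 0.5
var_b, var_B, var_C = 0.04, 4.0e4, 0.01
cov_bB, cov_bC, cov_BC = 5.0, 0.005, 1.0
V, R, K_alpha = 373.0, 0.90, 1.645

w = math.log(-math.log(R))                             # ln(-ln R)
u_hat = w / beta_hat + math.log(C_hat) + B_hat / V     # u = ln(T)

# Var(u) per the delta-method expression for the time bounds
var_u = (w ** 2 / beta_hat ** 4 * var_b
         + var_B / V ** 2
         + var_C / C_hat ** 2
         - 2.0 * w / (beta_hat ** 2 * V) * cov_bB
         - 2.0 * w / (beta_hat ** 2 * C_hat) * cov_bC
         + 2.0 / (V * C_hat) * cov_BC)

T_U = math.exp(u_hat + K_alpha * math.sqrt(var_u))     # upper bound on time
T_L = math.exp(u_hat - K_alpha * math.sqrt(var_u))     # lower bound on time
```
&lt;br /&gt;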
&lt;br /&gt;
==Approximate Confidence Bounds for the Arrhenius-Lognormal==&lt;br /&gt;
====Bounds on the Parameters====&lt;br /&gt;
The lower and upper bounds on &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; are estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{B}_{U}}= \widehat{B}+{{K}_{\alpha }}\sqrt{Var(\widehat{B})}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{B}_{L}}= \widehat{B}-{{K}_{\alpha }}\sqrt{Var(\widehat{B})}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since the standard deviation, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, and the parameter &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; are positive parameters, &amp;lt;math&amp;gt;\ln ({{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\ln (\widehat{C})\,\!&amp;lt;/math&amp;gt; are treated as normally distributed. The bounds are estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{C}_{U}}= \widehat{C}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{C}_{L}}= \frac{\widehat{C}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\sigma }_{U}}= {{\widehat{\sigma }}_{{{T}&#039;}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{\sigma }_{L}}= \frac{{{\widehat{\sigma }}_{{{T}&#039;}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;C,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}),\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var\left( {{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{B} \right) &amp;amp; Var\left( \widehat{B} \right) &amp;amp; Cov\left( \widehat{B},\widehat{C} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{C} \right) &amp;amp; Cov\left( \widehat{C},\widehat{B} \right) &amp;amp; Var\left( \widehat{C} \right)  \\&lt;br /&gt;
\end{matrix} \right]= {{\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma _{{{T}&#039;}}^{2}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial C}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial C}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{C}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Bounds on Reliability====&lt;br /&gt;
The reliability of the lognormal distribution is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({T}&#039;,V;B,C,{{\sigma }_{{{T}&#039;}}})=\int_{{{T}&#039;}}^{\infty }\frac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (\widehat{C})-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\widehat{z}(t,V;B,C,{{\sigma }_{T}})=\tfrac{t-\ln (\widehat{C})-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}},\,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;\frac{d \widehat{z}}{dt}=\frac{1}{{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;t={T}&#039;\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{z}=\tfrac{{T}&#039;-\ln (\widehat{C})-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;, and for &amp;lt;math&amp;gt;t=\infty ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{z}=\infty .\,\!&amp;lt;/math&amp;gt; The above equation then becomes: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(\widehat{z})=\int_{\widehat{z}({T}&#039;)}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The bounds on &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; are estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{z}_{U}}= &amp;amp; \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ &lt;br /&gt;
 &amp;amp; {{z}_{L}}= &amp;amp; \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\widehat{z})=&amp;amp; \left( \frac{\partial \widehat{z}}{\partial B} \right)_{\widehat{B}}^{2}Var(\widehat{B})+\left( \frac{\partial \widehat{z}}{\partial C} \right)_{\widehat{C}}^{2}Var(\widehat{C})+\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)_{{{\widehat{\sigma }}_{{{T}&#039;}}}}^{2}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2{{\left( \frac{\partial \widehat{z}}{\partial B} \right)}_{\widehat{B}}}{{\left( \frac{\partial \widehat{z}}{\partial C} \right)}_{\widehat{C}}}Cov\left( \widehat{B},\widehat{C} \right) \\ &lt;br /&gt;
 &amp;amp;  +2{{\left( \frac{\partial \widehat{z}}{\partial B} \right)}_{\widehat{B}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial C} \right)}_{\widehat{C}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{z})= &amp;amp; \frac{1}{\widehat{\sigma }_{{{T}&#039;}}^{2}}[\frac{1}{{{V}^{2}}}Var(\widehat{B})+\frac{1}{{{C}^{2}}}Var(\widehat{C})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2}{C\cdot V}Cov\left( \widehat{B},\widehat{C} \right)+\frac{2\widehat{z}}{V}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)+\frac{2\widehat{z}}{C}Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
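&lt;br /&gt;
The two standard-normal tail integrals above can be evaluated with the complementary error function. The estimates and the Var(z) value below are assumed illustrative numbers standing in for the Fisher-matrix results and the Var(z) expression in the text:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def std_normal_sf(z):
    # Standard normal survival function: integral from z to infinity
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Assumed illustrative estimates standing in for Fisher-matrix results
B_hat, C_hat, sigma_hat = 4000.0, 0.5, 0.8
V, K_alpha = 373.0, 1.645
t_prime = math.log(5000.0)        # T' = ln(T)

z_hat = (t_prime - math.log(C_hat) - B_hat / V) / sigma_hat
var_z = 0.05                      # assumed value for Var(z)

z_L = z_hat - K_alpha * math.sqrt(var_z)
z_U = z_hat + K_alpha * math.sqrt(var_z)

R_U = std_normal_sf(z_L)          # lower bound on z gives upper bound on R
R_L = std_normal_sf(z_U)
```
&lt;br /&gt;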
&lt;br /&gt;
====Confidence Bounds on Time====&lt;br /&gt;
The bounds around time, for a given lognormal percentile (unreliability), are estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{T}&#039;(V;\widehat{B},\widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}})=\ln (\widehat{C})+\frac{\widehat{B}}{V}+z\cdot {{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {T}&#039;(V;\widehat{B},\widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}})=&amp;amp;\ \ln (T) \\ &lt;br /&gt;
  z= &amp;amp; \ {{\Phi }^{-1}}\left[ F({T}&#039;) \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;)}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to calculate the variance of &amp;lt;math&amp;gt;{T}&#039;(V;\widehat{B},\widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}}):\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var({T}&#039;)= &amp;amp; {{\left( \frac{\partial {T}&#039;}{\partial B} \right)}^{2}}Var(\widehat{B})+{{\left( \frac{\partial {T}&#039;}{\partial C} \right)}^{2}}Var(\widehat{C})+{{\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2\left( \frac{\partial {T}&#039;}{\partial B} \right)\left( \frac{\partial {T}&#039;}{\partial C} \right)Cov\left( \widehat{B},\widehat{C} \right) \\ &lt;br /&gt;
 &amp;amp;  +2\left( \frac{\partial {T}&#039;}{\partial B} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2\left( \frac{\partial {T}&#039;}{\partial C} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var({T}&#039;)= \frac{1}{{{V}^{2}}}Var(\widehat{B})+\frac{1}{{{C}^{2}}}Var(\widehat{C})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2}{V\cdot C}Cov\left( \widehat{B},\widehat{C} \right) +\frac{2\widehat{z}}{V}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +\frac{2\widehat{z}}{C}Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; T_{U}^{\prime }= &amp;amp; \ln {{T}_{U}}={T}&#039;+{{K}_{\alpha }}\sqrt{Var({T}&#039;)} \\ &lt;br /&gt;
 &amp;amp; T_{L}^{\prime }= &amp;amp; \ln {{T}_{L}}={T}&#039;-{{K}_{\alpha }}\sqrt{Var({T}&#039;)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for &amp;lt;math&amp;gt;{{T}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{T}_{L}}\,\!&amp;lt;/math&amp;gt; yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{T_{U}^{\prime }}}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{T_{L}^{\prime }}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Arrhenius_Relationship&amp;diff=64925</id>
		<title>Arrhenius Relationship</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Arrhenius_Relationship&amp;diff=64925"/>
		<updated>2017-02-08T21:01:54Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Conditional Reliability */ changed R(t,T,V) to R((t|T),V)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK|4}}&lt;br /&gt;
The Arrhenius life-stress model (or relationship) is probably the most common life-stress relationship utilized in accelerated life testing. It has been widely used when the stimulus or acceleration variable (or stress) is thermal (i.e., temperature). It is derived from the Arrhenius reaction rate equation proposed by the Swedish physical chemist Svante Arrhenius in 1887. &lt;br /&gt;
&lt;br /&gt;
===Formulation===&lt;br /&gt;
The Arrhenius reaction rate equation is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T)=A{{e}^{-\tfrac{{{E}_{a}}}{k\cdot T}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is the speed of reaction.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is an unknown nonthermal constant.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{E}_{a}}\,\!&amp;lt;/math&amp;gt; is the activation energy &amp;lt;math&amp;gt;(\text{eV})\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; is Boltzmann&#039;s constant &amp;lt;math&amp;gt;(8.617385\times {{10}^{-5}}\text{eV}{{\text{K}}^{-1}})\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is the absolute temperature &amp;lt;math&amp;gt;(\text{K})\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The activation energy is the energy that a molecule must have to participate in the reaction. In other words, the activation energy is a measure of the effect that temperature has on the reaction.&lt;br /&gt;
&lt;br /&gt;
The Arrhenius life-stress model is formulated by assuming that life is proportional to the inverse reaction rate of the process, thus the Arrhenius life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V)=C{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; represents a quantifiable life measure, such as mean life, characteristic life, median life, or &amp;lt;math&amp;gt;B(x)\,\!&amp;lt;/math&amp;gt; life, etc.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; represents the stress level (formulated for temperature and &#039;&#039;&#039;temperature values in absolute units, degrees Kelvin or degrees Rankine&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is one of the model parameters to be determined, &amp;lt;math&amp;gt;(C&amp;gt;0)\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is another model parameter to be determined.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.1.png|center|400px|Graphical look at the Arrhenius life-stress relationship (linear scale) for a different life characteristics, assuming a Weibull distribution.]]&lt;br /&gt;
&lt;br /&gt;
Since the Arrhenius is a physics-based model derived for temperature dependence, it is used for temperature accelerated tests. For the same reason, temperature values must be in absolute units (Kelvin or Rankine), even though the Arrhenius equation is unitless.&lt;br /&gt;
&lt;br /&gt;
===Life Stress Plots===&lt;br /&gt;
The Arrhenius relationship can be linearized and plotted on a Life vs. Stress plot, also called the Arrhenius plot. The relationship is linearized by taking the natural logarithm of both sides in the Arrhenius equation or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;ln(L(V))=ln(C)+\frac{B}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.2.png|center|400px|Arrhenius plot for Weibull life distribution.]]&lt;br /&gt;
&lt;br /&gt;
In the linearized Arrhenius equation, &amp;lt;math&amp;gt;\ln (C)\,\!&amp;lt;/math&amp;gt; is the intercept of the line and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the slope of the line. Note that the inverse of the stress, and not the stress, is the variable. In the above figure, life is plotted versus stress and not versus the inverse stress. This is because the linearized Arrhenius equation was plotted on a reciprocal scale. On such a scale, the slope &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; appears to be negative even though it has a positive value. This is because &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is actually the slope of the reciprocal of the stress and not the slope of the stress. The reciprocal of the stress is decreasing as stress is increasing ( &amp;lt;math&amp;gt;\tfrac{1}{V}\,\!&amp;lt;/math&amp;gt; is decreasing as &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; is increasing). The two different axes are shown in the next figure.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.3.png|center|400px|An illustration of both reciprocal and non-reciprocal scales.]]&lt;br /&gt;
&lt;br /&gt;
The Arrhenius relationship is plotted on a reciprocal scale for practical reasons. For example, in the above figure it is more convenient to locate the life corresponding to a stress level of 370K than to take the reciprocal of 370K (0.0027) first, and then locate the corresponding life.&lt;br /&gt;
The shaded areas shown in the above figure are the &#039;&#039;pdfs&#039;&#039; imposed at each test stress level. From such imposed &#039;&#039;pdfs&#039;&#039; one can see the range of the life at each test stress level, as well as the scatter in life. The next figure illustrates a case in which there is significant scatter in life at each of the test stress levels.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.4.png|center|400px|An example of scatter in life at each test stress level.]]&lt;br /&gt;
&lt;br /&gt;
===Activation Energy and the Parameter &#039;&#039;B&#039;&#039; ===&lt;br /&gt;
Depending on the application (and where the stress is exclusively thermal), the parameter &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; can be replaced by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;B=\frac{{{E}_{a}}}{k}=\frac{\text{activation energy}}{\text{Boltzman}{{\text{n}}^{\prime }}\text{s constant}}=\frac{\text{activation energy}}{8.617385\times {{10}^{-5}}\text{eV}{{\text{K}}^{-1}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that in this formulation, the activation energy &amp;lt;math&amp;gt;{{E}_{a}}\,\!&amp;lt;/math&amp;gt; must be known a priori. If the activation energy is known then there is only one model parameter remaining, &amp;lt;math&amp;gt;C.\,\!&amp;lt;/math&amp;gt; Because in most real life situations this is rarely the case, all subsequent formulations will assume that this activation energy is unknown and treat &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; as one of the model parameters. &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; has the same properties as the activation energy. In other words, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is a measure of the effect that the stress (i.e. temperature) has on the life. The larger the value of &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; the higher the dependency of the life on the specific stress. Parameter &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; may also take negative values. In that case, life is increasing with increasing stress. An example of this would be plasma filled bulbs, where low temperature is a higher stress on the bulbs than high temperature.&lt;br /&gt;
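&lt;br /&gt;
When the activation energy is assumed known, the conversion to B is a one-liner. The 0.7 eV value below is an assumed example for illustration, not a value from the text:&lt;br /&gt;
&lt;br /&gt;
```python
# If the activation energy E_a (in eV) is known a priori, B follows
# directly from B = E_a / k
K_BOLTZMANN = 8.617385e-5      # Boltzmann constant in eV per kelvin

def b_from_activation_energy(ea_ev):
    return ea_ev / K_BOLTZMANN

B = b_from_activation_energy(0.7)   # assumed E_a of 0.7 eV, roughly 8.1e3 K
```
&lt;br /&gt;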
&lt;br /&gt;
[[Image:ALTA6.5.png|center|400px|Behavior of the parameter &#039;&#039;B&#039;&#039;.]]&lt;br /&gt;
&lt;br /&gt;
===Acceleration Factor===&lt;br /&gt;
Most practitioners use the term acceleration factor to refer to the ratio of the life (or acceleration characteristic) between the use level and a higher test stress level or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{A}_{F}}=\frac{{{L}_{USE}}}{{{L}_{Accelerated}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Arrhenius model this factor is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{A}_{F}}=\frac{{{L}_{USE}}}{{{L}_{Accelerated}}}=\frac{C\text{ }{{e}^{\tfrac{B}{{{V}_{u}}}}}}{C\text{ }{{e}^{\tfrac{B}{{{V}_{A}}}}}}=\frac{\text{ }{{e}^{\tfrac{B}{{{V}_{u}}}}}}{\text{ }{{e}^{\tfrac{B}{{{V}_{A}}}}}}={{e}^{\left( \tfrac{B}{{{V}_{u}}}-\tfrac{B}{{{V}_{A}}} \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, if &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is assumed to be known a priori (using an activation energy), the assumed activation energy alone dictates this acceleration factor!&lt;br /&gt;
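&lt;br /&gt;
The acceleration-factor formula above reduces to a single exponential. The value of B and both temperatures below are assumed for illustration:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def arrhenius_acceleration_factor(B, V_use, V_accel):
    # A_F = exp(B/V_use - B/V_accel); temperatures in absolute units (K)
    return math.exp(B / V_use - B / V_accel)

# Assumed illustrative values: B = 4000, use at 323 K, test at 373 K
A_F = arrhenius_acceleration_factor(4000.0, 323.0, 373.0)
```
&lt;br /&gt;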
&lt;br /&gt;
=Arrhenius-Exponential=&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the 1-parameter exponential distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
f(t)=\lambda {{e}^{-\lambda t}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be easily shown that the failure rate of the 1-parameter exponential distribution (presented in detail [[Distributions used in Accelerated Testing#The Exponential Distribution|here]]) can be written in terms of the mean life, &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;, as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda =\frac{1}{m}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
thus:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\frac{1}{m}{{e}^{-\tfrac{t}{m}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Arrhenius-exponential model &#039;&#039;pdf&#039;&#039; can then be obtained by setting &amp;lt;math&amp;gt;m=L(V)\,\!&amp;lt;/math&amp;gt;, so that:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;m=L(V)=C{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting for &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; yields a &#039;&#039;pdf&#039;&#039; that is both a function of time and stress or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V)=\frac{1}{C{{e}^{\tfrac{B}{V}}}}\cdot {{e}^{-\tfrac{1}{C{{e}^{\tfrac{B}{V}}}}\cdot t}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Arrhenius-Exponential Statistical Properties Summary==&lt;br /&gt;
====Mean or MTTF====&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T},\,\!&amp;lt;/math&amp;gt; or Mean Time To Failure (MTTF) of the Arrhenius-exponential is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   \overline{T}=\int_{0}^{\infty }t\cdot f(t,V)dt=\int_{0}^{\infty }t\cdot \frac{1}{C{{e}^{\tfrac{B}{V}}}}{{e}^{-\tfrac{t}{C{{e}^{\tfrac{B}{V}}}}}}dt =\  C{{e}^{\tfrac{B}{V}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Median====&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; of the Arrhenius-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=0.693\cdot C{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Mode====&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; of the Arrhenius-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Standard Deviation====&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, of the Arrhenius-exponential model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=C{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
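&lt;br /&gt;
The four statistical properties above can be checked in a few lines; B, C and V are assumed illustrative values:&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Assumed illustrative model parameters and stress level
B, C, V = 4000.0, 0.5, 373.0

mean = C * math.exp(B / V)        # MTTF of the Arrhenius-exponential
median = math.log(2.0) * mean     # ln 2 is the 0.693 factor in the text
mode = 0.0                        # the exponential mode is always zero
std_dev = mean                    # for the exponential, sigma equals the mean
```
&lt;br /&gt;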
&lt;br /&gt;
====Arrhenius-Exponential Reliability Function====&lt;br /&gt;
The Arrhenius-exponential reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)={{e}^{-\tfrac{T}{C{{e}^{\tfrac{B}{V}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This function is the complement of the Arrhenius-exponential cumulative distribution function or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=1-Q(T,V)=1-\int_{0}^{T}f(T,V)dT\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=1-\int_{0}^{T}\frac{1}{C{{e}^{\tfrac{B}{V}}}}{{e}^{-\tfrac{T}{C{{e}^{\tfrac{B}{V}}}}}}dT={{e}^{-\tfrac{T}{C{{e}^{\tfrac{B}{V}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Conditional Reliability====&lt;br /&gt;
The Arrhenius-exponential conditional reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R((t|T),V)=\frac{R(T+t,V)}{R(T,V)}=\frac{{{e}^{-\lambda (T+t)}}}{{{e}^{-\lambda T}}}={{e}^{-\tfrac{t}{C{{e}^{\tfrac{B}{V}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
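&lt;br /&gt;
A quick numeric check (with assumed B, C and V) confirms the memoryless property shown above: the accumulated age T cancels out of the conditional reliability:&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Assumed illustrative parameters; m is the mean life at stress V
B, C, V = 4000.0, 0.5, 373.0
m = C * math.exp(B / V)

def R(t):
    return math.exp(-t / m)

T, t = 500.0, 200.0
conditional = R(T + t) / R(T)     # R((t|T),V)
# conditional equals R(t): the accumulated age T drops out (memorylessness)
```
&lt;br /&gt;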
&lt;br /&gt;
====Reliable Life====&lt;br /&gt;
For the Arrhenius-exponential model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({{t}_{R}},V)={{e}^{-\tfrac{{{t}_{R}}}{C{{e}^{\tfrac{B}{V}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\ln [R({{t}_{R}},V)]=-\frac{{{t}_{R}}}{C{{e}^{\tfrac{B}{V}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=-C{{e}^{\tfrac{B}{V}}}\ln [R({{t}_{R}},V)]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
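&lt;br /&gt;
The reliable-life solution above can be verified by round-tripping it through the reliability function, again using assumed B, C and V:&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Assumed illustrative parameters and reliability goal
B, C, V = 4000.0, 0.5, 373.0
R_goal = 0.90

t_R = -C * math.exp(B / V) * math.log(R_goal)     # reliable life
# Round trip: evaluating the reliability at t_R recovers the goal
R_check = math.exp(-t_R / (C * math.exp(B / V)))
```
&lt;br /&gt;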
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
====Maximum Likelihood Estimation Method====&lt;br /&gt;
The log-likelihood function for the exponential distribution is as shown next:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \ln (L)=\Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \lambda {{e}^{-\lambda {{T}_{i}}}} \right]-\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\lambda T_{i}^{\prime }+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-\lambda T_{Li}^{\prime \prime }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-\lambda T_{Ri}^{\prime \prime }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is the failure rate parameter (unknown).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
Substituting the Arrhenius-exponential model into the log-likelihood function yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \Lambda = \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{1}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}}{{e}^{-\tfrac{1}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}}{{T}_{i}}}} \right] -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{1}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}}T_{i}^{\prime }+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-\tfrac{T_{Li}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-\tfrac{T_{Ri}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for the parameters &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial C}=0\,\!&amp;lt;/math&amp;gt;, where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial B}= &amp;amp; \frac{1}{C}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left( \frac{{{T}_{i}}}{{{V}_{i}}{{e}^{\tfrac{B}{{{V}_{i}}}}}}-\frac{C}{{{V}_{i}}} \right)+\frac{1}{C}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{T_{i}^{\prime }}{{{V}_{i}}{{e}^{\tfrac{B}{{{V}_{i}}}}}} \overset{FI}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{T_{Li}^{\prime \prime }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime }R_{Ri}^{\prime \prime }}{(R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime })C{{V}_{i}}{{e}^{\tfrac{B}{{{V}_{i}}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial C}= &amp;amp; \frac{1}{C}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left( \frac{{{T}_{i}}}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}}-1 \right)+\frac{1}{{{C}^{2}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{T_{i}^{\prime }}{{{e}^{\tfrac{B}{{{V}_{i}}}}}} \overset{FI}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{T_{Li}^{\prime \prime }R_{Li}^{\prime \prime }-T_{Ri}^{\prime \prime }R_{Ri}^{\prime \prime }}{(R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }){{C}^{2}}{{e}^{\tfrac{B}{{{V}_{i}}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
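The exact-failure term of the partial derivative with respect to C above can be checked numerically against a finite difference of the log-likelihood. The following is a minimal sketch for complete (exact-failure) data only, one entry per failure, with made-up values for B, C, the failure times, and the stress levels:

```python
import math

def log_likelihood(B, C, times, stresses):
    """Arrhenius-exponential log-likelihood, exact failures only."""
    lam = 0.0
    for t, v in zip(times, stresses):
        eta = C * math.exp(B / v)          # mean life at stress v
        lam += math.log(1.0 / eta) - t / eta
    return lam

def dL_dC(B, C, times, stresses):
    """Closed-form exact-failure term of the partial derivative w.r.t. C."""
    total = 0.0
    for t, v in zip(times, stresses):
        eta = C * math.exp(B / v)
        total += t / eta - 1.0
    return total / C

# Hypothetical parameter values and failure data
B, C = 1500.0, 0.5
times = [60.0, 80.0, 100.0]
stresses = [300.0, 300.0, 320.0]

analytic = dL_dC(B, C, times, stresses)
h = 1e-6
numeric = (log_likelihood(B, C + h, times, stresses)
           - log_likelihood(B, C - h, times, stresses)) / (2 * h)
```

The central difference agrees with the closed-form expression to well below the discretization error, which is a quick way to catch sign mistakes when implementing score equations.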
&lt;br /&gt;
=Arrhenius-Weibull=&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Arrhenius Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; for the 2-parameter Weibull distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scale parameter (or characteristic life) of the Weibull distribution is &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The Arrhenius-Weibull model &#039;&#039;pdf&#039;&#039; can then be obtained by setting &amp;lt;math&amp;gt;\eta =L(V)\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\eta =L(V)=C\cdot {{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and substituting for &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; in the 2-parameter Weibull distribution equation:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V)=\frac{\beta }{C\cdot {{e}^{\tfrac{B}{V}}}}{{\left( \frac{t}{C\cdot {{e}^{\tfrac{B}{V}}}} \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{t}{C\cdot {{e}^{\tfrac{B}{V}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An illustration of the &#039;&#039;pdf&#039;&#039;  for different stresses is shown in the next figure.  As expected, the &#039;&#039;pdf&#039;&#039; at lower stress levels is more stretched to the right, with a higher scale parameter, while its shape remains the same (the shape parameter is approximately 3). This behavior is observed when the parameter &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; of the Arrhenius model is positive.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.6.png|center|400px|Behavior of the probability density function at different stresses and with the parameters held constant.]]&lt;br /&gt;
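The stretching shown in the figure can be reproduced numerically. In this sketch (all parameter values are illustrative, not from any data set), the characteristic life &amp;lt;math&amp;gt;\eta =C\cdot {{e}^{B/V}}\,\!&amp;lt;/math&amp;gt; grows as the stress V decreases, shifting the ''pdf'' to the right while &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; keeps its shape fixed:

```python
import math

def arrhenius_weibull_pdf(t, v, beta, B, C):
    """Arrhenius-Weibull pdf f(t, V) with eta = C * exp(B / V)."""
    eta = C * math.exp(B / v)
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

# Illustrative (made-up) parameters with B > 0
beta, B, C = 3.0, 2000.0, 0.8

eta_low = C * math.exp(B / 300.0)   # lower stress -> larger characteristic life
eta_high = C * math.exp(B / 350.0)  # higher stress -> smaller characteristic life
```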
&lt;br /&gt;
The advantage of using the Weibull distribution as the life distribution lies in its flexibility to assume different shapes. The Weibull distribution is presented in greater detail in [[The Weibull Distribution]].&lt;br /&gt;
&lt;br /&gt;
==Arrhenius-Weibull Statistical Properties Summary==&lt;br /&gt;
====Mean or MTTF====&lt;br /&gt;
The mean, &amp;lt;math&amp;gt;\overline{T}\,\!&amp;lt;/math&amp;gt; (also called &amp;lt;math&amp;gt;MTTF\,\!&amp;lt;/math&amp;gt; by some authors), of the Arrhenius-Weibull relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=C\cdot {{e}^{\tfrac{B}{V}}}\cdot \Gamma \left( \frac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Gamma \left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt; is the gamma function evaluated at the value of &amp;lt;math&amp;gt;\left( \tfrac{1}{\beta }+1 \right)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Median====&lt;br /&gt;
The median, &amp;lt;math&amp;gt;\breve{T},\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
for the Arrhenius-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=C\cdot {{e}^{\tfrac{B}{V}}}{{\left( \ln 2 \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Mode====&lt;br /&gt;
The mode, &amp;lt;math&amp;gt;\tilde{T},\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
for the Arrhenius-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\tilde{T}=C\cdot {{e}^{\tfrac{B}{V}}}{{\left( 1-\frac{1}{\beta } \right)}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Standard Deviation====&lt;br /&gt;
The standard deviation, &amp;lt;math&amp;gt;{{\sigma }_{T}},\,\!&amp;lt;/math&amp;gt; for the Arrhenius-Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{T}}=C\cdot {{e}^{\tfrac{B}{V}}}\cdot \sqrt{\Gamma \left( \frac{2}{\beta }+1 \right)-{{\left( \Gamma \left( \frac{1}{\beta }+1 \right) \right)}^{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
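All four properties above share the factor &amp;lt;math&amp;gt;\eta =C\cdot {{e}^{B/V}}\,\!&amp;lt;/math&amp;gt;; only the &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;-dependent multiplier differs. A minimal sketch with made-up parameter values (&amp;lt;math&amp;gt;\beta &amp;gt;1\,\!&amp;lt;/math&amp;gt; so the mode exists):

```python
import math

beta = 2.0                  # illustrative shape parameter
B, C, V = 2000.0, 0.8, 350.0
eta = C * math.exp(B / V)   # characteristic life at stress V

mean   = eta * math.gamma(1.0 / beta + 1.0)
median = eta * math.log(2.0) ** (1.0 / beta)
mode   = eta * (1.0 - 1.0 / beta) ** (1.0 / beta)
std    = eta * math.sqrt(math.gamma(2.0 / beta + 1.0)
                         - math.gamma(1.0 / beta + 1.0) ** 2)
```

For this right-skewed case the mode lies below the median, which lies below the mean.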
&lt;br /&gt;
====Arrhenius-Weibull Reliability Function====&lt;br /&gt;
The Arrhenius-Weibull reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)={{e}^{-{{\left( \tfrac{T}{C\cdot {{e}^{\tfrac{B}{V}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the parameter &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is positive, then the reliability increases as stress decreases.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.7.png|center|500px|Behavior of the reliability function at different stress and constant parameter values.]]&lt;br /&gt;
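The claim that reliability rises as stress falls (for positive B) can be checked directly from the reliability function. A minimal sketch with made-up parameter values:

```python
import math

def reliability(T, V, beta, B, C):
    """Arrhenius-Weibull reliability R(T, V)."""
    eta = C * math.exp(B / V)
    return math.exp(-((T / eta) ** beta))

beta, B, C = 1.5, 2000.0, 0.8   # illustrative values with B > 0

r_low_stress = reliability(100.0, 300.0, beta, B, C)
r_high_stress = reliability(100.0, 350.0, beta, B, C)
```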
&lt;br /&gt;
The behavior of the reliability function of the Weibull distribution for different values of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; was illustrated [[Distributions used in Accelerated Testing#The Weibull Distribution|here]]. In the case of the Arrhenius-Weibull model, however, the reliability is a function of stress also. A 3D plot such as the ones shown in the next figure is now needed to illustrate the effects of both the stress and &amp;lt;math&amp;gt;\beta .\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.8.png|center|800px|Reliability function for &amp;lt;math&amp;gt;\beta&amp;lt;1 \,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\beta=1 \,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\beta&amp;gt;1 \,\!&amp;lt;/math&amp;gt;.]]&lt;br /&gt;
&lt;br /&gt;
====Conditional Reliability Function====&lt;br /&gt;
The Arrhenius-Weibull conditional reliability function at a specified stress level is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,t,V)=\frac{R(T+t,V)}{R(T,V)}=\frac{{{e}^{-{{\left( \tfrac{T+t}{\eta } \right)}^{\beta }}}}}{{{e}^{-{{\left( \tfrac{T}{\eta } \right)}^{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,t,V)={{e}^{-\left[ {{\left( \tfrac{T+t}{C\cdot {{e}^{\tfrac{B}{V}}}} \right)}^{\beta }}-{{\left( \tfrac{T}{C\cdot {{e}^{\tfrac{B}{V}}}} \right)}^{\beta }} \right]}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
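One property worth noting: for &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt; the Weibull reduces to the exponential distribution, and the conditional reliability depends only on the additional mission time t, not on the accumulated age T. The sketch below (illustrative values) checks this memoryless special case against the general formula:

```python
import math

def cond_reliability(T, t, V, beta, B, C):
    """Conditional reliability R(T, t, V) = R(T + t, V) / R(T, V)."""
    eta = C * math.exp(B / V)
    return math.exp(-(((T + t) / eta) ** beta - (T / eta) ** beta))

B, C, V = 2000.0, 0.8, 350.0  # made-up values

# With beta = 1 (exponential case), age T drops out of the result
r_fresh = cond_reliability(0.0, 50.0, V, 1.0, B, C)
r_aged  = cond_reliability(500.0, 50.0, V, 1.0, B, C)
```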
&lt;br /&gt;
====Reliable Life====&lt;br /&gt;
For the Arrhenius-Weibull relationship, the reliable life, &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt;, of a unit for a specified reliability and starting the mission at age zero is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}=C\cdot {{e}^{\tfrac{B}{V}}}{{\left\{ -\ln \left[ R\left( {{t}_{R}},V \right) \right] \right\}}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the life for which the unit will function successfully with a reliability of &amp;lt;math&amp;gt;R({{t}_{R}})\,\!&amp;lt;/math&amp;gt;. If &amp;lt;math&amp;gt;R({{t}_{R}})=0.50\,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;{{t}_{R}}=\breve{T}\,\!&amp;lt;/math&amp;gt;, the median life, or the life by which half of the units will survive.&lt;br /&gt;
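Because the reliable life is just the inverted reliability function, substituting &amp;lt;math&amp;gt;{{t}_{R}}\,\!&amp;lt;/math&amp;gt; back into R(T, V) should recover the target reliability exactly. A minimal sketch with made-up parameter values:

```python
import math

def reliable_life(R_target, V, beta, B, C):
    """Mission time t_R meeting a reliability goal, starting at age zero."""
    eta = C * math.exp(B / V)
    return eta * (-math.log(R_target)) ** (1.0 / beta)

beta, B, C, V = 1.5, 2000.0, 0.8, 350.0
eta = C * math.exp(B / V)

t_R = reliable_life(0.90, V, beta, B, C)
R_check = math.exp(-((t_R / eta) ** beta))  # should recover 0.90
```

Setting the goal to 0.50 returns the median life, as stated above.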
&lt;br /&gt;
====Arrhenius-Weibull Failure Rate Function====&lt;br /&gt;
The Arrhenius-Weibull failure rate function, &amp;lt;math&amp;gt;\lambda (T)\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda \left( T,V \right)=\frac{f\left( T,V \right)}{R\left( T,V \right)}=\frac{\beta }{C\cdot {{e}^{\tfrac{B}{V}}}}{{\left( \frac{T}{C\cdot {{e}^{\tfrac{B}{V}}}} \right)}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA6.9.png|center|800px|Failure rate function for &amp;lt;math&amp;gt;\beta&amp;lt;1 \,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\beta=1 \,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\beta&amp;gt;1 \,\!&amp;lt;/math&amp;gt;.]]&lt;br /&gt;
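The failure-rate formula makes the role of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; explicit: for &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt; the rate is constant at &amp;lt;math&amp;gt;1/\eta \,\!&amp;lt;/math&amp;gt;, while for &amp;lt;math&amp;gt;\beta &amp;gt;1\,\!&amp;lt;/math&amp;gt; it increases with time. A minimal sketch with made-up parameter values:

```python
import math

def failure_rate(T, V, beta, B, C):
    """Arrhenius-Weibull failure rate lambda(T, V)."""
    eta = C * math.exp(B / V)
    return (beta / eta) * (T / eta) ** (beta - 1)

B, C, V = 2000.0, 0.8, 350.0   # illustrative values
eta = C * math.exp(B / V)

const_rate = failure_rate(100.0, V, 1.0, B, C)  # beta = 1: constant rate 1/eta
early = failure_rate(50.0, V, 2.0, B, C)        # beta = 2: rate grows with time
late = failure_rate(200.0, V, 2.0, B, C)
```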
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
====Maximum Likelihood Estimation Method====&lt;br /&gt;
The Arrhenius-Weibull log-likelihood function is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \Lambda = &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{\beta }{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}}{{\left( \frac{{{T}_{i}}}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{{{T}_{i}}}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}}} \right] \ -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( \frac{T_{i}^{\prime }}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Li}^{\prime \prime }={{e}^{-{{\left( \tfrac{T_{Li}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R_{Ri}^{\prime \prime }={{e}^{-{{\left( \tfrac{T_{Ri}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the Arrhenius parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is the second Arrhenius parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for &amp;lt;math&amp;gt;\widehat{\beta },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \beta }=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial C}=0\,\!&amp;lt;/math&amp;gt;, where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \frac{\partial \Lambda }{\partial \beta }=\ &amp;amp; \frac{1}{\beta }\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}+\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\ln \left( \frac{{{T}_{i}}}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right) -\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}{{\left( \frac{{{T}_{i}}}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}\ln \left( \frac{{{T}_{i}}}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right) -\underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }{{\left( \frac{T_{i}^{\prime }}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}\ln \left( \frac{T_{i}^{\prime }}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right) \\ &lt;br /&gt;
 &amp;amp; \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{{{\left( \tfrac{T_{Li}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}\ln \left( \tfrac{T_{Li}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)R_{Li}^{\prime \prime }-{{\left( \tfrac{T_{Ri}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}\ln \left( \tfrac{T_{Ri}^{\prime \prime }}{C{{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)R_{Ri}^{\prime \prime }}{R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial B}= -\beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\frac{1}{{{V}_{i}}}+\beta \underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}\frac{1}{{{V}_{i}}}{{\left( \frac{{{T}_{i}}}{\widehat{C}{{e}^{\tfrac{\widehat{B}}{{{V}_{i}}}}}} \right)}^{\beta }}+\beta \underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }\frac{1}{{{V}_{i}}}{{\left( \frac{T_{i}^{\prime }}{\widehat{C}{{e}^{\tfrac{\widehat{B}}{{{V}_{i}}}}}} \right)}^{\beta }} +\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\beta }{{{V}_{i}}}\frac{{{(T_{Li}^{\prime \prime })}^{\beta }}R_{Li}^{\prime \prime }-{{(T_{Ri}^{\prime \prime })}^{\beta }}R_{Ri}^{\prime \prime }}{{{\left( C{{e}^{\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}\left( R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime } \right)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial C}= -\frac{\beta }{C}\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}+\frac{\beta }{C}\underset{i=1}{\overset{{{F}_{e}}}{\mathop{\sum }}}\,{{N}_{i}}{{\left( \frac{{{T}_{i}}}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }}+\frac{\beta }{C}\underset{i=1}{\overset{S}{\mathop{\sum }}}\,N_{i}^{\prime }{{\left( \frac{T_{i}^{\prime }}{C\cdot {{e}^{\tfrac{B}{{{V}_{i}}}}}} \right)}^{\beta }} +\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\beta }{C}\frac{{{(T_{Li}^{\prime \prime })}^{\beta }}R_{Li}^{\prime \prime }-{{(T_{Ri}^{\prime \prime })}^{\beta }}R_{Ri}^{\prime \prime }}{{{\left( C{{e}^{\tfrac{B}{{{V}_{i}}}}} \right)}^{\beta }}\left( R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime } \right)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
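In practice these three score equations are solved numerically. The sketch below (assuming NumPy and SciPy are available; the data are simulated from made-up "true" parameters, and the optimizer choice is illustrative) minimizes the negative log-likelihood for complete data at two stress levels:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate complete (exact-failure) data from made-up "true" parameters
beta_true, B_true, C_true = 2.0, 2000.0, 0.1
stresses = np.repeat([300.0, 350.0], 50)
eta_true = C_true * np.exp(B_true / stresses)
times = eta_true * rng.weibull(beta_true, size=stresses.size)

def neg_log_likelihood(params):
    """Negative Arrhenius-Weibull log-likelihood, exact failures only."""
    beta, B, C = params
    if beta <= 0 or C <= 0:
        return np.inf
    eta = C * np.exp(B / stresses)
    z = times / eta
    return -np.sum(np.log(beta / eta) + (beta - 1) * np.log(z) - z ** beta)

res = minimize(neg_log_likelihood, x0=[1.0, 1500.0, 1.0], method="Nelder-Mead")
beta_hat, B_hat, C_hat = res.x
```

With suspensions and interval data, the corresponding terms from the log-likelihood above would simply be added to `neg_log_likelihood`.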
&lt;br /&gt;
==Arrhenius-Weibull example==&lt;br /&gt;
{{:Arrhenius_Example}}&lt;br /&gt;
&lt;br /&gt;
=Arrhenius-Lognormal=&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; of the lognormal distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\bar{{{T}&#039;}}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{T}&#039;=\ln(T)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T=\,\!&amp;lt;/math&amp;gt; times-to-failure.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\bar{T}}^{\prime }}=\,\!&amp;lt;/math&amp;gt; mean of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\,\!&amp;lt;/math&amp;gt; standard deviation of the natural logarithms of the times-to-failure.&lt;br /&gt;
&lt;br /&gt;
The median of the lognormal distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}={{e}^{{{\overline{T}}^{\prime }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Arrhenius-lognormal model &#039;&#039;pdf&#039;&#039; can be obtained first by setting &amp;lt;math&amp;gt;\breve{T}=L(V)\,\!&amp;lt;/math&amp;gt;. Therefore: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\breve{T}=L(V)=C{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{e}^{{{\overline{T}}^{\prime }}}}=C{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\overline{T}}^{\prime }}=\ln (C)+\frac{B}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting the above equation into the lognormal &#039;&#039;pdf&#039;&#039; yields the Arrhenius-lognormal model &#039;&#039;pdf&#039;&#039; or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(T,V)=\frac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (C)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that in the Arrhenius-lognormal &#039;&#039;pdf&#039;&#039;, it was assumed that the standard deviation of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}},\,\!&amp;lt;/math&amp;gt; is independent of stress. This assumption implies that the shape of the distribution does not change with stress ( &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; is the shape parameter of the lognormal distribution).&lt;br /&gt;
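The substitution above means the Arrhenius-lognormal ''pdf'' is an ordinary lognormal ''pdf'' whose mean-of-logs is &amp;lt;math&amp;gt;\ln (C)+B/V\,\!&amp;lt;/math&amp;gt;. A minimal sketch using only the standard library (all parameter values made up); at the median the exponent vanishes, which gives an easy spot check:

```python
import math

def arrhenius_lognormal_pdf(T, V, sigma, B, C):
    """Arrhenius-lognormal pdf with mean-of-logs ln(C) + B/V."""
    mu = math.log(C) + B / V
    z = (math.log(T) - mu) / sigma
    return math.exp(-0.5 * z * z) / (T * sigma * math.sqrt(2.0 * math.pi))

sigma, B, C, V = 0.5, 2000.0, 0.8, 350.0  # illustrative values
median = C * math.exp(B / V)              # breve{T} = e^{mu}

f_at_median = arrhenius_lognormal_pdf(median, V, sigma, B, C)
```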
&lt;br /&gt;
==Arrhenius-Lognormal Statistical Properties Summary==&lt;br /&gt;
====The Mean====&lt;br /&gt;
*The mean life of the Arrhenius-lognormal model (mean of the times-to-failure), &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \bar{T}= &amp;amp; {{e}^{\bar{{T}&#039;}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}} =\ &amp;amp; {{e}^{\ln (C)+\tfrac{B}{V}+\tfrac{1}{2}\sigma _{{{T}&#039;}}^{2}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The mean of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\bar{T}}^{^{\prime }}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{T}}^{\prime }}=\ln \left( {\bar{T}} \right)-\frac{1}{2}\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====The Standard Deviation====&lt;br /&gt;
*The standard deviation of the Arrhenius-lognormal model (standard deviation of the times-to-failure), &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt;, is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\sigma }_{T}}= &amp;amp; \sqrt{\left( {{e}^{2\bar{{T}&#039;}+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)} =\ &amp;amp; \sqrt{\left( {{e}^{2\left( \ln (C)+\tfrac{B}{V} \right)+\sigma _{{{T}&#039;}}^{2}}} \right)\left( {{e}^{\sigma _{{{T}&#039;}}^{2}}}-1 \right)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The standard deviation of the natural logarithms of the times-to-failure, &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, in terms of &amp;lt;math&amp;gt;\bar{T}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{T}}\,\!&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}=\sqrt{\ln \left( \frac{\sigma _{T}^{2}}{{{{\bar{T}}}^{2}}}+1 \right)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
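The two conversion formulas above invert the linear-space mean and standard deviation expressions, so they are easy to verify as a round trip. A minimal sketch with made-up parameter values:

```python
import math

sigma_log = 0.4                            # sigma_{T'}, illustrative
mu_log = math.log(0.8) + 2000.0 / 350.0    # ln(C) + B/V with made-up B, C, V

# Linear-space mean and standard deviation of the times-to-failure
mean_T = math.exp(mu_log + 0.5 * sigma_log ** 2)
std_T = math.sqrt(math.exp(2 * mu_log + sigma_log ** 2)
                  * (math.exp(sigma_log ** 2) - 1.0))

# Convert back to log-space using the formulas above
mu_back = math.log(mean_T) - 0.5 * math.log(std_T ** 2 / mean_T ** 2 + 1.0)
sigma_back = math.sqrt(math.log(std_T ** 2 / mean_T ** 2 + 1.0))
```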
&lt;br /&gt;
====The Mode====&lt;br /&gt;
*The mode of the Arrhenius-lognormal model is given by: &lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
	  &amp;amp; \tilde{T}=\ {{e}^{{{\overline{T}}^{\prime }}-\sigma _{{{T}&#039;}}^{2}}} =\  {{e}^{\ln (C)+\tfrac{B}{V}-\sigma _{{{T}&#039;}}^{2}}}  &lt;br /&gt;
	\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Arrhenius-Lognormal Reliability Function====&lt;br /&gt;
The reliability for a mission of time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, starting at age 0, for the Arrhenius-lognormal model is determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=\int_{T}^{\infty }f(t,V)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,V)=\int_{{{T}^{^{\prime }}}}^{\infty }\frac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (C)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no closed form solution for the lognormal reliability function. Solutions can be obtained via the use of standard normal tables. Since the application automatically solves for the reliability, we will not discuss manual solution methods.&lt;br /&gt;
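Although there is no closed form, R(T, V) is simply the standard normal survival function evaluated at &amp;lt;math&amp;gt;z=(\ln T-\ln C-B/V)/{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, so no tables are needed when a normal cdf is available. A minimal sketch using Python's standard library (illustrative parameter values):

```python
import math
from statistics import NormalDist

def reliability(T, V, sigma, B, C):
    """Arrhenius-lognormal reliability via the standard normal cdf."""
    z = (math.log(T) - math.log(C) - B / V) / sigma
    return 1.0 - NormalDist().cdf(z)

sigma, B, C, V = 0.5, 2000.0, 0.8, 350.0  # made-up values
median = C * math.exp(B / V)

r_median = reliability(median, V, sigma, B, C)  # exactly half survive the median
```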
&lt;br /&gt;
====Reliable Life====&lt;br /&gt;
For the Arrhenius-lognormal model, the reliable life, or the mission duration for a desired reliability goal, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;T_{R}^{\prime }=\ln (C)+\frac{B}{V}+z\cdot {{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z={{\Phi }^{-1}}\left[ F\left( T_{R}^{\prime },V \right) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;,V)}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;{T}&#039;=\ln (T)\,\!&amp;lt;/math&amp;gt; the reliable life, &amp;lt;math&amp;gt;{{t}_{R}},\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{t}_{R}}={{e}^{T_{R}^{\prime }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
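The two steps above combine into a single computation: &amp;lt;math&amp;gt;z={{\Phi }^{-1}}(1-R)\,\!&amp;lt;/math&amp;gt; for the reliability goal R, then exponentiate. Python's standard library supplies the inverse normal cdf, so a minimal sketch (illustrative parameter values) is:

```python
import math
from statistics import NormalDist

def reliable_life(R_goal, V, sigma, B, C):
    """t_R for a reliability goal under the Arrhenius-lognormal model."""
    z = NormalDist().inv_cdf(1.0 - R_goal)   # z = Phi^{-1}(F), F = 1 - R
    return math.exp(math.log(C) + B / V + z * sigma)

sigma, B, C, V = 0.5, 2000.0, 0.8, 350.0  # made-up values

t_90 = reliable_life(0.90, V, sigma, B, C)
# Plugging t_90 back into the reliability function should recover 0.90
z_back = (math.log(t_90) - math.log(C) - B / V) / sigma
R_back = 1.0 - NormalDist().cdf(z_back)
```

A goal of 0.50 returns the median life &amp;lt;math&amp;gt;C{{e}^{B/V}}\,\!&amp;lt;/math&amp;gt;, since &amp;lt;math&amp;gt;{{\Phi }^{-1}}(0.5)=0\,\!&amp;lt;/math&amp;gt;.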
&lt;br /&gt;
====Arrhenius-Lognormal Failure Rate====&lt;br /&gt;
The Arrhenius-lognormal failure rate is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (T,V)=\frac{f(T,V)}{R(T,V)}=\frac{\tfrac{1}{T\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (C)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}}{\int_{{{T}&#039;}}^{\infty }\tfrac{1}{{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-\ln (C)-\tfrac{B}{V}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}dt}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Parameter Estimation==&lt;br /&gt;
====Maximum Likelihood Estimation Method====&lt;br /&gt;
&lt;br /&gt;
The lognormal log-likelihood function for the Arrhenius-lognormal model is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \frac{1}{{{\sigma }_{{{T}&#039;}}}{{T}_{i}}}\phi \left( \frac{\ln \left( {{T}_{i}} \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right] \text{ }+\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\ln \left[ 1-\Phi \left( \frac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right) \right] +\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Li}^{\prime \prime }=\frac{\ln T_{Li}^{\prime \prime }-\ln C-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z_{Ri}^{\prime \prime }=\frac{\ln T_{Ri}^{\prime \prime }-\ln C-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure data points in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\sigma}_{{T}&#039;}}\,\!&amp;lt;/math&amp;gt; is the standard deviation of the natural logarithm of the times-to-failure (unknown, the first of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the Arrhenius parameter (unknown, the second of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is the second Arrhenius parameter (unknown, the third of three parameters to be estimated).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{V}_{i}}\,\!&amp;lt;/math&amp;gt; is the stress level of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
The solution (parameter estimates) will be found by solving for &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}=0,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial B}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial C}=0\,\!&amp;lt;/math&amp;gt;, where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial B}= \frac{1}{\sigma _{{{T}&#039;}}^{2}}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\frac{1}{{{V}_{i}}}(\ln ({{T}_{i}})-\ln (C)-\frac{B}{{{V}_{i}}}) +\frac{1}{{{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{1}{{{V}_{i}}}\frac{\phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)} \overset{FI}{\mathop{\underset{i=1}{\mathop{-\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\frac{\varphi (z_{Ri}^{\prime \prime })-\varphi (z_{Li}^{\prime \prime })}{\sigma _{T}^{\prime }{{V}_{i}}(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \frac{\partial \Lambda }{\partial C}= \frac{1}{C\cdot \sigma _{{{T}&#039;}}^{2}}\underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}(\ln ({{T}_{i}})-\ln (C)-\frac{B}{{{V}_{i}}}) +\frac{1}{C\cdot {{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{\phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)} -\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\frac{\varphi (z_{Ri}^{\prime \prime })-\varphi (z_{Li}^{\prime \prime })}{{{\sigma }_{{{T}&#039;}}}C(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp;  \\ &lt;br /&gt;
 &amp;amp; \frac{\partial \Lambda }{\partial {{\sigma }_{{{T}&#039;}}}}= \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\left( \frac{{{\left( \ln ({{T}_{i}})-\ln (C)-\tfrac{B}{{{V}_{i}}} \right)}^{2}}}{\sigma _{{{T}&#039;}}^{3}}-\frac{1}{{{\sigma }_{{{T}&#039;}}}} \right) +\frac{1}{{{\sigma }_{{{T}&#039;}}}}\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }\frac{\left( \tfrac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)\phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)}{1-\Phi \left( \tfrac{\ln \left( T_{i}^{\prime } \right)-\ln (C)-\tfrac{B}{{{V}_{i}}}}{{{\sigma }_{{{T}&#039;}}}} \right)} -\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\frac{z_{Ri}^{\prime \prime }\varphi (z_{Ri}^{\prime \prime })-z_{Li}^{\prime \prime }\varphi (z_{Li}^{\prime \prime })}{{{\sigma }_{{{T}&#039;}}}(\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime }))}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\phi \left( x \right)=\frac{1}{\sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( x \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Arrhenius Confidence Bounds=&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Arrhenius_Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
==Approximate Confidence Bounds for the Arrhenius-Exponential==&lt;br /&gt;
There are different methods for computing confidence bounds. ALTA utilizes confidence bounds that are based on the asymptotic theory for maximum likelihood estimates, most commonly referred to as the Fisher matrix bounds.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
====Confidence Bounds on the Mean Life====&lt;br /&gt;
&lt;br /&gt;
The Arrhenius-exponential distribution is given by setting &amp;lt;math&amp;gt;m=L(V)\,\!&amp;lt;/math&amp;gt; in the exponential &#039;&#039;pdf&#039;&#039; equation. The upper &amp;lt;math&amp;gt;({{m}_{U}})\,\!&amp;lt;/math&amp;gt; and lower &amp;lt;math&amp;gt;({{m}_{L}})\,\!&amp;lt;/math&amp;gt; bounds on the mean life are then estimated by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 &amp;amp; {{m}_{U}}= \widehat{m}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}} \\ &lt;br /&gt;
 &amp;amp; {{m}_{L}}= \widehat{m}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{K}_{\alpha }}\,\!&amp;lt;/math&amp;gt; is defined by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
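Numerically, K_alpha is just an upper-tail quantile of the standard normal distribution. A minimal sketch in Python (scipy assumed; the helper name is illustrative):

```python
from scipy.stats import norm

def k_alpha(delta, two_sided=True):
    """Return K_alpha such that alpha = 1 - Phi(K_alpha), where
    alpha = (1 - delta)/2 for two-sided bounds and alpha = 1 - delta
    for one-sided bounds (delta is the confidence level)."""
    alpha = (1 - delta) / 2 if two_sided else 1 - delta
    return norm.ppf(1 - alpha)
```

This reproduces the familiar values: about 1.960 for two-sided 95% bounds and about 1.645 for one-sided 95% bounds.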
If &amp;lt;math&amp;gt;\delta \,\!&amp;lt;/math&amp;gt; is the confidence level (i.e., 95%=0.95), then &amp;lt;math&amp;gt;\alpha =\tfrac{1-\delta }{2}\,\!&amp;lt;/math&amp;gt; for the two-sided bounds, and &amp;lt;math&amp;gt;\alpha =1-\delta \,\!&amp;lt;/math&amp;gt; for the one-sided bounds. The variance of &amp;lt;math&amp;gt;\widehat{m}\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{m})= &amp;amp; {{\left( \frac{\partial m}{\partial C} \right)}^{2}}Var(\widehat{C})+{{\left( \frac{\partial m}{\partial B} \right)}^{2}}Var(\widehat{B}) +2\left( \frac{\partial m}{\partial C} \right)\left( \frac{\partial m}{\partial B} \right)Cov(\widehat{B},\widehat{C})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Var(\widehat{m})={{e}^{\tfrac{2\widehat{B}}{V}}}\left[ Var(\widehat{C})+\frac{{{\widehat{C}}^{2}}}{{{V}^{2}}}Var(\widehat{B})+\frac{2\widehat{C}}{V}Cov(\widehat{B},\widehat{C}) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariance of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{B}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{C})\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{B}) &amp;amp; Cov(\widehat{B},\widehat{C})  \\&lt;br /&gt;
   Cov(\widehat{C},\widehat{B}) &amp;amp; Var(\widehat{C})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial C}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{C}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
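The full chain from the local Fisher matrix to the mean-life bounds can be sketched numerically. Every number below (the matrix entries and the estimates for B, C and the stress V) is hypothetical, chosen only so that the matrix is invertible; numpy and scipy are assumed:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical estimates and a made-up positive-definite local
# Fisher matrix for (B, C) -- not taken from any real data set.
B_hat, C_hat, V = 4000.0, 2.5, 323.0
fisher = np.array([[2.0e-5, 1.5e-3],    # [-d2L/dB2,  -d2L/dBdC]
                   [1.5e-3, 5.0e-1]])   # [-d2L/dCdB, -d2L/dC2 ]
cov = np.linalg.inv(fisher)             # variance-covariance matrix
var_B, var_C, cov_BC = cov[0, 0], cov[1, 1], cov[0, 1]

# Delta-method variance of m_hat = C_hat * exp(B_hat / V)
m_hat = C_hat * np.exp(B_hat / V)
var_m = np.exp(2 * B_hat / V) * (var_C + (C_hat / V) ** 2 * var_B
                                 + 2 * (C_hat / V) * cov_BC)

# Two-sided 95% bounds on the mean life
K = norm.ppf(0.975)
m_U = m_hat * np.exp(K * np.sqrt(var_m) / m_hat)
m_L = m_hat * np.exp(-K * np.sqrt(var_m) / m_hat)
```

The logarithmic form of the bounds keeps both m_U and m_L positive, which is why the bounds are placed on ln(m) rather than on m directly.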
&lt;br /&gt;
====Confidence Bounds on Reliability====&lt;br /&gt;
The bounds on reliability for any given time, &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, are estimated by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}(T)= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{U}}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}(T)= &amp;amp; {{e}^{-\tfrac{T}{{{m}_{L}}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{m}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{m}_{L}}\,\!&amp;lt;/math&amp;gt; are estimated by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 &amp;amp; {{m}_{U}}= \widehat{m}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}} \\ &lt;br /&gt;
 &amp;amp; {{m}_{L}}= \widehat{m}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Confidence Bounds on Time====&lt;br /&gt;
The bounds on time (ML estimate of time) for a given reliability are estimated by first solving the reliability function with respect to time:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{T}=-\widehat{m}\cdot \ln (R)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding confidence bounds are then estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= -{{m}_{U}}\cdot \ln (R) \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= -{{m}_{L}}\cdot \ln (R)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{m}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{m}_{L}}\,\!&amp;lt;/math&amp;gt; are estimated by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 &amp;amp; {{m}_{U}}= \widehat{m}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}} \\ &lt;br /&gt;
 &amp;amp; {{m}_{L}}= \widehat{m}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{m})}}{\widehat{m}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Approximate Confidence Bounds for the Arrhenius-Weibull==&lt;br /&gt;
====Bounds on the Parameters====&lt;br /&gt;
From the asymptotically normal property of the maximum likelihood estimators, and since &amp;lt;math&amp;gt;\widehat{\beta }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; are positive parameters, &amp;lt;math&amp;gt;\ln (\widehat{\beta })\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\ln (\widehat{C})\,\!&amp;lt;/math&amp;gt; can be treated as normally distributed. After performing this transformation, the bounds on the parameters can be estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\beta }_{U}}= \widehat{\beta }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{\beta }_{L}}= \widehat{\beta }\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
also:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{B}_{U}}= \widehat{B}+{{K}_{\alpha }}\sqrt{Var(\widehat{B})} \\ &lt;br /&gt;
 &amp;amp; {{B}_{L}}= \widehat{B}-{{K}_{\alpha }}\sqrt{Var(\widehat{B})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{C}_{U}}= \widehat{C}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}} \\ &lt;br /&gt;
 &amp;amp; {{C}_{L}}= \widehat{C}\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{\beta },\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C})\,\!&amp;lt;/math&amp;gt;, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var(\widehat{\beta }) &amp;amp; Cov(\widehat{\beta },\widehat{B}) &amp;amp; Cov(\widehat{\beta },\widehat{C})  \\&lt;br /&gt;
   Cov(\widehat{B},\widehat{\beta }) &amp;amp; Var(\widehat{B}) &amp;amp; Cov(\widehat{B},\widehat{C})  \\&lt;br /&gt;
   Cov(\widehat{C},\widehat{\beta }) &amp;amp; Cov(\widehat{C},\widehat{B}) &amp;amp; Var(\widehat{C})  \\&lt;br /&gt;
\end{matrix} \right]={{\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\beta }^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial C}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial C}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial \beta } &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{C}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Confidence Bounds on Reliability====&lt;br /&gt;
The reliability function for the Arrhenius-Weibull model (ML estimate) is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{\left( \tfrac{T}{\widehat{C}\cdot {{e}^{\tfrac{\widehat{B}}{V}}}} \right)}^{\widehat{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{e}^{\ln \left[ {{\left( \tfrac{T}{\widehat{C}\cdot {{e}^{\tfrac{\widehat{B}}{V}}}} \right)}^{\widehat{\beta }}} \right]}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Setting: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\ln \left[ {{\left( \frac{T}{\widehat{C}\cdot {{e}^{\tfrac{\widehat{B}}{V}}}} \right)}^{\widehat{\beta }}} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\widehat{\beta }\left[ \ln (T)-\ln (\widehat{C})-\frac{\widehat{B}}{V} \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function now becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{R}(T,V)={{e}^{-{{e}^{\widehat{u}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to find the upper and lower bounds on &amp;lt;math&amp;gt;\widehat{u}\ \ :\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{u})= {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial B} \right)}^{2}}Var(\widehat{B})+{{\left( \frac{\partial \widehat{u}}{\partial C} \right)}^{2}}Var(\widehat{C}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{\beta },\widehat{B})+2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{\beta },\widehat{C}) +2\left( \frac{\partial \widehat{u}}{\partial B} \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{B},\widehat{C})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{u})= {{\left( \frac{\widehat{u}}{\widehat{\beta }} \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\widehat{\beta }}{V} \right)}^{2}}Var(\widehat{B})+{{\left( \frac{\widehat{\beta }}{\widehat{C}} \right)}^{2}}Var(\widehat{C}) -\frac{2\widehat{u}}{V}Cov(\widehat{\beta },\widehat{B})-\frac{2\widehat{u}}{\widehat{C}}Cov(\widehat{\beta },\widehat{C})+\frac{2{{\widehat{\beta }}^{2}}}{V\widehat{C}}Cov(\widehat{B},\widehat{C})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}(T,V)= {{e}^{-{{e}^{\left( {{u}_{L}} \right)}}}} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}(T,V)= {{e}^{-{{e}^{\left( {{u}_{U}} \right)}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
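As a worked sketch of these reliability bounds, with purely hypothetical estimates and variance/covariance terms (illustrative numbers only, not from any data set):

```python
import math

# Hypothetical ML estimates and variance/covariance terms for the
# Arrhenius-Weibull model -- illustrative values only.
beta_hat, B_hat, C_hat = 1.8, 4000.0, 2.5
V, T = 323.0, 1000.0
var_beta, var_B, var_C = 0.04, 6.0e4, 2.5
cov_bB, cov_bC, cov_BC = -10.0, 0.1, -190.0

u_hat = beta_hat * (math.log(T) - math.log(C_hat) - B_hat / V)
var_u = ((u_hat / beta_hat) ** 2 * var_beta
         + (beta_hat / V) ** 2 * var_B
         + (beta_hat / C_hat) ** 2 * var_C
         - 2 * u_hat / V * cov_bB
         - 2 * u_hat / C_hat * cov_bC
         + 2 * beta_hat ** 2 / (V * C_hat) * cov_BC)

K = 1.959964                      # two-sided 95%
u_U = u_hat + K * math.sqrt(var_u)
u_L = u_hat - K * math.sqrt(var_u)
R_U = math.exp(-math.exp(u_L))    # lower bound on u gives the upper R
R_L = math.exp(-math.exp(u_U))    # upper bound on u gives the lower R
```

Note the inversion in the last two lines: because reliability decreases in u, the lower bound on u yields the upper bound on reliability, and vice versa.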
&lt;br /&gt;
====Confidence Bounds on Time====&lt;br /&gt;
The bounds on time for a given reliability are estimated by first solving the reliability function with respect to time:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   \ln (R)&amp;amp;=  -{{\left( \frac{\widehat{T}}{\widehat{C}\cdot {{e}^{\tfrac{\widehat{B}}{V}}}} \right)}^{\widehat{\beta }}} \\ &lt;br /&gt;
  \ln (-\ln (R))&amp;amp;=  \widehat{\beta }\left( \ln \widehat{T}-\ln \widehat{C}-\frac{\widehat{B}}{V} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{u}=\frac{1}{\widehat{\beta }}\ln (-\ln (R))+\ln \widehat{C}+\frac{\widehat{B}}{V}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\widehat{u}=\ln \widehat{T}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; are estimated from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{U}}=\widehat{u}+{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{u}_{L}}=\widehat{u}-{{K}_{\alpha }}\sqrt{Var(\widehat{u})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{u})= &amp;amp; {{\left( \frac{\partial \widehat{u}}{\partial \beta } \right)}^{2}}Var(\widehat{\beta })+{{\left( \frac{\partial \widehat{u}}{\partial B} \right)}^{2}}Var(\widehat{B})+{{\left( \frac{\partial \widehat{u}}{\partial C} \right)}^{2}}Var(\widehat{C}) +2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial B} \right)Cov(\widehat{\beta },\widehat{B})+2\left( \frac{\partial \widehat{u}}{\partial \beta } \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{\beta },\widehat{C}) +2\left( \frac{\partial \widehat{u}}{\partial B} \right)\left( \frac{\partial \widehat{u}}{\partial C} \right)Cov(\widehat{B},\widehat{C})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{u})= &amp;amp; \frac{1}{{{\widehat{\beta }}^{4}}}{{\left[ \ln (-\ln (R)) \right]}^{2}}Var(\widehat{\beta })+\frac{1}{{{V}^{2}}}Var(\widehat{B})+\frac{1}{{{\widehat{C}}^{2}}}Var(\widehat{C})-\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}V}Cov(\widehat{\beta },\widehat{B})-\frac{2\ln (-\ln (R))}{{{\widehat{\beta }}^{2}}\widehat{C}}Cov(\widehat{\beta },\widehat{C}) +\frac{2}{V\widehat{C}}Cov(\widehat{B},\widehat{C})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on time can then be found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{{{u}_{U}}}} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{{{u}_{L}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Approximate Confidence Bounds for the Arrhenius-Lognormal==&lt;br /&gt;
====Bounds on the Parameters====&lt;br /&gt;
The lower and upper bounds on &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; are estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{B}_{U}}= \widehat{B}+{{K}_{\alpha }}\sqrt{Var(\widehat{B})}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{B}_{L}}= \widehat{B}-{{K}_{\alpha }}\sqrt{Var(\widehat{B})}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since the standard deviation, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;, and the parameter &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt; are positive, &amp;lt;math&amp;gt;\ln ({{\widehat{\sigma }}_{{{T}&#039;}}})\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\ln (\widehat{C})\,\!&amp;lt;/math&amp;gt; are treated as normally distributed. The bounds are estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{C}_{U}}= \widehat{C}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{C}_{L}}= \frac{\widehat{C}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{C})}}{\widehat{C}}}}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\sigma }_{U}}= {{\widehat{\sigma }}_{{{T}&#039;}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{\sigma }_{L}}= \frac{{{\widehat{\sigma }}_{{{T}&#039;}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma }}_{{{T}&#039;}}})}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}}}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The variances and covariances of &amp;lt;math&amp;gt;B,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;C,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt; are estimated from the local Fisher matrix (evaluated at &amp;lt;math&amp;gt;\widehat{B},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{C}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\widehat{\sigma }}_{{{T}&#039;}}}),\,\!&amp;lt;/math&amp;gt; as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\left[ \begin{matrix}&lt;br /&gt;
   Var\left( {{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) &amp;amp; Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{B} \right) &amp;amp; Var\left( \widehat{B} \right) &amp;amp; Cov\left( \widehat{B},\widehat{C} \right)  \\&lt;br /&gt;
   Cov\left( {{\widehat{\sigma }}_{{{T}&#039;}}},\widehat{C} \right) &amp;amp; Cov\left( \widehat{C},\widehat{B} \right) &amp;amp; Var\left( \widehat{C} \right)  \\&lt;br /&gt;
\end{matrix} \right]= {{\left[ \begin{matrix}&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma _{{{T}&#039;}}^{2}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\sigma }_{{{T}&#039;}}}\partial C}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial C}  \\&lt;br /&gt;
   -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial {{\sigma }_{{{T}&#039;}}}} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial C\partial B} &amp;amp; -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{C}^{2}}}  \\&lt;br /&gt;
\end{matrix} \right]}^{-1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Bounds on Reliability====&lt;br /&gt;
The reliability of the lognormal distribution is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({T}&#039;,V;B,C,{{\sigma }_{{{T}&#039;}}})=\int_{{{T}&#039;}}^{\infty }\frac{1}{{{\widehat{\sigma }}_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\ln (\widehat{C})-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}} \right)}^{2}}}}dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\widehat{z}(t,V;B,C,{{\sigma }_{T}})=\tfrac{t-\ln (\widehat{C})-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}},\,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;\frac{d \widehat{z}}{dt}=\frac{1}{{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;t={T}&#039;\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\widehat{z}=\tfrac{{T}&#039;-\ln (\widehat{C})-\tfrac{\widehat{B}}{V}}{{{\widehat{\sigma }}_{{{T}&#039;}}}}\,\!&amp;lt;/math&amp;gt;, and for &amp;lt;math&amp;gt;t=\infty ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\widehat{z}=\infty .\,\!&amp;lt;/math&amp;gt; The above equation then becomes: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(\widehat{z})=\int_{\widehat{z}({T}&#039;)}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The bounds on &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; are estimated from: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{z}_{U}}= &amp;amp; \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\ &lt;br /&gt;
 &amp;amp; {{z}_{L}}= &amp;amp; \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Var(\widehat{z})=&amp;amp; \left( \frac{\partial \widehat{z}}{\partial B} \right)_{\widehat{B}}^{2}Var(\widehat{B})+\left( \frac{\partial \widehat{z}}{\partial C} \right)_{\widehat{C}}^{2}Var(\widehat{C})+\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)_{{{\widehat{\sigma }}_{{{T}&#039;}}}}^{2}Var({{\widehat{\sigma }}_{T}}) +2{{\left( \frac{\partial \widehat{z}}{\partial B} \right)}_{\widehat{B}}}{{\left( \frac{\partial \widehat{z}}{\partial C} \right)}_{\widehat{C}}}Cov\left( \widehat{B},\widehat{C} \right) \\ &lt;br /&gt;
 &amp;amp;  +2{{\left( \frac{\partial \widehat{z}}{\partial B} \right)}_{\widehat{B}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{B},{{\widehat{\sigma }}_{T}} \right) +2{{\left( \frac{\partial \widehat{z}}{\partial C} \right)}_{\widehat{C}}}{{\left( \frac{\partial \widehat{z}}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}_{{{\widehat{\sigma }}_{{{T}&#039;}}}}}Cov\left( \widehat{C},{{\widehat{\sigma }}_{T}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var(\widehat{z})= &amp;amp; \frac{1}{\widehat{\sigma }_{{{T}&#039;}}^{2}}[\frac{1}{{{V}^{2}}}Var(\widehat{B})+\frac{1}{{{C}^{2}}}Var(\widehat{C})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2}{C\cdot V}Cov\left( \widehat{B},\widehat{C} \right)+\frac{2\widehat{z}}{V}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)+\frac{2\widehat{z}}{C}Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds on reliability are: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{U}}= &amp;amp; \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{R}_{L}}= &amp;amp; \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
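A numeric sketch of these lognormal reliability bounds, again with purely hypothetical estimates and variance/covariance terms (scipy assumed):

```python
import math
from scipy.stats import norm

# Hypothetical estimates for the Arrhenius-lognormal model --
# illustrative values only, not from any real data set.
B_hat, C_hat, s_hat = 4000.0, 2.5, 0.6   # s_hat stands for sigma_T'
V, Tp = 323.0, 13.0                      # Tp = ln(T), log mission time
var_B, var_C, var_s = 6.0e4, 2.5, 0.01
cov_BC, cov_Bs, cov_Cs = -190.0, 1.0, -0.05

z_hat = (Tp - math.log(C_hat) - B_hat / V) / s_hat
var_z = (1 / s_hat ** 2) * (var_B / V ** 2 + var_C / C_hat ** 2
         + z_hat ** 2 * var_s + 2 / (C_hat * V) * cov_BC
         + 2 * z_hat / V * cov_Bs + 2 * z_hat / C_hat * cov_Cs)

K = norm.ppf(0.975)                # two-sided 95%
z_U = z_hat + K * math.sqrt(var_z)
z_L = z_hat - K * math.sqrt(var_z)
R_U = norm.sf(z_L)                 # lower z bound -> upper reliability
R_L = norm.sf(z_U)                 # upper z bound -> lower reliability
```

As with the Weibull case, the bounds on z flip when mapped to reliability, because the standard normal survival function is decreasing.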
&lt;br /&gt;
====Confidence Bounds on Time====&lt;br /&gt;
The bounds around time, for a given lognormal percentile (unreliability), are estimated by first solving the reliability equation with respect to time, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{T}&#039;(V;\widehat{B},\widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}})=\ln (\widehat{C})+\frac{\widehat{B}}{V}+z\cdot {{\widehat{\sigma }}_{{{T}&#039;}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  {T}&#039;(V;\widehat{B},\widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}})=&amp;amp;\ \ln (T) \\ &lt;br /&gt;
  z= &amp;amp; \ {{\Phi }^{-1}}\left[ F({T}&#039;) \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({T}&#039;)}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next step is to calculate the variance of &amp;lt;math&amp;gt;{T}&#039;(V;\widehat{B},\widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}}):\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  Var({T}&#039;)= &amp;amp; {{\left( \frac{\partial {T}&#039;}{\partial B} \right)}^{2}}Var(\widehat{B})+{{\left( \frac{\partial {T}&#039;}{\partial C} \right)}^{2}}Var(\widehat{C})+{{\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +2\left( \frac{\partial {T}&#039;}{\partial B} \right)\left( \frac{\partial {T}&#039;}{\partial C} \right)Cov\left( \widehat{B},\widehat{C} \right) \\ &lt;br /&gt;
 &amp;amp;  +2\left( \frac{\partial {T}&#039;}{\partial B} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +2\left( \frac{\partial {T}&#039;}{\partial C} \right)\left( \frac{\partial {T}&#039;}{\partial {{\sigma }_{{{T}&#039;}}}} \right)Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Var({T}&#039;)= \frac{1}{{{V}^{2}}}Var(\widehat{B})+\frac{1}{{{C}^{2}}}Var(\widehat{C})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma }}_{{{T}&#039;}}}) +\frac{2}{V\cdot C}Cov\left( \widehat{B},\widehat{C} \right) +\frac{2\widehat{z}}{V}Cov\left( \widehat{B},{{\widehat{\sigma }}_{{{T}&#039;}}} \right) +\frac{2\widehat{z}}{C}Cov\left( \widehat{C},{{\widehat{\sigma }}_{{{T}&#039;}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The upper and lower bounds are then found by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; T_{U}^{\prime }= &amp;amp; \ln {{T}_{U}}={T}&#039;+{{K}_{\alpha }}\sqrt{Var({T}&#039;)} \\ &lt;br /&gt;
 &amp;amp; T_{L}^{\prime }= &amp;amp; \ln {{T}_{L}}={T}&#039;-{{K}_{\alpha }}\sqrt{Var({T}&#039;)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving for &amp;lt;math&amp;gt;{{T}_{U}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{T}_{L}}\,\!&amp;lt;/math&amp;gt; yields:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{T}_{U}}= &amp;amp; {{e}^{T_{U}^{\prime }}}\text{ (Upper bound)} \\ &lt;br /&gt;
 &amp;amp; {{T}_{L}}= &amp;amp; {{e}^{T_{L}^{\prime }}}\text{ (Lower bound)}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Appendix_A:_Generating_Random_Numbers_from_a_Distribution&amp;diff=64924</id>
		<title>Appendix A: Generating Random Numbers from a Distribution</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Appendix_A:_Generating_Random_Numbers_from_a_Distribution&amp;diff=64924"/>
		<updated>2017-02-08T20:25:28Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Generating Random Times from a Weibull Distribution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:bsbook SUB|Appendix A|Generating Random Numbers from a Distribution}}&lt;br /&gt;
Simulation involves generating random numbers that belong to a specific distribution. We will illustrate this methodology using the Weibull distribution. &lt;br /&gt;
&lt;br /&gt;
=Generating Random Times from a Weibull Distribution=&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;cdf&#039;&#039; of the 2-parameter Weibull distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(T)=1-{{e}^{-{{\left( \tfrac{T}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Weibull reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 R(T)= &amp;amp; 1-F(T) \\ &lt;br /&gt;
 = &amp;amp; {{e}^{-{{\left( \tfrac{T}{\eta } \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To generate a random time from a Weibull distribution with a given &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, a uniform random number from 0 to 1, &amp;lt;math&amp;gt;{{U}_{R}}[0,1]\,\!&amp;lt;/math&amp;gt;, is first obtained. The random time from the Weibull distribution is then obtained from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{T}_{R}}=\eta \cdot {{\left\{ -\ln \left[ {{U}_{R}}[0,1] \right] \right\}}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
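This inverse-transform step can be sketched directly in code (Python assumed; the function name is illustrative):

```python
import math
import random

def weibull_random_time(eta, beta, rng=random):
    """Inverse-transform draw from a 2-parameter Weibull distribution:
    T = eta * (-ln U)^(1/beta), with U uniform on (0, 1]."""
    u = 1.0 - rng.random()  # random() is in [0, 1); this avoids log(0)
    return eta * (-math.log(u)) ** (1.0 / beta)
```

As the number of draws grows, the sample mean should approach the Weibull mean, eta times Gamma(1 + 1/beta).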
&lt;br /&gt;
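This inverse-transform step can be sketched in Python (a minimal illustration, not BlockSim code; the function name and the example parameter values are assumptions):

```python
import math
import random

def weibull_random_time(eta, beta, u=None):
    """Draw a random time from a 2-parameter Weibull distribution by
    inverting the cdf: T_R = eta * (-ln U)**(1/beta), U ~ Uniform(0, 1)."""
    if u is None:
        u = random.random()
    return eta * (-math.log(u)) ** (1.0 / beta)

# Illustrative parameters: eta = 1000 hours, beta = 1.5
random.seed(1)
times = [weibull_random_time(1000.0, 1.5) for _ in range(10000)]
```

With U = e^(-1) the formula returns exactly eta, and the sample mean of many draws approaches eta*Gamma(1 + 1/beta), which gives a quick sanity check on the sketch.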
==Conditional==&lt;br /&gt;
The Weibull conditional reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t|T)=\frac{R(T+t)}{R(T)}=\frac{{{e}^{-{{\left( \tfrac{T+t}{\eta } \right)}^{\beta }}}}}{{{e}^{-{{\left( \tfrac{T}{\eta } \right)}^{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
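Setting the conditional reliability equal to a uniform random number and solving for the additional time gives t = eta*((T/eta)^beta - ln U)^(1/beta) - T, which can be used to draw a random remaining time for a unit that has already survived to age T. A Python sketch of this step, which is implied by (not stated in) the text above; the helper name is hypothetical:

```python
import math
import random

def conditional_weibull_time(eta, beta, T, u=None):
    """Draw the additional time to failure for a unit that has already
    survived to age T, by inverting the Weibull conditional reliability
    R(t|T):  t = eta * ((T/eta)**beta - ln(U))**(1/beta) - T."""
    if u is None:
        u = random.random()
    return eta * ((T / eta) ** beta - math.log(u)) ** (1.0 / beta) - T
```

With T = 0 this reduces to the unconditional formula above, and substituting the result back into R(t|T) recovers the uniform number U.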
=BlockSim&#039;s Random Number Generator (RNG)=&lt;br /&gt;
&lt;br /&gt;
Internally, ReliaSoft&#039;s BlockSim uses an algorithm based on L&#039;Ecuyer&#039;s [RefX] random number generator with a Bays-Durham shuffle.  The RNG&#039;s period is approximately 10^18. The RNG passes all currently known statistical tests, within the limits of the machine&#039;s precision and for a number of calls (simulation runs) less than the period. If no seed is provided, the algorithm uses the machine&#039;s clock to initialize the RNG.&lt;br /&gt;
&lt;br /&gt;
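BlockSim&#039;s generator itself is not reproduced here, but the role of the seed can be illustrated with the Python standard library generator (a Mersenne Twister standing in for the L&#039;Ecuyer-based algorithm; this sketch shows seeding behavior only, not BlockSim internals):

```python
import random
import time

# The same seed always reproduces the same stream of numbers,
# which makes a simulation repeatable.
rng_a = random.Random(12345)
rng_b = random.Random(12345)
seeded_run = [rng_a.random() for _ in range(5)]
repeat_run = [rng_b.random() for _ in range(5)]

# Without an explicit seed, a generator is typically initialized from
# the system clock, as the text describes for BlockSim.
clock_rng = random.Random(time.time_ns())
```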
=References=&lt;br /&gt;
#L&#039;Ecuyer, P., 1988, Communications of the ACM, vol. 31, pp. 724-774&lt;br /&gt;
#L&#039;Ecuyer, P., 2001, Proceedings of the 2001 Winter Simulation Conference, pp. 95-105&lt;br /&gt;
#Press, William H., Teukolsky, Saul A., Vetterling, William T., Flannery, Brian P., Numerical Recipes in C: The Art of Scientific Computing, Second Edition, Cambridge University Press, 1992.&lt;br /&gt;
#Peters, Edgar E., Fractal Market Analysis: Applying Chaos Theory to Investment &amp;amp; Economics, John Wiley &amp;amp; Sons, 1994.&lt;br /&gt;
#Knuth, Donald E., The Art of Computer Programming: Volume 2 - Seminumerical Algorithms, Third Edition, Addison-Wesley, 1998.&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Appendix_A:_Generating_Random_Numbers_from_a_Distribution&amp;diff=64923</id>
		<title>Appendix A: Generating Random Numbers from a Distribution</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Appendix_A:_Generating_Random_Numbers_from_a_Distribution&amp;diff=64923"/>
		<updated>2017-02-08T20:23:38Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Conditional */ changed R(T,t) to R(t|T)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:bsbook SUB|Appendix A|Generating Random Numbers from a Distribution}}&lt;br /&gt;
Simulation involves generating random numbers that belong to a specific distribution. We will illustrate this methodology using the Weibull distribution. &lt;br /&gt;
&lt;br /&gt;
=Generating Random Times from a Weibull Distribution=&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;cdf&#039;&#039; of the 2-parameter Weibull distribution is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(T)=1-{{e}^{-{{\left( \tfrac{T}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Weibull reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 R(T)= &amp;amp; 1-F(T) \\ &lt;br /&gt;
 = &amp;amp; {{e}^{-{{\left( \tfrac{T}{\eta } \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To generate a random time from a Weibull distribution with a given &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, a uniform random number from 0 to 1, &amp;lt;math&amp;gt;{{U}_{R}}[0,1]\,\!&amp;lt;/math&amp;gt;, is first obtained. The random time from the Weibull distribution is then obtained from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{T}_{R}}=\eta \cdot {{\left\{ -\ln \left[ {{U}_{R}}[0,1] \right] \right\}}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Conditional==&lt;br /&gt;
The Weibull conditional reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t|T)=\frac{R(T+t)}{R(T)}=\frac{{{e}^{-{{\left( \tfrac{T+t}{\eta } \right)}^{\beta }}}}}{{{e}^{-{{\left( \tfrac{T}{\eta } \right)}^{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=BlockSim&#039;s Random Number Generator (RNG)=&lt;br /&gt;
&lt;br /&gt;
Internally, ReliaSoft&#039;s BlockSim uses an algorithm based on L&#039;Ecuyer&#039;s [RefX] random number generator with a Bays-Durham shuffle.  The RNG&#039;s period is approximately 10^18. The RNG passes all currently known statistical tests, within the limits of the machine&#039;s precision and for a number of calls (simulation runs) less than the period. If no seed is provided, the algorithm uses the machine&#039;s clock to initialize the RNG.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
#L&#039;Ecuyer, P., 1988, Communications of the ACM, vol. 31, pp. 724-774&lt;br /&gt;
#L&#039;Ecuyer, P., 2001, Proceedings of the 2001 Winter Simulation Conference, pp. 95-105&lt;br /&gt;
#Press, William H., Teukolsky, Saul A., Vetterling, William T., Flannery, Brian P., Numerical Recipes in C: The Art of Scientific Computing, Second Edition, Cambridge University Press, 1992.&lt;br /&gt;
#Peters, Edgar E., Fractal Market Analysis: Applying Chaos Theory to Investment &amp;amp; Economics, John Wiley &amp;amp; Sons, 1994.&lt;br /&gt;
#Knuth, Donald E., The Art of Computer Programming: Volume 2 - Seminumerical Algorithms, Third Edition, Addison-Wesley, 1998.&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Time-Dependent_System_Reliability_for_Components_in_Parallel&amp;diff=64922</id>
		<title>Time-Dependent System Reliability for Components in Parallel</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Time-Dependent_System_Reliability_for_Components_in_Parallel&amp;diff=64922"/>
		<updated>2017-02-08T20:19:56Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: changed R(t,T) to R(t|T)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner BlockSim Examples}}&lt;br /&gt;
&#039;&#039;This example appears in the [[Time-Dependent_System_Reliability_(Analytical)#Examples|System Analysis Reference book]]&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Time-Dependent System Reliability for Components in Parallel&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
Consider the system shown next.&lt;br /&gt;
&amp;lt;!-- THE DATA SET IN THIS EXAMPLE IS ALSO USED IN ANOTHER EXAMPLE. IF YOU EDIT THE DATA SET ON THIS PAGE, YOU MUST ALSO EDIT THE DATA SET AND RESULTS IN THE PAGE: Example Using a Distribution to Approximate the CDF --&amp;gt;&lt;br /&gt;
[[Image:BS5.5.png|center|600px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Complex bridge system in Example 2. &amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
Components &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; through &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; are Weibull distributed with &amp;lt;math&amp;gt;\beta =1.2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta =1230\,\!&amp;lt;/math&amp;gt; hours.  The starting and ending blocks cannot fail.&lt;br /&gt;
&lt;br /&gt;
Determine the following:&lt;br /&gt;
&lt;br /&gt;
:*The reliability equation for the system and its corresponding plot.&lt;br /&gt;
&lt;br /&gt;
:*The system&#039;s &#039;&#039;pdf&#039;&#039; and its corresponding plot.&lt;br /&gt;
&lt;br /&gt;
:*The system&#039;s failure rate equation and the corresponding plot.&lt;br /&gt;
&lt;br /&gt;
:*The MTTF.&lt;br /&gt;
&lt;br /&gt;
:*The warranty time for a 90% reliability.&lt;br /&gt;
&lt;br /&gt;
:*The reliability for a 200-hour mission, if it is known that the system has already successfully operated for 200 hours.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The first step is to obtain the reliability function for the system.  The methods described in the [[RBDs_and_Analytical_System_Reliability#Complex_Systems|RBDs and Analytical System Reliability]] chapter can be employed, such as the event space or path-tracing methods.  Using BlockSim, the following reliability equation is obtained:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{s}}(t)= &amp;amp; ({{R}_{Start}}\cdot {{R}_{End}}(2{{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}-{{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{B}}\cdot {{R}_{E}}-{{R}_{A}}\cdot {{R}_{C}}\cdot {{R}_{B}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}\cdot {{R}_{E}}+{{R}_{A}}\cdot {{R}_{C}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; +{{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}+{{R}_{A}}\cdot {{R}_{D}}+{{R}_{B}}\cdot {{R}_{E}}))  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that since the starting and ending blocks cannot fail, &amp;lt;math&amp;gt;{{R}_{Start}}=1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{R}_{End}}=1,\,\!&amp;lt;/math&amp;gt; the equation above can be reduced to:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{s}}(t)= &amp;amp; 2\cdot {{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}-{{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{B}}\cdot {{R}_{E}}-{{R}_{A}}\cdot {{R}_{C}}\cdot {{R}_{B}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}\cdot {{R}_{E}}+{{R}_{A}}\cdot {{R}_{C}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; +{{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}+{{R}_{A}}\cdot {{R}_{D}}+{{R}_{B}}\cdot {{R}_{E}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{R}_{A}}\,\!&amp;lt;/math&amp;gt; is the reliability equation for Component A, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{A}}(t)={{e}^{-{{\left( \tfrac{t}{{{\eta }_{A}}} \right)}^{{{\beta }_{A}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{B}}\,\!&amp;lt;/math&amp;gt; is the reliability equation for Component &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since the components in this example are identical, the system reliability equation can be further reduced to:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{s}}(t)=2R{{(t)}^{2}}+2R{{(t)}^{3}}-5R{{(t)}^{4}}+2R{{(t)}^{5}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or, in terms of the failure distribution:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{s}}(t)=2\cdot {{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}-5\cdot {{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding plot is given in the following figure.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS5.6.png|center|650px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Reliability plot for the system. &amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
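The closed-form expression for the system reliability can also be evaluated directly; a minimal Python sketch (the function name is an assumption):

```python
import math

def system_reliability(t, eta=1230.0, beta=1.2):
    """Bridge-system reliability for five identical Weibull components:
    Rs(t) = 2e^(-2x) + 2e^(-3x) - 5e^(-4x) + 2e^(-5x), x = (t/eta)**beta."""
    x = (t / eta) ** beta
    return (2.0 * math.exp(-2.0 * x) + 2.0 * math.exp(-3.0 * x)
            - 5.0 * math.exp(-4.0 * x) + 2.0 * math.exp(-5.0 * x))
```

At t = 0 the coefficients sum to 2 + 2 - 5 + 2 = 1, as expected for a system that starts out working.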
In order to obtain the system&#039;s &#039;&#039;pdf&#039;&#039;, the derivative of the reliability equation given above is taken with respect to time, resulting in: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{f}_{s}}(t)= &amp;amp; 4\cdot \frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}{{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+6\cdot \frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}{{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
&amp;amp; -20\cdot \frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}{{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+10\cdot \frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}{{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; can now be plotted for different time values, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, as shown in the following figure.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS5.7.png|center|650px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; &#039;&#039;pdf&#039;&#039; plot for the system.&amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
The system&#039;s failure rate can be obtained by dividing the system&#039;s &#039;&#039;pdf&#039;&#039;, given in the equation above, by the system&#039;s reliability function given by&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{s}}(t)= &amp;amp; 2\cdot {{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}-5\cdot {{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}\end{align}\,\!&amp;lt;/math&amp;gt;, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{\lambda }_{s}}(t)= &amp;amp; \frac{4\cdot \tfrac{\beta }{\eta }{{\left( \tfrac{t}{\eta } \right)}^{\beta -1}}{{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+6\cdot \tfrac{\beta }{\eta }{{\left( \tfrac{t}{\eta } \right)}^{\beta -1}}{{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}}{2\cdot {{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}-5\cdot {{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}} \\ &lt;br /&gt;
&amp;amp; +\frac{-20\cdot \tfrac{\beta }{\eta }{{\left( \tfrac{t}{\eta } \right)}^{\beta -1}}{{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+10\cdot \tfrac{\beta }{\eta }{{\left( \tfrac{t}{\eta } \right)}^{\beta -1}}{{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}}{2\cdot {{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}-5\cdot {{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding plot is given below.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS5.8.png|center|650px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Failure rate for the system.&amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;MTTF\,\!&amp;lt;/math&amp;gt; of the system is obtained by integrating the system&#039;s reliability function given by &amp;lt;math&amp;gt;{{R}_{s}}(t)=2\cdot {{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}-5\cdot {{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt; from time zero to infinity, as given by &amp;lt;math&amp;gt;MTTF=\int_{0}^{\infty }{{R}_{s}}\left( t \right)dt   \ \,\!&amp;lt;/math&amp;gt;.  Using BlockSim&#039;s Analytical QCP, an &amp;lt;math&amp;gt;MTTF\,\!&amp;lt;/math&amp;gt; of 1007.8 hours is calculated, as shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:Fig 5.9.PNG|center|450px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; MTTF of the system. &amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
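Because each exponential term of the system reliability function integrates to c*eta*k^(-1/beta)*Gamma(1 + 1/beta), the MTTF integral reduces to a finite sum that can be checked directly; a sketch under that assumption (the function name is hypothetical):

```python
import math

def bridge_mttf(eta=1230.0, beta=1.2):
    """MTTF = integral from 0 to infinity of Rs(t) dt.  Each term
    c*exp(-k*(t/eta)**beta) integrates to c*eta*k**(-1/beta)*Gamma(1+1/beta),
    so the whole integral reduces to a finite sum over (c, k) pairs."""
    scale = eta * math.gamma(1.0 + 1.0 / beta)
    return scale * sum(c * k ** (-1.0 / beta)
                       for c, k in [(2, 2), (2, 3), (-5, 4), (2, 5)])
```

The result agrees closely with the roughly 1007.8 hours reported by the Analytical QCP, and scales linearly with eta, as the closed form implies.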
&lt;br /&gt;
The warranty time can be obtained by solving &amp;lt;math&amp;gt;{{R}_{s}}(t)\,\!&amp;lt;/math&amp;gt; with respect to time for a system reliability &amp;lt;math&amp;gt;{{R}_{s}}=0.9\,\!&amp;lt;/math&amp;gt;.  Using the Analytical QCP and selecting the &#039;&#039;&#039;Reliable Life&#039;&#039;&#039; option, a time of 372.72 hours is obtained, as shown in the following figure.&lt;br /&gt;
&lt;br /&gt;
[[Image:Fig 5.10.PNG|center|450px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Time at which &#039;&#039;R&#039;&#039;=0.9 or 90% for the system.&amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
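The reliable-life calculation amounts to a one-dimensional root find, since the system reliability decreases monotonically in time; a Python sketch using bisection (function names are assumptions, not BlockSim internals):

```python
import math

def system_reliability(t, eta=1230.0, beta=1.2):
    # Redefined here so the sketch is self-contained.
    x = (t / eta) ** beta
    return (2.0 * math.exp(-2.0 * x) + 2.0 * math.exp(-3.0 * x)
            - 5.0 * math.exp(-4.0 * x) + 2.0 * math.exp(-5.0 * x))

def reliable_life(target=0.9, lo=0.0, hi=10000.0):
    """Bisect for the time t at which Rs(t) equals the target reliability.
    Requires Rs(lo) > target > Rs(hi) on the starting bracket."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if system_reliability(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

reliable_life() returns approximately 372.7 hours, matching the QCP result above.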
&lt;br /&gt;
Lastly, the conditional reliability can be obtained using &amp;lt;math&amp;gt;R(t|T)=\frac{R(T+t)}{R(T)}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{R}_{s}}(t)=2\cdot {{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}-5\cdot {{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;, or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(200|200)= &amp;amp; \frac{R(400)}{R(200)} \\ &lt;br /&gt;
= &amp;amp; \frac{0.883825}{0.975321} \\ &lt;br /&gt;
= &amp;amp; 0.906189  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
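The same ratio can be checked directly from the closed-form reliability function; a brief Python sketch (the helper name is hypothetical):

```python
import math

def system_reliability(t, eta=1230.0, beta=1.2):
    # Redefined here so the sketch is self-contained.
    x = (t / eta) ** beta
    return (2.0 * math.exp(-2.0 * x) + 2.0 * math.exp(-3.0 * x)
            - 5.0 * math.exp(-4.0 * x) + 2.0 * math.exp(-5.0 * x))

# Conditional reliability R(t|T) = R(T + t)/R(T) for a new 200-hour
# mission after 200 hours of successful operation.
r_conditional = system_reliability(400.0) / system_reliability(200.0)
```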
This can be calculated using BlockSim&#039;s Analytical QCP, as shown below.&lt;br /&gt;
&lt;br /&gt;
[[Image:Fig 5.11.PNG|center|450px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt;  Conditional reliability calculation for the system.&amp;lt;/div&amp;gt;|link=]]&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Time-Dependent_System_Reliability_for_Components_in_Parallel&amp;diff=64921</id>
		<title>Time-Dependent System Reliability for Components in Parallel</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Time-Dependent_System_Reliability_for_Components_in_Parallel&amp;diff=64921"/>
		<updated>2017-02-08T20:17:45Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner BlockSim Examples}}&lt;br /&gt;
&#039;&#039;This example appears in the [[Time-Dependent_System_Reliability_(Analytical)#Examples|System Analysis Reference book]]&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Time-Dependent System Reliability for Components in Parallel&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
Consider the system shown next.&lt;br /&gt;
&amp;lt;!-- THE DATA SET IN THIS EXAMPLE IS ALSO USED IN ANOTHER EXAMPLE. IF YOU EDIT THE DATA SET ON THIS PAGE, YOU MUST ALSO EDIT THE DATA SET AND RESULTS IN THE PAGE: Example Using a Distribution to Approximate the CDF --&amp;gt;&lt;br /&gt;
[[Image:BS5.5.png|center|600px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Complex bridge system in Example 2. &amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
Components &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; through &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; are Weibull distributed with &amp;lt;math&amp;gt;\beta =1.2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta =1230\,\!&amp;lt;/math&amp;gt; hours.  The starting and ending blocks cannot fail.&lt;br /&gt;
&lt;br /&gt;
Determine the following:&lt;br /&gt;
&lt;br /&gt;
:*The reliability equation for the system and its corresponding plot.&lt;br /&gt;
&lt;br /&gt;
:*The system&#039;s &#039;&#039;pdf&#039;&#039; and its corresponding plot.&lt;br /&gt;
&lt;br /&gt;
:*The system&#039;s failure rate equation and the corresponding plot.&lt;br /&gt;
&lt;br /&gt;
:*The MTTF.&lt;br /&gt;
&lt;br /&gt;
:*The warranty time for a 90% reliability.&lt;br /&gt;
&lt;br /&gt;
:*The reliability for a 200-hour mission, if it is known that the system has already successfully operated for 200 hours.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The first step is to obtain the reliability function for the system.  The methods described in the [[RBDs_and_Analytical_System_Reliability#Complex_Systems|RBDs and Analytical System Reliability]] chapter can be employed, such as the event space or path-tracing methods.  Using BlockSim, the following reliability equation is obtained:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{s}}(t)= &amp;amp; ({{R}_{Start}}\cdot {{R}_{End}}(2{{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}-{{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{B}}\cdot {{R}_{E}}-{{R}_{A}}\cdot {{R}_{C}}\cdot {{R}_{B}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}\cdot {{R}_{E}}+{{R}_{A}}\cdot {{R}_{C}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; +{{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}+{{R}_{A}}\cdot {{R}_{D}}+{{R}_{B}}\cdot {{R}_{E}}))  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that since the starting and ending blocks cannot fail, &amp;lt;math&amp;gt;{{R}_{Start}}=1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{R}_{End}}=1,\,\!&amp;lt;/math&amp;gt; the equation above can be reduced to:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{s}}(t)= &amp;amp; 2\cdot {{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}-{{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{A}}\cdot {{R}_{D}}\cdot {{R}_{B}}\cdot {{R}_{E}}-{{R}_{A}}\cdot {{R}_{C}}\cdot {{R}_{B}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}\cdot {{R}_{E}}+{{R}_{A}}\cdot {{R}_{C}}\cdot {{R}_{E}} \\ &lt;br /&gt;
&amp;amp; +{{R}_{D}}\cdot {{R}_{C}}\cdot {{R}_{B}}+{{R}_{A}}\cdot {{R}_{D}}+{{R}_{B}}\cdot {{R}_{E}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{R}_{A}}\,\!&amp;lt;/math&amp;gt; is the reliability equation for Component A, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{A}}(t)={{e}^{-{{\left( \tfrac{t}{{{\eta }_{A}}} \right)}^{{{\beta }_{A}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{B}}\,\!&amp;lt;/math&amp;gt; is the reliability equation for Component &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since the components in this example are identical, the system reliability equation can be further reduced to:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{s}}(t)=2R{{(t)}^{2}}+2R{{(t)}^{3}}-5R{{(t)}^{4}}+2R{{(t)}^{5}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or, in terms of the failure distribution:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{s}}(t)=2\cdot {{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}-5\cdot {{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding plot is given in the following figure.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS5.6.png|center|650px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Reliability plot for the system. &amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
In order to obtain the system&#039;s &#039;&#039;pdf&#039;&#039;, the derivative of the reliability equation given above is taken with respect to time, resulting in: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{f}_{s}}(t)= &amp;amp; 4\cdot \frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}{{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+6\cdot \frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}{{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
&amp;amp; -20\cdot \frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}{{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+10\cdot \frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}{{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;pdf&#039;&#039; can now be plotted for different time values, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, as shown in the following figure.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS5.7.png|center|650px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; &#039;&#039;pdf&#039;&#039; plot for the system.&amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
The system&#039;s failure rate can be obtained by dividing the system&#039;s &#039;&#039;pdf&#039;&#039;, given in the equation above, by the system&#039;s reliability function given by&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{s}}(t)= &amp;amp; 2\cdot {{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}-5\cdot {{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}\end{align}\,\!&amp;lt;/math&amp;gt;, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{\lambda }_{s}}(t)= &amp;amp; \frac{4\cdot \tfrac{\beta }{\eta }{{\left( \tfrac{t}{\eta } \right)}^{\beta -1}}{{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+6\cdot \tfrac{\beta }{\eta }{{\left( \tfrac{t}{\eta } \right)}^{\beta -1}}{{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}}{2\cdot {{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}-5\cdot {{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}} \\ &lt;br /&gt;
&amp;amp; +\frac{-20\cdot \tfrac{\beta }{\eta }{{\left( \tfrac{t}{\eta } \right)}^{\beta -1}}{{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+10\cdot \tfrac{\beta }{\eta }{{\left( \tfrac{t}{\eta } \right)}^{\beta -1}}{{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}}{2\cdot {{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}-5\cdot {{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding plot is given below.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS5.8.png|center|650px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Failure rate for the system.&amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;MTTF\,\!&amp;lt;/math&amp;gt; of the system is obtained by integrating the system&#039;s reliability function given by &amp;lt;math&amp;gt;{{R}_{s}}(t)=2\cdot {{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}-5\cdot {{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt; from time zero to infinity, as given by &amp;lt;math&amp;gt;MTTF=\int_{0}^{\infty }{{R}_{s}}\left( t \right)dt   \ \,\!&amp;lt;/math&amp;gt;.  Using BlockSim&#039;s Analytical QCP, an &amp;lt;math&amp;gt;MTTF\,\!&amp;lt;/math&amp;gt; of 1007.8 hours is calculated, as shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:Fig 5.9.PNG|center|450px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; MTTF of the system. &amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The warranty time can be obtained by solving &amp;lt;math&amp;gt;{{R}_{s}}(t)\,\!&amp;lt;/math&amp;gt; with respect to time for a system reliability &amp;lt;math&amp;gt;{{R}_{s}}=0.9\,\!&amp;lt;/math&amp;gt;.  Using the Analytical QCP and selecting the &#039;&#039;&#039;Reliable Life&#039;&#039;&#039; option, a time of 372.72 hours is obtained, as shown in the following figure.&lt;br /&gt;
&lt;br /&gt;
[[Image:Fig 5.10.PNG|center|450px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Time at which &#039;&#039;R&#039;&#039;=0.9 or 90% for the system.&amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lastly, the conditional reliability can be obtained using &amp;lt;math&amp;gt;R(T,t)=\frac{R(T+t)}{R(T)}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{R}_{s}}(t)=2\cdot {{e}^{-2{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-3{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}-5\cdot {{e}^{-4{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}+2\cdot {{e}^{-5{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;, or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(200,200)= &amp;amp; \frac{R(400)}{R(200)} \\ &lt;br /&gt;
= &amp;amp; \frac{0.883825}{0.975321} \\ &lt;br /&gt;
= &amp;amp; 0.906189  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be calculated using BlockSim&#039;s Analytical QCP, as shown below.&lt;br /&gt;
&lt;br /&gt;
[[Image:Fig 5.11.PNG|center|450px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt;  Conditional reliability calculation for the system.&amp;lt;/div&amp;gt;|link=]]&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Time-Dependent_System_Reliability_(Analytical)&amp;diff=64920</id>
		<title>Time-Dependent System Reliability (Analytical)</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Time-Dependent_System_Reliability_(Analytical)&amp;diff=64920"/>
		<updated>2017-02-08T20:09:12Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Conditional Reliability */ changed R(T,t) to R(t|T)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:bsbook|4}}&lt;br /&gt;
In the [[RBDs and Analytical System Reliability]] chapter, different system configuration types were examined, as well as different methods for obtaining the system&#039;s reliability function analytically.  Because the reliabilities in the problems presented were treated as probabilities (e.g., &amp;lt;math&amp;gt;P(A)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{R}_{i}}\,\!&amp;lt;/math&amp;gt; ), the reliability values and equations presented were referred to as &#039;&#039;static&#039;&#039; (not time-dependent). Thus, in the prior chapter, the life distributions of the components were not incorporated in the process of calculating the system reliability. In this chapter, time dependency in the reliability function will be introduced. We will develop the models necessary to observe the reliability over the life of the system, instead of at just one point in time. In addition, performance measures such as failure rate, MTTF and warranty time will be estimated for the entire system. The methods of obtaining the reliability function analytically remain identical to the ones presented in the previous chapter, with the exception that the reliabilities will be functions of time.  In other words, instead of dealing with &amp;lt;math&amp;gt;{{R}_{i}}\,\!&amp;lt;/math&amp;gt;, we will use &amp;lt;math&amp;gt;{{R}_{i}}(t)\,\!&amp;lt;/math&amp;gt;.  All examples in this chapter assume that no repairs are performed on the components. Repairable systems analysis will be introduced in a [[Introduction_to_Repairable_Systems|subsequent chapter]].  &lt;br /&gt;
&lt;br /&gt;
==Analytical Life Predictions==&lt;br /&gt;
The analytical approach presented in the prior chapter involved the determination of a mathematical expression that describes the reliability of the system, expressed in terms of the reliabilities of its components.  So far we have estimated only static system reliability (at a fixed time).  For example, in the case of a system with three components in series, the system&#039;s reliability equation was given by:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{s}}={{R}_{1}}\cdot {{R}_{2}}\cdot {{R}_{3}}  \ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The values of &amp;lt;math&amp;gt;{{R}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{R}_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{R}_{3}}\,\!&amp;lt;/math&amp;gt; were given for a common time and the reliability of the system was estimated for that time.  However, since the component failure characteristics can be described by distributions, the system reliability is actually time-dependent.  In this case, the equation above can be rewritten as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{s}}(t)={{R}_{1}}(t)\cdot {{R}_{2}}(t)\cdot {{R}_{3}}(t)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability of the system for any mission time can now be estimated.  Assuming a Weibull life distribution for each component, the first equation above can now be expressed in terms of each component&#039;s reliability function, or:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{s}}(t)={{e}^{-{{\left( \tfrac{t}{{{\eta }_{1}}} \right)}^{{{\beta }_{1}}}}}}\cdot {{e}^{-{{\left( \tfrac{t}{{{\eta }_{2}}} \right)}^{{{\beta }_{2}}}}}}\cdot {{e}^{-{{\left( \tfrac{t}{{{\eta }_{3}}} \right)}^{{{\beta }_{3}}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the same manner, any life distribution can be substituted into the system reliability equation.  Suppose that the times-to-failure of the first component are described with a Weibull distribution, the times-to-failure of the second component with an exponential distribution and the times-to-failure of the third component with a normal distribution.  Then the first equation above can be written as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{s}}(t)={{e}^{-{{\left( \tfrac{t}{{{\eta }_{1}}} \right)}^{{{\beta }_{1}}}}}}\cdot {{e}^{-{{\lambda }_{2}}t}}\cdot \left[ 1-\Phi \left( \frac{t-{{\mu }_{3}}}{{{\sigma }_{3}}} \right) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be seen that the biggest challenge is in obtaining the system&#039;s reliability function in terms of component reliabilities, which has already been discussed in depth.  Once this has been achieved, calculating the reliability of the system for any mission duration is just a matter of substituting the corresponding component reliability functions into the system reliability equation.&lt;br /&gt;
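The substitution step above is mechanical once the component distributions are known. As a minimal sketch (not BlockSim output), the Python snippet below evaluates the mixed Weibull/exponential/normal series system, with parameter values chosen purely for illustration:&lt;br /&gt;

```python
import math

def weibull_R(t, beta, eta):
    # Weibull reliability: R(t) = exp(-(t/eta)^beta)
    return math.exp(-((t / eta) ** beta))

def exponential_R(t, lam):
    # Exponential reliability: R(t) = exp(-lambda*t)
    return math.exp(-lam * t)

def normal_R(t, mu, sigma):
    # Normal reliability: 1 - Phi((t - mu)/sigma), via the error function
    z = (t - mu) / sigma
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

def series_R(t):
    # Series system: the product of the three component reliabilities
    # (all parameter values below are assumed, for illustration only)
    return (weibull_R(t, beta=1.5, eta=1000)
            * exponential_R(t, lam=1 / 5000)
            * normal_R(t, mu=2000, sigma=400))
```

Substituting a different life distribution for a component only changes the corresponding factor in the product.&lt;br /&gt;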
&lt;br /&gt;
&lt;br /&gt;
===Advantages and Disadvantages===&lt;br /&gt;
The primary advantage of the analytical solution is that it produces a mathematical expression that describes the reliability of the system.  Once the system&#039;s reliability function has been determined, other calculations can then be performed to obtain metrics of interest for the system. Such calculations include:  &lt;br /&gt;
&lt;br /&gt;
:*Determination of the system&#039;s &#039;&#039;pdf&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
:*Determination of warranty periods.&lt;br /&gt;
&lt;br /&gt;
:*Determination of the system&#039;s failure rate.&lt;br /&gt;
&lt;br /&gt;
:*Determination of the system&#039;s MTTF.&lt;br /&gt;
&lt;br /&gt;
In addition, optimization and reliability allocation techniques can be used to aid engineers in their design improvement efforts.  Another advantage in using analytical techniques is the ability to perform static calculations and analyze systems with a mixture of static and time-dependent components.  Finally, the reliability importance of components over time can be calculated with this methodology.&lt;br /&gt;
&lt;br /&gt;
The biggest disadvantage of the analytical method is that formulations can become very complicated.  The more complicated a system is, the larger and more difficult it will be to analytically formulate an expression for the system&#039;s reliability.  For particularly detailed systems this process can be quite time-consuming, even with the use of computers.  Furthermore, when the maintainability of the system or some of its components must be taken into consideration, analytical solutions become intractable.  In these situations, the use of simulation methods may be more advantageous than attempting to develop a solution analytically.  Simulation methods are presented in later chapters.&lt;br /&gt;
&lt;br /&gt;
==Looking at a Simple &amp;quot;Complex&amp;quot; System Analytically==&lt;br /&gt;
The complexity involved in an analytical solution can be best illustrated by looking at the simple &#039;&#039;complex&#039;&#039; system with 15 components, as shown below.&lt;br /&gt;
&lt;br /&gt;
[[Image:5-1.png|center|600px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; An RBD of a complex system.&amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
The system reliability for this system (computed using BlockSim) is shown next.  The first solution is provided using BlockSim&#039;s symbolic solution.  In symbolic mode, BlockSim breaks the equation into segments, identified by tokens, that need to be substituted into the final system equation for a complete solution.  This creates algebraic solutions that are more compact than if the substitutions were made.&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{System}}= &amp;amp; D2\cdot D3\cdot {{R}_{L}} \\ &lt;br /&gt;
D3= &amp;amp; +{{R}_{K}}\cdot IK \\ &lt;br /&gt;
IK= &amp;amp; +{{R}_{I}}\cdot {{R}_{J}}\cdot {{R}_{O}}\cdot {{R}_{G}}\cdot {{R}_{F}}\cdot {{R}_{H}}-{{R}_{I}}\cdot {{R}_{J}}\cdot {{R}_{O}}\cdot {{R}_{G}}\cdot {{R}_{F}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{I}}\cdot {{R}_{J}}\cdot {{R}_{F}}\cdot {{R}_{H}}-{{R}_{I}}\cdot {{R}_{O}}\cdot {{R}_{F}}\cdot {{R}_{H}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{J}}\cdot {{R}_{G}}\cdot {{R}_{F}}\cdot {{R}_{H}}+{{R}_{I}}\cdot {{R}_{O}}\cdot {{R}_{F}} \\ &lt;br /&gt;
&amp;amp; +{{R}_{I}}\cdot {{R}_{F}}\cdot {{R}_{H}}+{{R}_{J}}\cdot {{R}_{F}}\cdot {{R}_{H}}+{{R}_{J}}\cdot {{R}_{G}} \\ &lt;br /&gt;
D2 = &amp;amp; +{{R}_{A}}\cdot {{R}_{E}}\cdot IE \\ &lt;br /&gt;
IE = &amp;amp; -D1\cdot {{R}_{M}}\cdot {{R}_{N}}+{{R}_{M}}\cdot {{R}_{N}}+D1 \\ &lt;br /&gt;
D1 = &amp;amp; +{{R}_{D}}\cdot ID \\ &lt;br /&gt;
ID = &amp;amp; -{{R}_{B}}\cdot {{R}_{C}}+{{R}_{B}}+{{R}_{C}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting the terms yields: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{System}}= &amp;amp; {{R}_{A}}\cdot {{R}_{E}}\cdot {{R}_{L}}\cdot {{R}_{K}} \\ &lt;br /&gt;
&amp;amp; \cdot \{-{{R}_{D}}\cdot (-{{R}_{B}}\cdot {{R}_{C}}+{{R}_{B}}+{{R}_{C}})\cdot {{R}_{M}}\cdot {{R}_{N}} \\ &lt;br /&gt;
&amp;amp; +{{R}_{M}}\cdot {{R}_{N}}+{{R}_{D}}\cdot (-{{R}_{B}}\cdot {{R}_{C}}+{{R}_{B}}+{{R}_{C}})\} \\ &lt;br /&gt;
&amp;amp; \cdot \{{{R}_{I}}\cdot {{R}_{J}}\cdot {{R}_{O}}\cdot {{R}_{G}}\cdot {{R}_{F}}\cdot {{R}_{H}}-{{R}_{I}}\cdot {{R}_{J}}\cdot {{R}_{O}}\cdot {{R}_{G}}\cdot {{R}_{F}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{I}}\cdot {{R}_{J}}\cdot {{R}_{F}}\cdot {{R}_{H}}-{{R}_{I}}\cdot {{R}_{O}}\cdot {{R}_{F}}\cdot {{R}_{H}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{J}}\cdot {{R}_{G}}\cdot {{R}_{F}}\cdot {{R}_{H}}+{{R}_{I}}\cdot {{R}_{O}}\cdot {{R}_{F}} \\ &lt;br /&gt;
&amp;amp; +{{R}_{I}}\cdot {{R}_{F}}\cdot {{R}_{H}}+{{R}_{J}}\cdot {{R}_{F}}\cdot {{R}_{H}}+{{R}_{J}}\cdot {{R}_{G}}\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
BlockSim&#039;s automatic algebraic simplification would yield the following format for the above solution: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{System}}= &amp;amp; (({{R}_{A}}\cdot {{R}_{E}}(-({{R}_{D}}(-{{R}_{B}}\cdot {{R}_{C}}+{{R}_{B}}+{{R}_{C}})){{R}_{M}}\cdot {{R}_{N}} \\ &lt;br /&gt;
&amp;amp; +{{R}_{M}}\cdot {{R}_{N}} \\ &lt;br /&gt;
&amp;amp; +({{R}_{D}}(-{{R}_{B}}\cdot {{R}_{C}}+{{R}_{B}}+{{R}_{C}})))) \\ &lt;br /&gt;
&amp;amp; ({{R}_{K}}({{R}_{I}}\cdot {{R}_{J}}\cdot {{R}_{O}}\cdot {{R}_{G}}\cdot {{R}_{F}}\cdot {{R}_{H}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{I}}\cdot {{R}_{J}}\cdot {{R}_{O}}\cdot {{R}_{G}}\cdot {{R}_{F}}-{{R}_{I}}\cdot {{R}_{J}}\cdot {{R}_{F}}\cdot {{R}_{H}} \\ &lt;br /&gt;
&amp;amp; -{{R}_{I}}\cdot {{R}_{O}}\cdot {{R}_{F}}\cdot {{R}_{H}}-{{R}_{J}}\cdot {{R}_{G}}\cdot {{R}_{F}}\cdot {{R}_{H}} \\ &lt;br /&gt;
&amp;amp; +{{R}_{I}}\cdot {{R}_{O}}\cdot {{R}_{F}} \\ &lt;br /&gt;
&amp;amp; +{{R}_{I}}\cdot {{R}_{F}}\cdot {{R}_{H}}+{{R}_{J}}\cdot {{R}_{F}}\cdot {{R}_{H}}+{{R}_{J}}\cdot {{R}_{G}})){{R}_{L}})  \ &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this equation, each &amp;lt;math&amp;gt;{{R}_{i}}\,\!&amp;lt;/math&amp;gt; represents the reliability function of a block.  For example, if &amp;lt;math&amp;gt;{{R}_{A}}\,\!&amp;lt;/math&amp;gt; has a Weibull distribution, then &amp;lt;math&amp;gt;{{R}_{A}}(t)={{e}^{-{{\left( \tfrac{t}{{{\eta }_{A}}} \right)}^{{{\beta }_{A}}}}}}\,\!&amp;lt;/math&amp;gt;, and so forth.  Substitution of each component&#039;s reliability function in the last &amp;lt;math&amp;gt;{{R}_{System}}\,\!&amp;lt;/math&amp;gt; equation above will result in an analytical expression for the system reliability as a function of time, or &amp;lt;math&amp;gt;{{R}_{s}}(t)\,\!&amp;lt;/math&amp;gt;, which is the same as &amp;lt;math&amp;gt;(1-cd{{f}_{System}}).\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Obtaining Other Functions of Interest==&lt;br /&gt;
Once the system reliability equation (or the cumulative distribution function, &#039;&#039;cdf&#039;&#039;) has been determined, other functions and metrics of interest can be derived.  &lt;br /&gt;
&lt;br /&gt;
Consider the following simple system:&lt;br /&gt;
&lt;br /&gt;
[[Image:5-2.png|center|200px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Simple two-component system. &amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
Furthermore, assume that component 1 follows an exponential distribution with a mean of 10,000 (&amp;lt;math&amp;gt;\mu =10,000,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\lambda =1/10,000)\,\!&amp;lt;/math&amp;gt; and component 2 follows a Weibull distribution with &amp;lt;math&amp;gt;\beta =6\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta =10,000\,\!&amp;lt;/math&amp;gt;.  The reliability equation of this system is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{S}}(t)= &amp;amp; {{R}_{1}}(t)\cdot {{R}_{2}}(t) \\ &lt;br /&gt;
= &amp;amp; {{e}^{-\lambda t}}\cdot {{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}} \\ &lt;br /&gt;
= &amp;amp; {{e}^{-\tfrac{1}{10,000}t}}\cdot {{e}^{-{{\left( \tfrac{t}{10,000} \right)}^{6}}}}  \  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The system &#039;&#039;cdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{F}_{S}}(t)= &amp;amp; 1-({{R}_{1}}(t)\cdot {{R}_{2}}(t)) \\ &lt;br /&gt;
= &amp;amp; 1-\left( {{e}^{-\lambda t}}\cdot {{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}} \right) \\ &lt;br /&gt;
= &amp;amp; 1-\left( {{e}^{-\tfrac{1}{10,000}t}}\cdot {{e}^{-{{\left( \tfrac{t}{10,000} \right)}^{6}}}} \right)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
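These two functions are straightforward to evaluate numerically. A minimal Python sketch, using the parameter values above:&lt;br /&gt;

```python
import math

def R1(t):
    # Component 1: exponential with lambda = 1/10,000
    return math.exp(-t / 10000)

def R2(t):
    # Component 2: Weibull with beta = 6, eta = 10,000
    return math.exp(-((t / 10000) ** 6))

def R_sys(t):
    # Series system: product of the component reliabilities
    return R1(t) * R2(t)

def F_sys(t):
    # System cdf is the complement of the reliability
    return 1 - R_sys(t)
```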
&lt;br /&gt;
===System &#039;&#039;pdf&#039;&#039;===&lt;br /&gt;
Once the equation for the reliability of the system has been obtained, the system&#039;s &#039;&#039;pdf&#039;&#039; can be determined. The &#039;&#039;pdf&#039;&#039; is the negative of the derivative of the reliability function with respect to time, or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{f}_{s}}(t)=-\frac{d({{R}_{s}}(t))}{dt} \ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the system shown above, this is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{f}_{s}}(t)= &amp;amp; -\frac{d}{dt}\left( {{e}^{-\tfrac{1}{10,000}t}}\cdot {{e}^{-{{\left( \tfrac{t}{10,000} \right)}^{6}}}} \right) \\ &lt;br /&gt;
= &amp;amp; -\frac{d}{dt}\left( {{e}^{-\tfrac{1}{10,000}t}} \right)\cdot {{e}^{-{{\left( \tfrac{t}{10,000} \right)}^{6}}}} \\ &lt;br /&gt;
&amp;amp; +{{e}^{-\tfrac{1}{10,000}t}}\left[ -\frac{d}{dt}\left( {{e}^{-{{\left( \tfrac{t}{10,000} \right)}^{6}}}} \right) \right] \\ &lt;br /&gt;
= &amp;amp; {{f}_{1}}(t)\cdot {{R}_{2}}(t)+{{f}_{2}}(t)\cdot {{R}_{1}}(t)  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next figure shows a plot of the &#039;&#039;pdf&#039;&#039; equation.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS5.3.png|center|650px|pdf plot of the two-component system|link=]]&lt;br /&gt;
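The product-rule expansion above can be verified numerically by comparing the closed-form &#039;&#039;pdf&#039;&#039; against a central-difference derivative of the reliability function. A quick Python sanity check (not BlockSim output):&lt;br /&gt;

```python
import math

def R_sys(t):
    # Two-component series system from the example above
    return math.exp(-t / 10000) * math.exp(-((t / 10000) ** 6))

def f_sys(t):
    # f1(t)*R2(t) + f2(t)*R1(t), the product-rule expansion above
    R1 = math.exp(-t / 10000)
    R2 = math.exp(-((t / 10000) ** 6))
    f1 = (1 / 10000) * R1
    f2 = (6 / 10000) * ((t / 10000) ** 5) * R2
    return f1 * R2 + f2 * R1

# Cross-check against the central-difference derivative of -R_sys(t)
h, t0 = 0.01, 5000
numeric = -(R_sys(t0 + h) - R_sys(t0 - h)) / (2 * h)
```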
&lt;br /&gt;
===Conditional Reliability===&lt;br /&gt;
Conditional reliability is the probability of a system successfully completing another mission following the successful completion of a previous mission.  The time of the previous mission and the time for the mission to be undertaken must be taken into account for conditional reliability calculations.  The system&#039;s conditional reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t|T)=\frac{R(T+t)}{R(T)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation above gives the reliability for a new mission of duration &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, given that the system has already accumulated &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; hours of operation up to the start of the new mission. The system is evaluated to ensure that it will start the next mission successfully.&lt;br /&gt;
&lt;br /&gt;
For the simple two-component system, the reliability for a mission of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; = 1000 hours, starting at an age of &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; = 500 hours, is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{S}}(t=1000|T=500)= &amp;amp; \frac{R(T+t)}{R(T)} \\ &lt;br /&gt;
= &amp;amp; \frac{R(1500)}{R(500)} \\ &lt;br /&gt;
= &amp;amp; \frac{{{e}^{-\tfrac{1500}{10,000}}}\cdot {{e}^{-{{\left( \tfrac{1500}{10,000} \right)}^{6}}}}}{{{e}^{-\tfrac{500}{10,000}}}\cdot {{e}^{-{{\left( \tfrac{500}{10,000} \right)}^{6}}}}} \\ &lt;br /&gt;
= &amp;amp; 0.9048=90.48%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
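This calculation is easy to reproduce. A brief Python check of the result above:&lt;br /&gt;

```python
import math

def R_sys(t):
    # Two-component series system from the example above
    return math.exp(-t / 10000) * math.exp(-((t / 10000) ** 6))

def conditional_R(t, T):
    # Reliability for a new mission of length t, given T hours already survived
    return R_sys(T + t) / R_sys(T)

print(round(conditional_R(1000, 500), 4))  # → 0.9048
```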
&lt;br /&gt;
===Conditional Reliability for Components===&lt;br /&gt;
Now in this formulation, it was assumed that the accumulated age was equivalent for both units. That is, both started life at zero and aged to 500.  It is possible to consider an individual component that has already accumulated some age (used component) in the same formulation.  To illustrate this, assume that component 2 started life with an age of T = 100.  Then the reliability equation of the system, as given in &amp;lt;math&amp;gt;{{R}_{S}}(t)= {{R}_{1}}(t)\cdot {{R}_{2}}(t)\,\!&amp;lt;/math&amp;gt;, would need to be modified to include a conditional term for 2, or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{S}}(t)={{R}_{1}}(t)\cdot \frac{{{R}_{2}}({{T}_{2}}+t)}{{{R}_{2}}({{T}_{2}})} \ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In BlockSim, the start age input box may be used to specify a starting age greater than zero.&lt;br /&gt;
&lt;br /&gt;
===System Failure Rate===&lt;br /&gt;
Once the distribution of the system has been determined, the failure rate can also be obtained by dividing the &#039;&#039;pdf&#039;&#039; by the reliability function:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\lambda }_{s}}\left( t \right)=\frac{{{f}_{s}}\left( t \right)}{{{R}_{s}}\left( t \right)}   \ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the simple two-component system: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{\lambda }_{s}}\left( t \right)= &amp;amp; \frac{-\tfrac{d}{dt}\left( {{e}^{-\tfrac{1}{10,000}t}}\cdot {{e}^{-{{\left( \tfrac{t}{10,000} \right)}^{6}}}} \right)}{{{e}^{-\tfrac{1}{10,000}t}}\cdot {{e}^{-{{\left( \tfrac{t}{10,000} \right)}^{6}}}}} \\ &lt;br /&gt;
= &amp;amp; \frac{-\tfrac{d}{dt}\left( {{e}^{-\tfrac{1}{10,000}t}} \right)}{{{e}^{-\tfrac{1}{10,000}t}}}+\frac{-\tfrac{d}{dt}\left( {{e}^{-{{\left( \tfrac{t}{10,000} \right)}^{6}}}} \right)}{{{e}^{-{{\left( \tfrac{t}{10,000} \right)}^{6}}}}} \\ &lt;br /&gt;
= &amp;amp; \frac{{{f}_{1}}}{{{R}_{1}}}+\frac{{{f}_{2}}}{{{R}_{2}}} \\ &lt;br /&gt;
= &amp;amp; {{\lambda }_{1}}+{{\lambda }_{2}}   \ &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following figure shows a plot of the equation.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS5.4.png|center|650px|Failure rate function plot of the two component system|link=]]&lt;br /&gt;
&lt;br /&gt;
BlockSim uses numerical methods to estimate the failure rate.  It should be pointed out that as &amp;lt;math&amp;gt;t\to \infty \,\!&amp;lt;/math&amp;gt;, numerical evaluation of the first equation above is constrained by machine numerical precision. That is, there are limits as to how large &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; can get before floating point problems arise.  For example, at &amp;lt;math&amp;gt;t=5,000,000\,\!&amp;lt;/math&amp;gt; both numerator and denominator will tend to zero (e.g., &amp;lt;math&amp;gt;{{e}^{-\tfrac{5,000,000}{10,000}}}=7.1245\times {{10}^{-218}}\,\!&amp;lt;/math&amp;gt; ).  As these numbers become very small they will start looking like a zero to the computer, or cause a floating point error, resulting in a &amp;lt;math&amp;gt;\tfrac{0}{0}\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\tfrac{X}{0}\,\!&amp;lt;/math&amp;gt; operation.  In these cases, BlockSim will return a value of &amp;quot;&amp;lt;math&amp;gt;N/A\,\!&amp;lt;/math&amp;gt;&amp;quot; for the result.  Obviously, this does not create any practical constraints.&lt;br /&gt;
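For this particular system the failure rate also has a simple closed form, the sum of the constant exponential rate and the Weibull rate, as derived above. The Python sketch below cross-checks the closed form against &amp;lt;math&amp;gt;f(t)/R(t)\,\!&amp;lt;/math&amp;gt; computed numerically:&lt;br /&gt;

```python
import math

def R_sys(t):
    # Two-component series system from the example above
    return math.exp(-t / 10000) * math.exp(-((t / 10000) ** 6))

def failure_rate(t):
    # lambda_s(t) = lambda_1 + lambda_2(t): the constant exponential rate
    # plus the Weibull rate (beta/eta)*(t/eta)^(beta-1)
    return 1 / 10000 + (6 / 10000) * ((t / 10000) ** 5)

# Numerical cross-check: lambda(t) = f(t)/R(t), with f(t) = -dR/dt
h, t0 = 0.01, 8000
numeric = -(R_sys(t0 + h) - R_sys(t0 - h)) / (2 * h) / R_sys(t0)
```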
&lt;br /&gt;
===System Mean Life (Mean Time To Failure)===&lt;br /&gt;
The mean life (or mean time to failure, MTTF) can be obtained by integrating the system reliability function from zero to infinity: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MTTF=\int_{0}^{\infty }{{R}_{s}}\left( t \right)dt   \ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The mean time is a performance index and does not provide any information about the behavior of the failure distribution of the system.&lt;br /&gt;
&lt;br /&gt;
For the simple two-component system: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
MTTF= &amp;amp; \int_{0}^{\infty }\left( {{e}^{-\tfrac{1}{10,000}t}}\cdot {{e}^{-{{\left( \tfrac{t}{10,000} \right)}^{6}}}} \right)dt \\ &lt;br /&gt;
= &amp;amp; 5978.9  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
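This integral has no closed form, but it is easy to evaluate numerically. A simple Python sketch using trapezoidal integration (the truncation point and step count are arbitrary choices):&lt;br /&gt;

```python
import math

def R_sys(t):
    # Two-component series system from the example above
    return math.exp(-t / 10000) * math.exp(-((t / 10000) ** 6))

def mttf(upper=50000.0, n=200000):
    # Trapezoidal rule on [0, upper]; beyond t = 50,000 the integrand
    # underflows to zero, so truncating there costs essentially nothing
    h = upper / n
    total = 0.5 * (R_sys(0) + R_sys(upper))
    total += sum(R_sys(i * h) for i in range(1, n))
    return total * h
```

The result agrees with the 5978.9 value above to within numerical tolerance.&lt;br /&gt;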
&lt;br /&gt;
===Warranty Period and BX Life===&lt;br /&gt;
Sometimes it is desirable to know the time value associated with a certain reliability.  Warranty periods are often calculated by determining what percentage of the failure population can be covered financially and estimating the time at which this portion of the population will fail.  Similarly, engineering specifications may call for a certain BX life, which also represents a time period during which a certain proportion of the population will fail.  For example, the B10 life is the time in which 10% of the population will fail.  &lt;br /&gt;
This is obtained by setting &amp;lt;math&amp;gt;{{R}_{S}}(t)\,\!&amp;lt;/math&amp;gt; to the desired value and solving for &amp;lt;math&amp;gt;t.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
For the simple two-component system: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{s}}\left( t \right)={{e}^{-\tfrac{1}{10,000}t}}\cdot {{e}^{-{{\left( \tfrac{t}{10,000} \right)}^{6}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To compute the time at which the reliability equals 90%, the equation above is recast as follows and solved for &amp;lt;math&amp;gt;t.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0.90={{e}^{-\tfrac{1}{10,000}t}}\cdot {{e}^{-{{\left( \tfrac{t}{10,000} \right)}^{6}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;math&amp;gt;t=1053.59\,\!&amp;lt;/math&amp;gt;.  Equivalently, the B10 life for this system is also 1053.59.&lt;br /&gt;
Except for some trivial cases, a closed form solution for &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; cannot be obtained, so it is necessary to solve for &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; numerically, which is what BlockSim does.&lt;br /&gt;
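Since &amp;lt;math&amp;gt;{{R}_{S}}(t)\,\!&amp;lt;/math&amp;gt; is strictly decreasing, the time corresponding to a given reliability can be found with simple bisection. A Python sketch for the 90% (B10) point above (bounds and tolerance are arbitrary choices):&lt;br /&gt;

```python
import math

def R_sys(t):
    # Two-component series system from the example above
    return math.exp(-t / 10000) * math.exp(-((t / 10000) ** 6))

def time_at_reliability(target, lo=0.0, hi=100000.0, tol=1e-6):
    # Bisection: R_sys is strictly decreasing, so the root is unique
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if R_sys(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(time_at_reliability(0.90), 1))  # → 1053.6
```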
&lt;br /&gt;
&lt;br /&gt;
===Examples===&amp;lt;!-- THIS SECTION HEADER IS LINKED TO THE PAGES THAT ARE TRANSCLUDED IN THIS SECTION. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Components in Series&#039;&#039;&#039;&amp;lt;hr&amp;gt;&lt;br /&gt;
{{:Time-Dependent System Reliability for Components in Series}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Components in Parallel&#039;&#039;&#039;&amp;lt;hr&amp;gt;&lt;br /&gt;
{{:Time-Dependent System Reliability for Components in Parallel}}&lt;br /&gt;
&lt;br /&gt;
==Approximating the System cdf==&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Example Using a Distribution to Approximate the CDF. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
In many cases, it is valuable to fit a distribution that represents the system&#039;s times-to-failure.  This can be useful when the system is part of a larger assembly and may be used for repeated calculations or in calculations for other systems.  In such cases, the system&#039;s behavior can be characterized by fitting a distribution to the overall system and calculating the parameters of this distribution.  This is equivalent to fitting a single distribution to describe &amp;lt;math&amp;gt;{{R}_{S}}(t)\,\!&amp;lt;/math&amp;gt;.  In essence, it is like reducing the entire system to a single component in order to simplify calculations.  &lt;br /&gt;
&lt;br /&gt;
For the system shown below: &lt;br /&gt;
[[Image:5-2.png|center|250px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt;Simple two-component system. &amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{S}}(t)={{e}^{-\tfrac{1}{10,000}t}}\cdot {{e}^{-{{\left( \tfrac{t}{10,000} \right)}^{6}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To compute an approximate reliability function for this system, &amp;lt;math&amp;gt;{{R}_{A}}(t)\simeq {{R}_{S}}(t)\,\!&amp;lt;/math&amp;gt;, one would compute &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; pairs of reliability and time values and then fit a single distribution to the data, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{S}}(t= &amp;amp; 10,396.7)=10% \\ &lt;br /&gt;
{{R}_{S}}(t= &amp;amp; 9,361.9)=20% \\ &lt;br /&gt;
&amp;amp; ... \\ &lt;br /&gt;
{{R}_{S}}(t= &amp;amp; 1,053.6)=90%  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A single distribution, &amp;lt;math&amp;gt;{{R}_{A}}(t)\,\!&amp;lt;/math&amp;gt;, that approximates &amp;lt;math&amp;gt;{{R}_{S}}(t)\,\!&amp;lt;/math&amp;gt; can now be computed from these pairs using life data analysis methods.  If using the Weibull++ software, one would enter the values as free-form data.&lt;br /&gt;
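The table of pairs above can be generated with the same bisection idea. A Python sketch (bounds and tolerance are arbitrary choices):&lt;br /&gt;

```python
import math

def R_sys(t):
    # Two-component series system from the example above
    return math.exp(-t / 10000) * math.exp(-((t / 10000) ** 6))

def time_at_reliability(target):
    # Bisection on the strictly decreasing R_sys
    lo, hi = 0.0, 100000.0
    while hi - lo > 1e-6:
        mid = (lo + hi) / 2
        if R_sys(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# (time, reliability%) pairs, suitable for entry as free-form data
pairs = [(time_at_reliability(r / 100), r) for r in range(10, 100, 10)]
```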
&lt;br /&gt;
===Example===&lt;br /&gt;
{{:Example Using a Distribution to Approximate the CDF}}&lt;br /&gt;
&lt;br /&gt;
==Duty Cycle==&lt;br /&gt;
Components of a system may not operate continuously during a system&#039;s mission, or may be subjected to loads greater or lesser than the rated loads during system operation.  To model this, a factor called the Duty Cycle ( &amp;lt;math&amp;gt;{{d}_{c}}\,\!&amp;lt;/math&amp;gt; ) is used.  The duty cycle may also be used to account for changes in environmental stress, such as temperature changes, that may affect the operation of a component.  The duty cycle is a positive value, with a default value of 1 representing continuous operation at rated load, and any values other than 1 representing other load values with respect to the rated load value (or total operating time).   A duty cycle value higher than 1 indicates a load in excess of the rated value.  A duty cycle value lower than 1 indicates that the component is operating at a load lower than the rated load or not operating continuously during the system&#039;s mission.  For instance, a duty cycle of 0.5 may be used for a component that operates only half of the time during the system&#039;s mission.&lt;br /&gt;
&lt;br /&gt;
The reliability metrics for a component with a duty cycle are calculated as follows. Let &amp;lt;math&amp;gt;{{d}_{c}}\,\!&amp;lt;/math&amp;gt; represent the duty cycle during a particular mission of the component, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; represent the mission time and &amp;lt;math&amp;gt;{t}&#039;\,\!&amp;lt;/math&amp;gt; represent the accumulated age. Then:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{t}&#039;={{d}_{c}}\times t\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability equation for the component is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R({t}&#039;)=R({{d}_{c}}\times t)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The component &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f({t}&#039;)=-\frac{d(R({t}&#039;))}{dt}=-\frac{d(R({{d}_{c}}\times t))}{dt}={{d}_{c}}f({{d}_{c}}\times t)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The failure rate of the component is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda ({t}&#039;)=\frac{f({t}&#039;)}{R({t}&#039;)}=\frac{{{d}_{c}}f({{d}_{c}}\times t)}{R({{d}_{c}}\times t)}={{d}_{c}}\lambda ({{d}_{c}}\times t)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
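These duty cycle relationships are mechanical to apply. A Python sketch for a Weibull component (&amp;lt;math&amp;gt;\beta =6\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\eta =10,000\,\!&amp;lt;/math&amp;gt;, from the earlier example) with an assumed duty cycle of 0.5:&lt;br /&gt;

```python
import math

BETA, ETA = 6, 10000  # Weibull parameters from the earlier example
DC = 0.5              # assumed duty cycle: operates half the mission time

def weibull_R(t):
    return math.exp(-((t / ETA) ** BETA))

def weibull_f(t):
    return (BETA / ETA) * ((t / ETA) ** (BETA - 1)) * weibull_R(t)

def R_dc(t):
    # Reliability with duty cycle: R(t') with accumulated age t' = dc * t
    return weibull_R(DC * t)

def lambda_dc(t):
    # Failure rate with duty cycle: dc * lambda(dc * t)
    return DC * weibull_f(DC * t) / weibull_R(DC * t)

# Cross-check lambda_dc against -R'/R computed numerically
h, t0 = 0.01, 8000
numeric = -(R_dc(t0 + h) - R_dc(t0 - h)) / (2 * h) / R_dc(t0)
```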
&lt;br /&gt;
===Example===&amp;lt;!-- THIS SECTION HEADER IS LINKED TO The PAGE TRANSCLUDED IN THIS SECTION. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
{{:Example Calculating System Reliability with Duty Cycles}}&lt;br /&gt;
&lt;br /&gt;
==Load Sharing==&lt;br /&gt;
As presented in earlier chapters, a reliability block diagram (RBD) allows you to graphically represent how the components within a system are reliability-wise connected. In most cases, independence is assumed across the components within the system. For example, the failure of component A does not affect the failure of component B. However, if a system consists of components that are sharing a load, then the assumption of independence no longer holds true.&lt;br /&gt;
&lt;br /&gt;
If one component fails, then the component(s) that are still operating will have to assume the failed unit&#039;s portion of the load. Therefore, the reliabilities of the surviving unit(s) will change. Calculating the system reliability is no longer an easy proposition. In the case of load sharing components, the change of the failure distributions of the surviving components must be known in order to determine the system&#039;s reliability.&lt;br /&gt;
&lt;br /&gt;
To illustrate this, consider a system of two units connected reliability-wise in parallel as shown below.&lt;br /&gt;
[[Image:5-16.png|center|400px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Two units connected reliability-wise in parallel.&amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
Assume that the units must supply an output of 8 volts and that if both units are operational, each unit is to supply 50% of the total output. If one of the units fails, then the surviving unit supplies 100%. Furthermore, assume that having to supply the entire load has a negative impact on the reliability characteristics of the surviving unit. &lt;br /&gt;
&lt;br /&gt;
Because the reliability characteristics of the unit change based on the load it is sharing, a method that can model the effect of the load on life should be used. One way to do this is to use a life distribution along with a life-stress relationship (as discussed in [[Statistical_Background#A_Brief_Introduction_to_Life-Stress_Relationships|A Brief Introduction to Life-Stress Relationships]]) for each component. The detailed discussion for this method can be found at [[Additional Information on Load Sharing]]. Another simple way is to use the concept of acceleration factors and assume that the load has a linear effect on the failure time. If the load is doubled, then the life of the component will be shortened by half. &lt;br /&gt;
&lt;br /&gt;
For the above load sharing system, the reliability of each component is a function of time and load. For example, for Unit 1, the reliability and the probability density function are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{1}}(t,{{S}_{1}})\,\!\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{f}_{1}}(t,{{S}_{1}})\,\!\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{S}_{1}}\,\!\,\!&amp;lt;/math&amp;gt; is the load shared by Unit 1 at time &#039;&#039;t&#039;&#039; and the total load of the system is &amp;lt;math&amp;gt;S={{S}_{1}}+{{S}_{2}}\,\!\,\!&amp;lt;/math&amp;gt;. At the beginning, both units are working. Assume that Unit 1 fails at time &#039;&#039;x&#039;&#039; and Unit 2 takes over the entire load. The reliability for Unit 2 at time &#039;&#039;x&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{R}_{2}}(x,{{S}_{2}})={{R}_{2}}({{t}_{2e}},S) \\ &lt;br /&gt;
 &amp;amp; {{t}_{2e}}=\frac{{{S}_{2}}}{S}x \\ &lt;br /&gt;
\end{align}\,\!\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;{{t}_{2e}}\,\!\,\!&amp;lt;/math&amp;gt; is the equivalent time for Unit 2 at time &#039;&#039;x&#039;&#039; if it is operated with load &#039;&#039;S&#039;&#039;. The equivalent time concept is illustrated in the following plot. &lt;br /&gt;
[[Image:BS5.19.png|center|600px| Illustrating &amp;lt;math&amp;gt;{{t}_{e}}\,\!&amp;lt;/math&amp;gt;|link=]] &lt;br /&gt;
The system reliability at time &#039;&#039;t&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R(t)={{R}_{1}}(t,{{S}_{1}})\cdot {{R}_{2}}(t,{{S}_{2}}) \\ &lt;br /&gt;
 &amp;amp; +\int_{0}^{t}{{{f}_{1}}(x,{{S}_{1}})\cdot {{R}_{2}}({{t}_{2e}}+(t-x),S)dx} \\ &lt;br /&gt;
 &amp;amp; +\int_{0}^{t}{{{f}_{2}}(x,{{S}_{2}})\cdot {{R}_{1}}({{t}_{1e}}+(t-x),S)dx}  &lt;br /&gt;
\end{align}\,\!\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In BlockSim, the failure time distribution for each component is defined at the load of &#039;&#039;S&#039;&#039;. The reliability function for a component at a given load is calculated as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{i}}(t,{{S}_{i}})={{R}_{i}}(t\times \frac{{{S}_{i}}}{S},S)\,\!\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
::&amp;lt;math&amp;gt;{{f}_{i}}(t,{{S}_{i}})=\frac{{{S}_{i}}}{S}{{f}_{i}}\left( t\times \frac{{{S}_{i}}}{S},S \right)\,\!\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
From the above equation, it can be seen that the concept used in the calculation for load sharing is the same as the concept used in the calculation for [[Time-Dependent_System_Reliability_(Analytical)#Duty_Cycle|duty cycle]].&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
In the following load sharing system, Block 1 follows a Weibull failure distribution with &amp;lt;math&amp;gt;{{\beta }_{1}}=1.5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\eta }_{1}}=1,000\,\!&amp;lt;/math&amp;gt;. Block 2 follows a Weibull failure distribution with &amp;lt;math&amp;gt;{{\beta }_{2}}=2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\eta }_{2}}=2,000\,\!&amp;lt;/math&amp;gt;. The load for Block 1 is 1 unit, and for Block 2 it is 3 units. Calculate the system reliability at time 1,500. &lt;br /&gt;
&lt;br /&gt;
[[Image:loadsharingconfig.png|center|345px| Load Sharing System|link=]] &lt;br /&gt;
Block 1 shares 25% (P1) of the entire load, and Block 2 shares 75% (P2) of it. Therefore, we have the following equations for calculating the system reliability:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{i}}(t,{{S}_{i}})={{R}_{i}}({{P}_{i}}\times t)\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
::&amp;lt;math&amp;gt;{{f}_{i}}(x,{{S}_{i}})={{P}_{i}}{{f}_{i}}({{P}_{i}}\times x)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{1}}({{t}_{1e}}+(t-x))={{R}_{1}}({{P}_{1}}x+t-x)={{R}_{1}}(t-{{P}_{2}}x)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
::&amp;lt;math&amp;gt;{{R}_{2}}({{t}_{2e}}+(t-x))={{R}_{2}}({{P}_{2}}x+t-x)={{R}_{2}}(t-{{P}_{1}}x)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using the above equations in the system reliability function, we get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R(t)={{R}_{1}}(t,{{S}_{1}})\cdot {{R}_{2}}(t,{{S}_{2}})+\int_{0}^{t}{{{f}_{1}}(x,{{S}_{1}})\cdot {{R}_{2}}({{t}_{2e}}+(t-x),S)dx} \\ &lt;br /&gt;
 &amp;amp; +\int_{0}^{t}{{{f}_{2}}(x,{{S}_{2}})\cdot {{R}_{1}}({{t}_{1e}}+(t-x),S)dx} \\ &lt;br /&gt;
 &amp;amp; ={{R}_{1}}({{P}_{1}}\times t)\cdot {{R}_{2}}({{P}_{2}}\times t)+\int_{0}^{t}{{{P}_{1}}{{f}_{1}}({{P}_{1}}x)\cdot {{R}_{2}}(t-{{P}_{1}}x)dx} \\ &lt;br /&gt;
 &amp;amp; +\int_{0}^{t}{{{P}_{2}}{{f}_{2}}({{P}_{2}}x)\cdot {{R}_{1}}(t-{{P}_{2}}x)dx}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The calculated system reliability at time 1,500 is 0.8569, as given below. &lt;br /&gt;
[[Image:loadsharingresults.png|center|450px|Calculated System Reliability at Time 1,500|link=]]&lt;br /&gt;
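The system reliability expression above lends itself to direct numerical evaluation. The following Python sketch (our own helper names; a composite Simpson's rule standing in for BlockSim's integration routines) closely reproduces the reported value of 0.8569:

```python
import math

def weibull_R(t, beta, eta):
    """Weibull reliability function."""
    return math.exp(-(t / eta) ** beta)

def weibull_f(t, beta, eta):
    """Weibull probability density function."""
    if t <= 0:
        return 0.0
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-(t / eta) ** beta)

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Block parameters and load shares from the example
b1, e1 = 1.5, 1000.0
b2, e2 = 2.0, 2000.0
P1, P2 = 0.25, 0.75
t = 1500.0

# R(t) = R1(P1 t) R2(P2 t)                                  (neither fails)
#      + int_0^t P1 f1(P1 x) R2(t - P1 x) dx                (Block 1 fails first)
#      + int_0^t P2 f2(P2 x) R1(t - P2 x) dx                (Block 2 fails first)
term0 = weibull_R(P1 * t, b1, e1) * weibull_R(P2 * t, b2, e2)
term1 = simpson(lambda x: P1 * weibull_f(P1 * x, b1, e1)
                * weibull_R(t - P1 * x, b2, e2), 0.0, t)
term2 = simpson(lambda x: P2 * weibull_f(P2 * x, b2, e2)
                * weibull_R(t - P2 * x, b1, e1), 0.0, t)
R_sys = term0 + term1 + term2
```

The first term covers the case where neither block fails by time t; each integral covers one block failing first at time x, with the survivor carrying the full load thereafter.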
&lt;br /&gt;
==Standby Components== &amp;lt;!-- THIS SECTION HEADER IS LINKED TO: Standby_Configuration_Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
In the previous section, the case of a system with load sharing components was presented.  This is a form of redundancy with dependent components. That is, the failure of one component affects the failure of the other(s).  This section presents another form of redundancy: standby redundancy.  In standby redundancy the redundant components are set to be under a lighter load condition (or no load) while not needed and under the operating load when they are activated.&lt;br /&gt;
&lt;br /&gt;
In standby redundancy the components are set to have two states: an active state and a standby state.  Components in standby redundancy have two failure distributions, one for each state.  When in the standby state, they have a quiescent (or dormant) failure distribution and when operating, they have an active failure distribution.&lt;br /&gt;
&lt;br /&gt;
In the case that both quiescent and active failure distributions are the same, the units are in a simple parallel configuration (also called a hot standby configuration).  When the rate of failure of the standby component is lower in quiescent mode than in active mode, that is called a warm standby configuration.  When the rate of failure of the standby component is zero in quiescent mode (i.e., the component cannot fail when in standby), that is called a cold standby configuration.  &lt;br /&gt;
&lt;br /&gt;
===Simple Standby Configuration===&lt;br /&gt;
Consider two components in a standby configuration.  Component 1 is the active component with a Weibull failure distribution with parameters &amp;lt;math&amp;gt;\beta = 1.5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta  = 1,000 \,\!&amp;lt;/math&amp;gt;.  Component 2 is the standby component.  When Component 2 is operating, it also has a Weibull failure distribution with &amp;lt;math&amp;gt;\beta  = 1.5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta  = 1,000 \,\!&amp;lt;/math&amp;gt;.  Furthermore, assume the following cases for the quiescent distribution.&lt;br /&gt;
&lt;br /&gt;
:*Case 1:  The quiescent distribution is the same as the active distribution (hot standby).&lt;br /&gt;
&lt;br /&gt;
:*Case 2:  The quiescent distribution is a Weibull  distribution with &amp;lt;math&amp;gt;\beta = 1.5\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta = 2000\,\!&amp;lt;/math&amp;gt; (warm standby).&lt;br /&gt;
&lt;br /&gt;
:*Case 3: The component cannot fail in quiescent mode (cold standby).&lt;br /&gt;
&lt;br /&gt;
In this case, the reliability of the system at some time, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, can be obtained using the following equation:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t)={{R}_{1}}(t)+\underset{0}{\overset{t}{\mathop \int }}\,{{f}_{1}}(x)\cdot {{R}_{2;SB}}(x)\cdot \frac{{{R}_{2;A}}({{t}_{e}}+t-x)}{{{R}_{2;A}}({{t}_{e}})}dx   \ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
:*&amp;lt;math&amp;gt;{{R}_{1}}\,\!&amp;lt;/math&amp;gt; is the reliability of the active component.&lt;br /&gt;
&lt;br /&gt;
:*&amp;lt;math&amp;gt;{{f}_{1}}\,\!&amp;lt;/math&amp;gt; is the &#039;&#039;pdf&#039;&#039; of the active component.&lt;br /&gt;
&lt;br /&gt;
:*&amp;lt;math&amp;gt;{{R}_{2;SB}}\,\!&amp;lt;/math&amp;gt; is the reliability of the standby component when in standby mode (quiescent reliability).&lt;br /&gt;
&lt;br /&gt;
:*&amp;lt;math&amp;gt;{{R}_{2;A}}\,\!&amp;lt;/math&amp;gt; is the reliability of the standby component when in active mode.&lt;br /&gt;
&lt;br /&gt;
:*&amp;lt;math&amp;gt;{{t}_{e}}\,\!&amp;lt;/math&amp;gt; is the equivalent operating time for the standby unit if it had been operating in active mode, such that:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{2;SB}}(x)={{R}_{2;A}}({{t}_{e}})  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The second equation above can be solved for &amp;lt;math&amp;gt;{{t}_{e}}\,\!&amp;lt;/math&amp;gt; and substituted into the first equation above.&lt;br /&gt;
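The three cases can be evaluated numerically from the equation above. A minimal Python sketch, assuming the equal-shape relationship &lt;math&gt;t_e = x \cdot \eta_A / \eta_{SB}&lt;/math&gt; that follows from solving the second equation (helper names ours; Simpson's rule stands in for BlockSim's integration routines):

```python
import math

def R_w(t, beta, eta):
    """Weibull reliability function."""
    return math.exp(-(t / eta) ** beta)

def f_w(t, beta, eta):
    """Weibull pdf."""
    if t <= 0:
        return 0.0
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-(t / eta) ** beta)

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

beta, eta = 1.5, 1000.0   # active distribution of both components
t = 1000.0

def standby_R(t, eta_sb):
    """R(t) = R1(t) + int_0^t f1(x) R2sb(x) R2a(te+t-x)/R2a(te) dx.
    Both distributions share beta, so te = x * eta / eta_sb.
    eta_sb=None models cold standby (no quiescent failures, te = 0)."""
    def integrand(x):
        if eta_sb is None:
            r_sb, te = 1.0, 0.0
        else:
            r_sb, te = R_w(x, beta, eta_sb), x * eta / eta_sb
        return (f_w(x, beta, eta) * r_sb
                * R_w(te + t - x, beta, eta) / R_w(te, beta, eta))
    return R_w(t, beta, eta) + simpson(integrand, 0.0, t)

hot  = standby_R(t, eta)       # case 1: quiescent = active
warm = standby_R(t, 2000.0)    # case 2: quiescent Weibull(1.5, 2000)
cold = standby_R(t, None)      # case 3: cannot fail in standby
```

As expected, the hot case collapses to a simple parallel configuration, and reliability increases from hot to warm to cold standby, consistent with the results shown below.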
The following figure illustrates the example as entered in BlockSim using a standby container.&lt;br /&gt;
&lt;br /&gt;
[[Image:5_24_new.png|center|200px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Standby container.&amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
The active and standby blocks are within a container, which is used to specify standby redundancy.  Since the standby component has two distributions (active and quiescent), the Block Properties window of the standby block has two pages for specifying each one.  The following figures illustrate these pages.&lt;br /&gt;
[[Image: Fig 5.25.PNG|center|600px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Defining the active failure distribution &amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Fig 5.26.PNG|center|600px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Defining the quiescent failure distribution &amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
 &lt;br /&gt;
The system reliability results for 1000 hours are given in the following table:&lt;br /&gt;
&lt;br /&gt;
[[Image:5-24.png|center|250px|link=|]]&lt;br /&gt;
 &lt;br /&gt;
Note that even though the &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; value for the quiescent distribution is the same as that of the active distribution in this example, the two can be different. That is, the failure modes present during the quiescent mode could be different from the modes present during the active mode.  In that sense, the two distribution types can be different as well (e.g., lognormal when quiescent and Weibull when active).&lt;br /&gt;
&lt;br /&gt;
In many cases when considering standby systems, a switching device may also be present that switches from the failed active component to the standby component.  The reliability of the switch can also be incorporated into &lt;br /&gt;
&amp;lt;math&amp;gt;R(t)={{R}_{1}}(t)+\underset{0}{\overset{t}{\mathop \int }}\,{{f}_{1}}(x)\cdot {{R}_{2;SB}}(x)\cdot \frac{{{R}_{2;A}}({{t}_{e}}+t-x)}{{{R}_{2;A}}({{t}_{e}})}dx   \ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
as presented in the next section.&lt;br /&gt;
&lt;br /&gt;
BlockSim&#039;s System Reliability Equation window returns a single token for the reliability of units in a standby configuration.  This is the same as the load sharing case presented in the previous section.&lt;br /&gt;
&lt;br /&gt;
===Reliability of Standby Systems with a Switching Device===&lt;br /&gt;
In many cases when dealing with standby systems, a switching device is present that will switch to the standby component when the active component fails.  Therefore, the failure properties of the switch must also be included in the analysis.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS5_26_2_new.png|center|250px|link=]]&lt;br /&gt;
&lt;br /&gt;
In most cases when the reliability of a switch is to be included in the analysis, two probabilities can be considered.  The first and most common one is the probability of the switch performing the action (i.e., switching) when requested to do so.  This is called Switch Probability per Request in BlockSim and is expressed as a static probability (e.g., 90%).  The second probability is the quiescent reliability of the switch.  This is the reliability of the switch as it ages (e.g., the switch might wear out with age due to corrosion, material degradation, etc.).  Thus it is possible for the switch to fail before the active component fails.  However, a switch failure alone does not cause the system to fail; the system fails only if the switch is needed and has already failed.  For example, if the active component does not fail before the mission end time, a failed switch does not fail the system.  However, if the active component fails and the switch has also failed, then the system cannot switch to the standby component and therefore fails.&lt;br /&gt;
&lt;br /&gt;
In analyzing standby components with a switching device, either or both failure probabilities (during the switching or while waiting to switch) can be considered for the switch, since each probability can represent different failure modes.  For example, the switch probability per request may represent software-related issues or the probability of detecting the failure of an active component, and the quiescent probability may represent wear-out type failures of the switch.&lt;br /&gt;
&lt;br /&gt;
To illustrate the formulation, consider the previous example that assumes perfect switching.  To examine the effects of including an imperfect switch, assume that when the active component fails there is a 90% probability that the switch will switch from the active component to the standby component.  In addition, assume that the switch can also fail due to a wear-out failure mode described by a Weibull distribution with &amp;lt;math&amp;gt;\beta = 1.7\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta = 5000\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore, the reliability of the system at some time, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, is given by the following equation.&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t)= &amp;amp; {{R}_{1}}(t) +\underset{0}{\overset{t}{\mathop \int }}\,\{{{f}_{1}}(x)\cdot {{R}_{2;SB}}(x) \\ &lt;br /&gt;
&amp;amp; \cdot \frac{{{R}_{2;A}}({{t}_{e}}+t-x)}{{{R}_{2;A}}({{t}_{e}})}\cdot {{R}_{SW;Q}}(x)\cdot {{R}_{SW;REQ}}(x)\}dx  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
:*&amp;lt;math&amp;gt;{{R}_{1}}\,\!&amp;lt;/math&amp;gt; is the reliability of the active component.&lt;br /&gt;
&lt;br /&gt;
:*&amp;lt;math&amp;gt;{{f}_{1}}\,\!&amp;lt;/math&amp;gt; is the &#039;&#039;pdf&#039;&#039; of the active component.&lt;br /&gt;
&lt;br /&gt;
:*&amp;lt;math&amp;gt;{{R}_{2;SB}}\,\!&amp;lt;/math&amp;gt; is the reliability of the standby component when in standby mode (quiescent reliability).&lt;br /&gt;
&lt;br /&gt;
:*&amp;lt;math&amp;gt;{{R}_{2;A}}\,\!&amp;lt;/math&amp;gt; is the reliability of the standby component when in active mode.&lt;br /&gt;
&lt;br /&gt;
:*&amp;lt;math&amp;gt;{{R}_{SW;Q}}\,\!&amp;lt;/math&amp;gt; is the quiescent reliability of the switch.&lt;br /&gt;
&lt;br /&gt;
:*&amp;lt;math&amp;gt;{{R}_{SW;REQ}}\,\!&amp;lt;/math&amp;gt; is the switch probability per request.&lt;br /&gt;
&lt;br /&gt;
:*&amp;lt;math&amp;gt;{{t}_{e}}\,\!&amp;lt;/math&amp;gt; is the equivalent operating time for the standby unit if it had been operating in active mode.&lt;br /&gt;
&lt;br /&gt;
This problem can be solved in BlockSim by including these probabilities in the container&#039;s properties, as shown in the figures below.  In BlockSim, the standby container is acting as the switch.&lt;br /&gt;
 &lt;br /&gt;
[[Image:Fig 5.28_2.PNG|center|600px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Standby container (switch) failure probabilities while attempting to switch &amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Fig 5.27.PNG|center|600px|&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt; Standby container (switch) failure distribution while waiting to switch &amp;lt;/div&amp;gt;|link=]]&lt;br /&gt;
&lt;br /&gt;
Note that there are additional properties that can be specified in BlockSim for a switch, such as Switch Restart Probability, No. of Restarts and Switch Delay Time.  In many applications, the switch is re-tested (or re-cycled) if it fails to switch the first time.  In these cases, it might be possible that it switches on the second, third or &amp;lt;math&amp;gt;{{n}^{th}}\,\!&amp;lt;/math&amp;gt; attempt.  &lt;br /&gt;
&lt;br /&gt;
The Switch Restart Probability specifies each additional attempt&#039;s probability of successfully switching, and the No. of Restarts specifies the total number of attempts.  Note that the Switch Restart Probability specifies the probability of success of each trial (or attempt).  The probability of success in &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; consecutive trials is calculated by BlockSim using the binomial distribution, and this probability is then incorporated into the equation above.  The Switch Delay Time property is related to repairable systems and is considered in BlockSim only when using simulation.  When using the analytical solution (i.e., for a non-repairable system), this property is ignored.&lt;br /&gt;
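Under the simplifying assumption that every attempt (initial request and restarts alike) has the same success probability, the chance that the switch succeeds within n attempts is the complement of n consecutive failures. A one-line sketch with illustrative values:

```python
# Probability that the switch succeeds within n attempts, assuming (for
# illustration only) the same success probability p on every attempt
p, n = 0.9, 3
p_success = 1 - (1 - p) ** n  # complement of n consecutive failed attempts
```

In BlockSim the initial attempt and restarts can carry different probabilities; this sketch only illustrates the complement-of-failures idea.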
&lt;br /&gt;
Evaluating the analytical solution (as given by the above equation) yields the following results.&lt;br /&gt;
&lt;br /&gt;
[[Image:5-30.png|center|600px]]&lt;br /&gt;
 &lt;br /&gt;
From the table above, it can be seen that the presence of a switching device has a significant effect on the reliability of a standby system.  It is therefore important when modeling standby redundancy to incorporate the switching device reliability properties.  It should be noted that this methodology is not the same as treating the switching device as another series component with the standby subsystem.  This would be valid only if the failure of the switch resulted in the failure of the system (e.g., the switch failing open).  In the equation above, the Switch Probability per Request and quiescent probability are present only in the second term of the equation.  Treating these two failure modes as a series configuration with the standby subsystem would imply that they are also present when the active component is functioning (i.e., the first term of the equation above).  This is invalid and would result in the underestimation of the reliability of the system.  In other words, these two failure modes become significant only when the active component fails.&lt;br /&gt;
&lt;br /&gt;
As an example, if we consider the warm standby case, the reliability of the system without the switch is 70.57% at 1000 hours.  If the system were modeled so that the switching device was in series with the warm standby subsystem, the result would have been:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{S}}(1000)= &amp;amp; {{R}_{Standby}}(1000)\cdot {{R}_{sw,Q(1000)}}\cdot {{R}_{sw,req}} \\ &lt;br /&gt;
= &amp;amp; 0.7057\cdot 0.9372\cdot 0.9 \\ &lt;br /&gt;
= &amp;amp; 0.5952  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the case where a switch failure mode causes the standby subsystem to fail, then this mode can be modeled as an individual block in series with the standby subsystem.&lt;br /&gt;
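The exact formulation and the series approximation can also be compared numerically for the warm standby case above (switch quiescent Weibull with beta = 1.7 and eta = 5,000, 90% probability per request). A sketch with our own helper names, using Simpson's rule in place of BlockSim's routines, confirming that the series treatment underestimates the system reliability:

```python
import math

def R_w(t, beta, eta):
    """Weibull reliability function."""
    return math.exp(-(t / eta) ** beta)

def f_w(t, beta, eta):
    """Weibull pdf."""
    if t <= 0:
        return 0.0
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-(t / eta) ** beta)

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

beta, eta = 1.5, 1000.0   # active distribution, both components
eta_sb = 2000.0           # warm standby quiescent eta (same beta)
b_sw, e_sw = 1.7, 5000.0  # switch quiescent (wear-out) distribution
p_req = 0.9               # switch probability per request
t = 1000.0

def integrand(x):
    te = x * eta / eta_sb  # equivalent age: R2sb(x) = R2a(te), same beta
    return (f_w(x, beta, eta) * R_w(x, beta, eta_sb)
            * R_w(te + t - x, beta, eta) / R_w(te, beta, eta)
            * R_w(x, b_sw, e_sw) * p_req)

R_exact = R_w(t, beta, eta) + simpson(integrand, 0.0, t)

# Series approximation from the text (switch in series with the subsystem)
R_series = 0.7057 * R_w(t, b_sw, e_sw) * p_req
```

The exact result falls between the series approximation (0.5952) and the no-switch value (0.7057), since the switch terms penalize only the second term of the equation.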
&lt;br /&gt;
===Example===&lt;br /&gt;
{{:Standby Configuration Example}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;noprint&amp;quot;&amp;gt;&lt;br /&gt;
{{Examples Box|BlockSim_Examples|&amp;lt;p&amp;gt;More examples on load sharing and standby configurations are available! See also:&amp;lt;/p&amp;gt; &lt;br /&gt;
{{Examples Link External|http://www.reliasoft.com/BlockSim/examples/rc4/index.htm|Modeling Failure Modes}}&amp;lt;nowiki/&amp;gt;&lt;br /&gt;
{{Examples Link|Load_Sharing_Configuration_Example|Load Sharing Configuration Example}}}}&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Note Regarding Numerical Integration Solutions==&lt;br /&gt;
Load sharing and standby solutions in BlockSim are performed using numerical integration routines. As with any numerical analysis routine, the solution error depends on the number of iterations performed, the step size chosen and related factors, plus the behavior of the underlying function. By default, BlockSim uses a certain set of preset factors. In general, these defaults are sufficient for most problems. If a higher precision or verification of the precision for a specific problem is required, BlockSim&#039;s preset options can be modified and/or the integration error can be assessed using the &#039;&#039;&#039;Integration Parameters&#039;&#039;&#039; option for each container. For more details, you can refer to the documentation on the Algorithm Setup window in the BlockSim help file.&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Brief_Statistical_Background&amp;diff=64919</id>
		<title>Brief Statistical Background</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Brief_Statistical_Background&amp;diff=64919"/>
		<updated>2017-02-08T20:00:08Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Conditional Reliability Function */ changed R(T,t) to R(t|T)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Navigation box}}[[Category:Shared Articles]]&lt;br /&gt;
&#039;&#039;This article also appears in the [[Basic_Statistical_Background|Life Data Analysis Reference]], [[Appendix_A:_Brief_Statistical_Background|Accelerated Life Testing Data Analysis Reference]] and [[Statistical_Background#A_Brief_Introduction_to_Continuous_Life_Distributions|System Analysis Reference]] books.&#039;&#039; &amp;lt;/noinclude&amp;gt;&lt;br /&gt;
===Random Variables===&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Image:chp3randomvariables.png|center|150px|link=|]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In general, most problems in reliability engineering deal with quantitative measures, such as the time-to-failure of a component, or qualitative measures, such as whether a component is defective or non-defective. We can then use a random variable &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; to denote these possible measures.&lt;br /&gt;
&lt;br /&gt;
In the case of times-to-failure, our random variable &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; is the time-to-failure of the component and can take on an infinite number of possible values in a range from 0 to infinity (since we do not know the exact time &#039;&#039;a priori&#039;&#039;). Our component can be found failed at any time after time 0 (e.g., at 12 hours or at 100 hours and so forth); thus &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; can take on any value in this range. In this case, our random variable &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; is said to be a &#039;&#039;continuous random variable&#039;&#039;. In this reference, we will deal almost exclusively with continuous random variables.&lt;br /&gt;
&lt;br /&gt;
In judging a component to be defective or non-defective, only two outcomes are possible. That is, &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; is a random variable that can take on one of only two values (let&#039;s say defective = 0 and non-defective = 1). In this case, the variable is said to be a discrete random variable.&lt;br /&gt;
&lt;br /&gt;
===The Probability Density Function and the Cumulative Distribution Function===&lt;br /&gt;
&lt;br /&gt;
The probability density function (&#039;&#039;pdf&#039;&#039;) and cumulative distribution function (&#039;&#039;cdf&#039;&#039;) are two of the most important statistical functions in reliability and are very closely related. When these functions are known, almost any other reliability measure of interest can be derived or obtained. We will now take a closer look at these functions and how they relate to other reliability measures, such as the reliability function and failure rate.&lt;br /&gt;
&lt;br /&gt;
From probability and statistics, given a continuous random variable &amp;lt;math&amp;gt;X,\,\!&amp;lt;/math&amp;gt; we denote:&lt;br /&gt;
&lt;br /&gt;
:*The probability density function,  &#039;&#039;pdf&#039;&#039;, as &amp;lt;math&amp;gt;f(x)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
:*The cumulative distribution function, &#039;&#039;cdf&#039;&#039;, as &amp;lt;math&amp;gt;F(x)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The  &#039;&#039;pdf&#039;&#039;  and  &#039;&#039;cdf&#039;&#039;  give a complete description of the probability distribution of a random variable. The following figure illustrates a &#039;&#039;pdf&#039;&#039;.&lt;br /&gt;
[[Image:3.3.png|center|400px|Example of a &#039;&#039;pdf&#039;&#039;.|link=]]&lt;br /&gt;
&lt;br /&gt;
The next figures  illustrate the  &#039;&#039;pdf&#039;&#039; - &#039;&#039;cdf&#039;&#039;  relationship.&lt;br /&gt;
[[Image:chp3pdf.png|center|550px|Graphical representation of the relationship between &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;.|link=]]&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; is a continuous random variable, then the &#039;&#039;pdf&#039;&#039; of &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; is a function, &amp;lt;math&amp;gt;f(x)\,\!&amp;lt;/math&amp;gt;, such that for any two numbers, &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;a\le b\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;P(a\le X\le b)=\int_{a}^{b}f(x)dx\ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That is, the probability that &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; takes on a value in the interval &amp;lt;math&amp;gt;[a,b]\,\!&amp;lt;/math&amp;gt; is the area under the density function from &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;b,\,\!&amp;lt;/math&amp;gt; as shown above. The  &#039;&#039;pdf&#039;&#039;  represents the relative frequency of failure times as a function of time.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;cdf&#039;&#039; is a function, &amp;lt;math&amp;gt;F(x)\,\!&amp;lt;/math&amp;gt;, of a random variable &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, and is defined for a number &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(x)=P(X\le x)=\int_{0}^{x}f(s)ds\ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That is, for a number &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;F(x)\,\!&amp;lt;/math&amp;gt; is the probability that the observed value of &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; will be at most &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;. The  &#039;&#039;cdf&#039;&#039; represents the cumulative values of the  &#039;&#039;pdf&#039;&#039;. That is, the value of a point on the curve of the  &#039;&#039;cdf&#039;&#039;  represents the area under the curve to the left of that point on the  &#039;&#039;pdf&#039;&#039;. In reliability, the  &#039;&#039;cdf&#039;&#039;  is used to measure the probability that the item in question will fail before the associated time value, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, and is also called &#039;&#039;unreliability&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Note that depending on the density function, denoted by &amp;lt;math&amp;gt;f(x)\,\!&amp;lt;/math&amp;gt;, the limits will vary based on the region over which the distribution is defined. For example, for the life distributions considered in this reference, with the exception of the normal distribution, this range would be &amp;lt;math&amp;gt;[0,+\infty ).\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
====Mathematical Relationship: &#039;&#039;pdf&#039;&#039; and &#039;&#039;cdf&#039;&#039;====&lt;br /&gt;
The mathematical relationship between the  &#039;&#039;pdf&#039;&#039;  and  &#039;&#039;cdf&#039;&#039;  is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(x)=\int_{0}^{x}f(s)ds \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;s\,\!&amp;lt;/math&amp;gt; is a dummy integration variable.&lt;br /&gt;
&lt;br /&gt;
Conversely: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(x)=\frac{d(F(x))}{dx}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The  &#039;&#039;cdf&#039;&#039;  is the area under the probability density function up to a value of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;. The total area under the &#039;&#039;pdf&#039;&#039; is always equal to 1, or mathematically:&lt;br /&gt;
&lt;br /&gt;
[[Image:3.5.png|center|400px|Total area under a &#039;&#039;pdf&#039;&#039;.|link=]]&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty}^{+\infty }f(x)dx=1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The well-known normal (or Gaussian) distribution is an example of a probability density function. The &#039;&#039;pdf&#039;&#039;  for this distribution is given by:  &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\frac{1}{\sigma \sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{t-\mu }{\sigma } \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; is the mean and &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt; is the standard deviation. The normal distribution has two parameters, &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
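The unit-area property, and the reading of the cdf as accumulated pdf area, can be checked numerically for the normal pdf above. A small sketch with illustrative parameter values (the trapezoidal integration is our own choice):

```python
import math

def normal_pdf(t, mu, sigma):
    """Normal (Gaussian) pdf."""
    return (math.exp(-0.5 * ((t - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2 * math.pi)))

mu, sigma = 100.0, 15.0  # illustrative parameters

def trapezoid(a, b, n=4000):
    """Trapezoidal estimate of the area under the pdf from a to b."""
    h = (b - a) / n
    return sum(0.5 * h * (normal_pdf(a + i * h, mu, sigma)
                          + normal_pdf(a + (i + 1) * h, mu, sigma))
               for i in range(n))

area = trapezoid(mu - 10 * sigma, mu + 10 * sigma)  # total area: 1
F_mu = trapezoid(mu - 10 * sigma, mu)               # cdf at the mean: 0.5
```

F(mu) = 0.5 follows from the symmetry of the normal pdf about its mean.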
&lt;br /&gt;
Another is the lognormal distribution, whose &#039;&#039;pdf&#039;&#039; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\frac{1}{t\cdot {{\sigma }^{\prime }}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{{t}^{\prime }}-{{\mu }^{\prime }}}{{{\sigma }^{\prime }}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{\mu }&#039;\,\!&amp;lt;/math&amp;gt; is the mean of the natural logarithms of the times-to-failure and &amp;lt;math&amp;gt;{\sigma }&#039;\,\!&amp;lt;/math&amp;gt; is the standard deviation of the natural logarithms of the times-to-failure. Again, this is a 2-parameter distribution.&lt;br /&gt;
&lt;br /&gt;
===Reliability Function===&lt;br /&gt;
The reliability function can be derived using the previous definition of the cumulative distribution function, &amp;lt;math&amp;gt;F(x)=\int_{0}^{x}f(s)ds \,\!&amp;lt;/math&amp;gt;. From our definition of the  &#039;&#039;cdf&#039;&#039;, the probability of an event occurring by time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(t)=\int_{0}^{t}f(s)ds\ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or, one could equate this event to the probability of a unit failing by time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Since this function defines the probability of failure by a certain time, we could consider this the unreliability function. Subtracting this probability from 1 will give us the reliability function, one of the most important functions in life data analysis. The reliability function gives the probability of success of a unit undertaking a mission of a given time duration.&lt;br /&gt;
The following figure illustrates this.&lt;br /&gt;
&lt;br /&gt;
[[Image:3.6.png|center|400px|Reliability as area under &#039;&#039;pdf&#039;&#039;.|link=]]&lt;br /&gt;
&lt;br /&gt;
To show this mathematically, we first define the unreliability function, &amp;lt;math&amp;gt;Q(t)\,\!&amp;lt;/math&amp;gt;, which is the probability of failure, or the probability that our time-to-failure is in the region of 0 and &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;. This is the same as the &#039;&#039;cdf&#039;&#039;. So from &amp;lt;math&amp;gt;F(t)=\int_{0}^{t}f(s)ds\ \,\!&amp;lt;/math&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Q(t)=F(t)=\int_{0}^{t}f(s)ds\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reliability and unreliability are the only two events being considered and they are mutually exclusive; hence, the sum of these probabilities is equal to unity. &lt;br /&gt;
&lt;br /&gt;
Then: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   Q(t)+R(t)= &amp;amp; 1 \\ &lt;br /&gt;
  R(t)= &amp;amp; 1-Q(t) \\ &lt;br /&gt;
  R(t)= &amp;amp; 1-\int_{0}^{t}f(s)ds \\ &lt;br /&gt;
  R(t)= &amp;amp; \int_{t}^{\infty }f(s)ds  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Conversely: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=-\frac{d(R(t))}{dt}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
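These relationships are easy to verify numerically. A sketch for a Weibull distribution (helper names ours), checking that R(t) + Q(t) = 1 and that f(t) = -dR/dt via a central difference:

```python
import math

def F_w(t, beta, eta):
    """Weibull cdf (unreliability Q(t))."""
    return 1 - math.exp(-(t / eta) ** beta)

def R_w(t, beta, eta):
    """Weibull reliability: R(t) = 1 - Q(t)."""
    return 1 - F_w(t, beta, eta)

def f_w(t, beta, eta):
    """Weibull pdf."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-(t / eta) ** beta)

beta, eta, t = 1.5, 1000.0, 400.0
total = R_w(t, beta, eta) + F_w(t, beta, eta)  # should be exactly 1

# f(t) = -dR/dt, approximated with a central difference
h = 1e-4
numeric_pdf = -(R_w(t + h, beta, eta) - R_w(t - h, beta, eta)) / (2 * h)
```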
&lt;br /&gt;
===Conditional Reliability Function===&lt;br /&gt;
Conditional reliability is the probability of successfully completing another mission following the successful completion of a previous mission. The time of the previous mission and the time for the mission to be undertaken must be taken into account for conditional reliability calculations. The conditional reliability function is given by:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t|T)=\frac{R(T+t)}{R(T)}\ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
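For instance, for a Weibull distribution the conditional reliability is a one-line ratio; the sketch below (helper names ours) also shows the exponential special case, which is memoryless:

```python
import math

def R_weibull(t, beta, eta):
    """Weibull reliability function."""
    return math.exp(-(t / eta) ** beta)

def cond_R(t, T, beta, eta):
    """Conditional reliability R(t|T) = R(T + t) / R(T)."""
    return R_weibull(T + t, beta, eta) / R_weibull(T, beta, eta)

# Wear-out (beta > 1): having survived 500 hours lowers the odds of the next 500
r_new  = R_weibull(500, 1.5, 1000)    # fresh unit
r_cond = cond_R(500, 500, 1.5, 1000)  # unit that already survived 500 hours

# Exponential special case (beta = 1) is memoryless: R(t|T) = R(t)
r_exp_new  = R_weibull(500, 1.0, 1000)
r_exp_cond = cond_R(500, 500, 1.0, 1000)
```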
&lt;br /&gt;
===Failure Rate Function===&lt;br /&gt;
&lt;br /&gt;
The failure rate function enables the determination of the number of failures occurring per unit time. Omitting the derivation, the failure rate is mathematically given as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\lambda (t)=\frac{f(t)}{R(t)}\ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives the instantaneous failure rate, also known as the hazard function. It is useful in characterizing the failure behavior of a component, determining maintenance crew allocation, planning for spares provisioning, etc. The failure rate is expressed in failures per unit time.&lt;br /&gt;
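For example, for the Weibull distribution the ratio f(t)/R(t) reduces to the closed form (beta/eta)(t/eta)^(beta-1), since the exponential factors cancel. A quick numerical check (helper names ours):

```python
import math

def f_w(t, beta, eta):
    """Weibull pdf."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-(t / eta) ** beta)

def R_w(t, beta, eta):
    """Weibull reliability function."""
    return math.exp(-(t / eta) ** beta)

def hazard(t, beta, eta):
    """Instantaneous failure rate: lambda(t) = f(t) / R(t)."""
    return f_w(t, beta, eta) / R_w(t, beta, eta)

beta, eta = 2.0, 1000.0
lam = hazard(500, beta, eta)
closed_form = (beta / eta) * (500 / eta) ** (beta - 1)  # Weibull hazard
```

For beta > 1 the hazard increases with time (wear-out behavior), as the second assertion below illustrates.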
&lt;br /&gt;
===Mean Life (MTTF)===&lt;br /&gt;
&lt;br /&gt;
The mean life function, which provides a measure of the average time of operation to failure, is given by: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=m=\int_{0}^{\infty }t\cdot f(t)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the expected or average time-to-failure and is denoted as the MTTF (Mean Time To Failure).  &lt;br /&gt;
&lt;br /&gt;
The MTTF, even though an index of reliability performance, does not give any information on the failure distribution of the component in question when dealing with most lifetime distributions. Because vastly different distributions can have identical means, it is unwise to use the MTTF as the sole measure of the reliability of a component.&lt;br /&gt;
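For the Weibull distribution the mean life integral has the closed form eta * Gamma(1 + 1/beta); a sketch comparing a direct (truncated) numerical evaluation of the integral against it (helper names and the trapezoidal scheme are ours):

```python
import math

def f_w(t, beta, eta):
    """Weibull pdf."""
    if t <= 0:
        return 0.0
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-(t / eta) ** beta)

def mttf_numeric(beta, eta, upper=20000.0, n=20000):
    """Trapezoidal estimate of int_0^inf t*f(t) dt, truncated at `upper`
    (the tail beyond 20*eta is negligible for these parameters)."""
    h = upper / n
    total = 0.0
    for i in range(n):
        a, b = i * h, (i + 1) * h
        total += 0.5 * h * (a * f_w(a, beta, eta) + b * f_w(b, beta, eta))
    return total

beta, eta = 2.0, 1000.0
mttf = mttf_numeric(beta, eta)
closed = eta * math.gamma(1 + 1 / beta)  # Weibull MTTF: eta * Gamma(1 + 1/beta)
```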
&lt;br /&gt;
===Median Life===&lt;br /&gt;
Median life, &amp;lt;math&amp;gt;\breve{T}\,\!&amp;lt;/math&amp;gt;, is the value of the random variable that has exactly one-half of the area under the &#039;&#039;pdf&#039;&#039; to its left and one-half to its right. It divides the distribution into two equal-probability halves. The median is obtained by solving the following equation for &amp;lt;math&amp;gt;\breve{T}\,\!&amp;lt;/math&amp;gt;. (For individual data, the median is the midpoint value.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::&amp;lt;math&amp;gt;\int_{-\infty}^{{\breve{T}}}f(t)dt=0.5\ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
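For example, for a Weibull distribution this equation can be solved by simple bisection on F(T) = 0.5, and the result matches the known closed form eta * (ln 2)^(1/beta). A sketch (helper names ours):

```python
import math

def F_w(t, beta, eta):
    """Weibull cdf (unreliability)."""
    return 1 - math.exp(-(t / eta) ** beta)

def median_bisect(beta, eta, lo=0.0, hi=1.0e6, tol=1e-8):
    """Solve F(T) = 0.5 for T by bisection (F is monotone increasing)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F_w(mid, beta, eta) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta, eta = 1.5, 1000.0
med = median_bisect(beta, eta)
closed = eta * math.log(2) ** (1 / beta)  # Weibull median: eta * (ln 2)^(1/beta)
```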
&lt;br /&gt;
===Modal Life (or Mode)===&lt;br /&gt;
The modal life (or mode), &amp;lt;math&amp;gt;\tilde{T}\,\!&amp;lt;/math&amp;gt;, is the value of &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; that satisfies: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{d\left[ f(t) \right]}{dt}=0\ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a continuous distribution, the mode is that value of &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; that corresponds to the maximum probability density (the value at which the &#039;&#039;pdf&#039;&#039; has its maximum value, or the peak of the curve).&lt;br /&gt;
&lt;br /&gt;
===Lifetime Distributions===&lt;br /&gt;
A statistical distribution is fully described by its  &#039;&#039;pdf&#039;&#039;.  In the previous sections, we used the definition of the &#039;&#039;pdf&#039;&#039; to show how all other functions most commonly used in reliability engineering and life data analysis can be derived.  The reliability function, failure rate function, mean time function, and median life function can be determined directly from the  &#039;&#039;pdf&#039;&#039; definition, or &amp;lt;math&amp;gt;f(t)\,\!&amp;lt;/math&amp;gt;.  Different distributions exist, such as the normal (Gaussian), exponential, Weibull, etc., and each has a predefined form of &amp;lt;math&amp;gt;f(t)\,\!&amp;lt;/math&amp;gt; that can be found in many references.  In fact, there are certain references that are devoted exclusively to different types of statistical distributions.  These distributions were formulated by statisticians, mathematicians and engineers to mathematically model or represent certain behavior.  For example, the Weibull distribution was formulated by Waloddi Weibull and thus it bears his name.  Some distributions tend to better represent life data and are most commonly called &amp;quot;lifetime distributions&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
A more detailed introduction to this topic is presented in [[Life Distributions]].&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Warranty_Data_Analysis&amp;diff=64918</id>
		<title>Warranty Data Analysis</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Warranty_Data_Analysis&amp;diff=64918"/>
		<updated>2017-02-08T19:57:22Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:LDABOOK|19|Warranty Data Analysis}}&lt;br /&gt;
The Weibull++ warranty analysis folio provides four different data entry formats for warranty claims data. It allows the user to automatically perform life data analysis, predict future failures (through the use of conditional probability analysis) and detect outliers. The four data entry formats for storing sales and returns information are: &lt;br /&gt;
&lt;br /&gt;
:1)	Nevada Chart Format&lt;br /&gt;
:2)	Time-to-Failure Format&lt;br /&gt;
:3)	Dates of Failure Format&lt;br /&gt;
:4)	Usage Format&lt;br /&gt;
&lt;br /&gt;
These formats are explained in the next sections. We will also discuss some specific warranty analysis calculations, including warranty predictions, the analysis of non-homogeneous warranty data and the use of statistical process control (SPC) to monitor warranty returns.&lt;br /&gt;
&lt;br /&gt;
==Nevada Chart Format==&lt;br /&gt;
The Nevada format allows the user to convert shipping and warranty return data into the standard reliability data form of failures and suspensions so that it can easily be analyzed with traditional life data analysis methods. For each time period in which a number of products are shipped, there will be a certain number of returns or failures in subsequent time periods, while the rest of the population that was shipped will continue to operate in the following time periods. For example, if 500 units are shipped in May, and 10 of those units are warranty returns in June, that is equivalent to 10 failures at a time of one month. The other 490 units will go on to operate and possibly fail in the months that follow. This information can be arranged in a diagonal chart, as shown in the following figure.&lt;br /&gt;
&lt;br /&gt;
[[Image:Nevada-Chart-Illustration.png|center|450px| ]]&lt;br /&gt;
&lt;br /&gt;
At the end of the analysis period, all of the units that were shipped and have not failed in the time since shipment are considered to be suspensions. This process is repeated for each shipment and the results tabulated for each particular failure and suspension time prior to reliability analysis. This process may sound confusing, but it is actually just a matter of careful bookkeeping. The following example illustrates this process.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
&#039;&#039;&#039;Nevada Chart Format Calculations Example&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A company keeps track of its shipments and warranty returns on a month-by-month basis. The following table records the shipments in June, July and August, and the warranty returns through September:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;| ||colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center;&amp;quot;|RETURNS&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot; style=&amp;quot;text-align:right;&amp;quot;|SHIP||Jul. 2010||Aug. 2010||Sep. 2010&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|Jun. 2010||100||3||3||5&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|Jul. 2010||140||-||2||4&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|Aug. 2010||150||-||-||4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We will examine the data month by month.  In June 100 units were sold, and in July 3 of these units were returned. This gives 3 failures at one month for the June shipment, which we will denote as &amp;lt;math&amp;gt;{{F}_{JUN,1}}=3\,\!&amp;lt;/math&amp;gt;. Likewise, 3 failures occurred in August and 5 occurred in September for this shipment, or &amp;lt;math&amp;gt;{{F}_{JUN,2}}=3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{F}_{JUN,3}}=5\,\!&amp;lt;/math&amp;gt;.  Consequently, at the end of our three-month analysis period, there were a total of 11 failures for the 100 units shipped in June. This means that 89 units are presumably still operating, and can be considered suspensions at three months, or &amp;lt;math&amp;gt;{{S}_{JUN,3}}=89\,\!&amp;lt;/math&amp;gt;. For the shipment of 140 in July, 2 were returned the following month, or &amp;lt;math&amp;gt;{{F}_{JUL,1}}=2\,\!&amp;lt;/math&amp;gt;, and 4 more were returned the month after that, or &amp;lt;math&amp;gt;{{F}_{JUL,2}}=4\,\!&amp;lt;/math&amp;gt;.  After two months, there are 134 ( &amp;lt;math&amp;gt;140-2-4=134\,\!&amp;lt;/math&amp;gt; ) units from the July shipment still operating, or &amp;lt;math&amp;gt;{{S}_{JUL,2}}=134\,\!&amp;lt;/math&amp;gt;. For the final shipment of 150 in August, 4 fail in September, or &amp;lt;math&amp;gt;{{F}_{AUG,1}}=4\,\!&amp;lt;/math&amp;gt;, with the remaining 146 units being suspensions at one month, or &amp;lt;math&amp;gt;{{S}_{AUG,1}}=146\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It is now a simple matter to add up the number of failures for 1, 2, and 3 months, then add the suspensions to get our reliability data set:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   \text{Failures at 1 month:} &amp;amp; {{F}_{1}}={{F}_{JUN,1}}+{{F}_{JUL,1}}+{{F}_{AUG,1}}=3+2+4=9  \\&lt;br /&gt;
   \text{Suspensions at 1 month:} &amp;amp; {{S}_{1}}={{S}_{AUG,1}}=146  \\&lt;br /&gt;
   \text{Failures at 2 months:} &amp;amp; {{F}_{2}}={{F}_{JUN,2}}+{{F}_{JUL,2}}=3+4=7  \\&lt;br /&gt;
   \text{Suspensions at 2 months:} &amp;amp; {{S}_{2}}={{S}_{JUL,2}}=134  \\&lt;br /&gt;
   \text{Failures at 3 months:} &amp;amp; {{F}_{3}}={{F}_{JUN,3}}=5  \\&lt;br /&gt;
   \text{Suspensions at 3 months:} &amp;amp; {{S}_{JUN,3}}=89  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These calculations can be performed automatically in Weibull++. &lt;br /&gt;
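The month-by-month bookkeeping above can be sketched in a few lines of Python. This is a minimal illustration using the example's numbers, not the Weibull++ implementation; the data structures are hypothetical.

```python
# Shipments and subsequent monthly returns from the example table.
shipments = {"Jun": 100, "Jul": 140, "Aug": 150}
# returns[month] = returns in the 1st, 2nd, ... month after shipment
returns = {"Jun": [3, 3, 5], "Jul": [2, 4], "Aug": [4]}

failures = {}     # failures[k] = total failures at k months of age
suspensions = {}  # suspensions[k] = units still operating at k months of age

for month, shipped in shipments.items():
    rets = returns[month]
    for age, n in enumerate(rets, start=1):
        failures[age] = failures.get(age, 0) + n
    # Units that have not failed by the end of the analysis period are
    # suspensions at the age of the last observed return period.
    surviving = shipped - sum(rets)
    suspensions[len(rets)] = suspensions.get(len(rets), 0) + surviving

print(failures)     # {1: 9, 2: 7, 3: 5}
print(suspensions)  # {3: 89, 2: 134, 1: 146}
```

The tallies match the data set derived above: 9 failures and 146 suspensions at one month, 7 and 134 at two months, 5 and 89 at three months.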
&lt;br /&gt;
&amp;lt;div class=&amp;quot;noprint&amp;quot;&amp;gt;&lt;br /&gt;
{{Examples Box|Weibull++_Examples|&amp;lt;p&amp;gt;More Nevada chart format warranty analysis examples are available! See also:&amp;lt;/p&amp;gt; &lt;br /&gt;
{{Examples Both|http://www.reliasoft.com/Weibull/examples/rc5/index.htm|Warranty Analysis Example|http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_5.html|Watch the video...}}&amp;lt;nowiki/&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Time-to-Failure Format==&lt;br /&gt;
This format is similar to the standard folio data entry format (the user enters the numbers of units, failure times and suspension times). The difference is that when the data is used within the context of warranty analysis, the ability to generate forecasts is available to the user.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
{{:Warranty_Data_Analysis_Times-to-Failure_Format_with_Plot_Example}}&lt;br /&gt;
&lt;br /&gt;
==Dates of Failure Format==&lt;br /&gt;
Another common way of reporting field information is to enter a date and quantity of sales or shipments (Quantity In-Service data) and the date and quantity of returns (Quantity Returned data). A failure is identified by its return date and the date when the unit was put in service; the in-service date associates the unit with the lot that went into service during that time period. You can use the optional Subset ID column in the data sheet to record any information that identifies the lots.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
{{:Warranty_Data_Analysis_Dates_Format_Example}}&lt;br /&gt;
&lt;br /&gt;
==Usage Format==&lt;br /&gt;
Often, the driving factor for reliability is usage rather than time. For example, in the automotive industry, the failure behavior of most products is mileage-dependent rather than time-dependent. The usage format allows the user to convert shipping and warranty return data into the standard reliability data form of failures and suspensions when the return information is based on usage rather than return dates or periods. As with the dates of failure format, a failure is identified by its return date and the date when the unit was put in service, which associates the returned unit with the lot it belonged to when it started operation. However, the return data is in terms of usage, not date of return. Therefore, the usage of the units needs to be specified, either as a constant usage per unit time or as a distribution. This allows the expected usage of the surviving units to be determined.&lt;br /&gt;
&lt;br /&gt;
Suppose that you have been collecting sales (units in service) and returns data. For the returns data, you can determine the number of failures and their usage by reading the odometer value, for example. Determining the number of surviving units (suspensions) and their ages is a straightforward step. By taking the difference between the analysis date and the date when a unit was put in service, you can determine the age of the surviving units.&lt;br /&gt;
&lt;br /&gt;
What is unknown, however, is the exact usage accumulated by each surviving unit. The key part of the usage-based warranty analysis is the determination of the usage of the surviving units based on their age. Therefore, the analyst needs to have an idea about the usage of the product. This can be obtained, for example, from customer surveys or by designing the products to collect usage data. For example, in automotive applications, engineers often use 12,000 miles/year as the average usage. Based on this average, the usage of an item that has been in the field for 6 months and has not yet failed would be 6,000 miles. So to obtain the usage of a suspension based on an average usage, one could take the time of each suspension and multiply it by this average usage. In this situation, the analysis becomes straightforward. With the usage values and the quantities of the returned units, a failure distribution can be constructed and subsequent warranty analysis becomes possible.&lt;br /&gt;
&lt;br /&gt;
Alternatively, and more realistically, instead of using an average usage, an actual distribution that reflects the variation in usage and customer behavior can be used. This distribution describes the usage of a unit over a certain time period (e.g., 1 year, 1 month, etc.). This probabilistic model can be used to estimate the usage for all surviving components in service and the percentage of users running the product at different usage rates. In the automotive example, for instance, such a distribution can be used to calculate the percentage of customers that drive 0-200 miles/month, 200-400 miles/month, etc. We can take these percentages and multiply them by the number of suspensions to find the number of items that have been accumulating usage values in these ranges.&lt;br /&gt;
&lt;br /&gt;
To apply a usage distribution, the distribution &amp;lt;math&amp;gt;Q\,\!&amp;lt;/math&amp;gt; is divided into increments based on a specified interval width, denoted as &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;. The interval endpoints are &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;2Z\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;3Z\,\!&amp;lt;/math&amp;gt;, etc., or &amp;lt;math&amp;gt;{{x}_{i}}={{x}_{i-1}}+Z\,\!&amp;lt;/math&amp;gt;, as shown in the next figure.&lt;br /&gt;
&lt;br /&gt;
[[Image:Usage pdf Plot.png|center|250px| ]] &lt;br /&gt;
&lt;br /&gt;
The interval width should be selected such that it creates segments that are large enough to contain adequate numbers of suspensions within the intervals. The percentage of suspensions that belong to each usage interval is calculated as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
F({{x}_{i}})=Q({{x}_{i}})-Q({{x}_{i-1}})&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Q()\,\!&amp;lt;/math&amp;gt; is the cumulative distribution function, &#039;&#039;cdf&#039;&#039;, of the usage distribution.&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; represents the intervals used in apportioning the suspended population.&lt;br /&gt;
&lt;br /&gt;
A suspension group is a collection of suspensions that have the same age. The percentage of suspensions can be translated to numbers of suspensions within each interval, &amp;lt;math&amp;gt;{{x}_{i}}\,\!&amp;lt;/math&amp;gt;. This is done by taking each group of suspensions and multiplying it by each &amp;lt;math&amp;gt;F({{x}_{i}})\,\!&amp;lt;/math&amp;gt;, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{N}_{1,j}}= &amp;amp; F({{x}_{1}})\times N{{S}_{j}} \\ &lt;br /&gt;
 &amp;amp; {{N}_{2,j}}= &amp;amp; F({{x}_{2}})\times N{{S}_{j}} \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; ... \\ &lt;br /&gt;
 &amp;amp; {{N}_{n,j}}= &amp;amp; F({{x}_{n}})\times N{{S}_{j}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{N}_{n,j}}\,\!&amp;lt;/math&amp;gt; is the number of suspensions that belong to each interval.&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;N{{S}_{j}}\,\!&amp;lt;/math&amp;gt; is the jth group of suspensions from the data set.&lt;br /&gt;
&lt;br /&gt;
This is repeated for all the groups of suspensions.&lt;br /&gt;
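The apportionment above can be sketched in Python. A lognormal usage distribution, its parameters, the interval width and the group size below are all purely illustrative assumptions; they are not part of the original example.

```python
import math

def lognorm_cdf(x, mu, sigma):
    """CDF Q(x) of a lognormal usage distribution (illustrative choice)."""
    if x <= 0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

Z = 200.0                          # interval width (e.g., miles/month)
NS_j = 500                         # size of the j-th suspension group (hypothetical)
mu, sigma = math.log(1000.0), 0.5  # hypothetical distribution parameters

# Interval endpoints x_i = x_{i-1} + Z, and F(x_i) = Q(x_i) - Q(x_{i-1})
endpoints = [i * Z for i in range(16)]  # 0, Z, 2Z, ...
alloc = []
for lo, hi in zip(endpoints[:-1], endpoints[1:]):
    F = lognorm_cdf(hi, mu, sigma) - lognorm_cdf(lo, mu, sigma)
    alloc.append(F * NS_j)  # N_{i,j} = F(x_i) * NS_j

# Nearly the whole group is apportioned across the covered intervals.
print(round(sum(alloc), 2))
```

Repeating this loop for every suspension group yields the per-interval suspension counts used in the rest of the analysis.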
&lt;br /&gt;
The age of the suspensions is calculated by subtracting the Date In-Service ( &amp;lt;math&amp;gt;DIS\,\!&amp;lt;/math&amp;gt; ), which is the date at which the unit started operation, from the date marking the end of the observation period, or End Date ( &amp;lt;math&amp;gt;ED\,\!&amp;lt;/math&amp;gt; ). This is the Time In-Service ( &amp;lt;math&amp;gt;TIS\,\!&amp;lt;/math&amp;gt; ) value that describes the age of the surviving unit.&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
TIS=ED-DIS&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &amp;lt;math&amp;gt;TIS\,\!&amp;lt;/math&amp;gt; is in the same time units as the period in which the usage distribution is defined.&lt;br /&gt;
&lt;br /&gt;
For each &amp;lt;math&amp;gt;{{N}_{k,j}}\,\!&amp;lt;/math&amp;gt;, the usage is calculated as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{U}_{k,j}}={{x}_{k}}\times TI{{S}_{j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After this step, the usage of each suspension group is estimated. This data can be combined with the failures data set, and a failure distribution can be fitted to the combined data.&lt;br /&gt;
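A minimal sketch of this step (all numbers below are hypothetical): each apportioned subgroup is assigned the usage obtained by multiplying its interval's usage rate by the group's time in service.

```python
Z = 200                             # miles/month interval width (hypothetical)
x = [k * Z for k in range(1, 4)]    # usage rates: 200, 400, 600 miles/month
TIS_j = 12                          # months in service for suspension group j

# U_{k,j} = x_k * TIS_j: estimated accumulated usage for each subgroup
usage = [x_k * TIS_j for x_k in x]
print(usage)  # [2400, 4800, 7200]
```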
&lt;br /&gt;
===Example===&lt;br /&gt;
{{:Warranty_Analysis_Usage_Format_Example}}&lt;br /&gt;
&lt;br /&gt;
To illustrate the calculations behind the results of this example, consider the 9 units that went into service in December 2009. One unit from that group failed; therefore, 8 suspensions survived from December 2009 until the beginning of December 2010, a total of 12 months. The calculations are summarized as follows.&lt;br /&gt;
&lt;br /&gt;
[[Image:Usage Suspension Allocation.PNG|center|500px| ]] &lt;br /&gt;
&lt;br /&gt;
The two columns on the right constitute the calculated suspension data (number of suspensions and their usage) for the group. The calculation is then repeated for each of the remaining groups in the data set. These data are then combined with the data about the failures to form the life data set that is used to estimate the failure distribution model.&lt;br /&gt;
&lt;br /&gt;
==Warranty Prediction==&lt;br /&gt;
Once a life data analysis has been performed on warranty data, this information can be used to predict how many warranty returns there will be in subsequent time periods. This methodology uses the concept of conditional reliability (see [[Basic Statistical Background]]) to calculate the probability of failure for the remaining units for each shipment time period. This conditional probability of failure is then multiplied by the number of units at risk from that particular shipment period that are still in the field (i.e., the suspensions) in order to predict the number of failures or warranty returns expected for this time period. The next example illustrates this.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
&lt;br /&gt;
Using the data in the following table, predict the number of warranty returns for October for each of the three shipment periods. Use the following Weibull parameters, beta = 2.4928 and eta = 6.6951. &lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;| ||colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center;&amp;quot;|RETURNS&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot; style=&amp;quot;text-align:right;&amp;quot;|SHIP||Jul. 2010||Aug. 2010||Sep. 2010&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|Jun. 2010||100||3||3||5&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|Jul. 2010||140||-||2||4&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|Aug. 2010||150||-||-||4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the Weibull parameter estimates to determine the conditional probability of failure for each shipment time period, and then multiply that probability by the number of units at risk for that period, as follows. The equation for the conditional probability of failure is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Q(t|T)=1-R(t|T)=1-\frac{R(T+t)}{R(T)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the June shipment, there are 89 units that have successfully operated until the end of September (&amp;lt;math&amp;gt;T=3\,\!&amp;lt;/math&amp;gt; months). The probability of one of these units failing in the next month (&amp;lt;math&amp;gt;t=1\,\!&amp;lt;/math&amp;gt; month) is then given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Q(1|3)=1-\frac{R(4)}{R(3)}=1-\frac{{{e}^{-{{\left( \tfrac{4}{6.70} \right)}^{2.49}}}}}{{{e}^{-{{\left( \tfrac{3}{6.70} \right)}^{2.49}}}}}=1-\frac{0.7582}{0.8735}=0.132\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the probability of failure for an additional month of operation is determined, the expected number of failed units during the next month, from the June shipment, is the product of this probability and the number of units at risk (&amp;lt;math&amp;gt;{{S}_{JUN,3}}=89\,\!&amp;lt;/math&amp;gt;), or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{F}}_{JUN,4}}=89\cdot 0.132=11.748\text{, or 12 units}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is then repeated for the July shipment, where there were 134 units operating at the end of September, with an exposure time of two months.  The probability of failure in the next month is: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Q(1|2)=1-\frac{R(3)}{R(2)}=1-\frac{0.8735}{0.9519}=0.0824\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This value is multiplied by &amp;lt;math&amp;gt;{{S}_{JUL,2}}=134\,\!&amp;lt;/math&amp;gt; to determine the number of failures, or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{F}}_{JUL,3}}=134\cdot 0.0824=11.035\text{, or 11 units}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the August shipment, there were 146 units operating at the end of September, with an exposure time of one month. The probability of failure in the next month is: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Q(1|1)=1-\frac{R(2)}{R(1)}=1-\frac{0.9519}{0.9913}=0.0397\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This value is multiplied by &amp;lt;math&amp;gt;{{S}_{AUG,1}}=146\,\!&amp;lt;/math&amp;gt; to determine the number of failures, or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{F}}_{AUG,2}}=146\cdot 0.0397=5.796\text{, or 6 units}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, the total expected number of returns from all shipments for the next month is the sum of the above, or 29 units. This method can easily be repeated for different future sales periods, utilizing projected shipments. If the user lists the number of units that are expected to be sold or shipped during future periods, then these units are added to the number of units at risk whenever they are introduced into the field. The &#039;&#039;&#039;Generate Forecast&#039;&#039;&#039; functionality in the Weibull++ warranty analysis folio can automate this process for you.&lt;br /&gt;
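The three conditional-probability calculations above can be reproduced with a short Python sketch (a minimal illustration, not the Generate Forecast utility). The at-risk counts and Weibull parameters come from the example; the function names are assumptions.

```python
import math

beta, eta = 2.4928, 6.6951  # Weibull parameters from the example (eta in months)

def weibull_rel(t):
    """Weibull reliability function R(t)."""
    return math.exp(-((t / eta) ** beta))

def cond_prob_failure(t, T):
    """Q(t|T): probability that a unit that survived T months fails in the next t months."""
    return 1.0 - weibull_rel(T + t) / weibull_rel(T)

# age in months -> number of suspensions still at risk
at_risk = {3: 89, 2: 134, 1: 146}

forecast = sum(n * cond_prob_failure(1, T) for T, n in at_risk.items())
print(round(forecast, 1))  # ~28.6 expected returns next month
```

Summing before rounding gives about 28.6 returns; rounding each shipment's prediction to a whole unit first, as in the text, gives 12 + 11 + 6 = 29.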
&lt;br /&gt;
==Non-Homogeneous Warranty Data==&lt;br /&gt;
In the previous sections and examples, the underlying assumption was that the population is homogeneous; that is, all sold and returned units are exactly the same (i.e., the same population with no design changes and/or modifications). In many situations, as the product matures, design changes are made to enhance and/or improve its reliability. Obviously, an improved product will exhibit different failure characteristics than its predecessor. To analyze such cases, where the population is non-homogeneous, one needs to extract each homogeneous group, fit a life model to each group and then project the expected returns for each group based on the number of units at risk for that specific group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using Subset IDs in Weibull++&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Weibull++ includes an optional Subset ID column that allows the user to differentiate between product versions or different designs (lots). Based on the entries, the software will separately analyze (i.e., obtain parameters and failure projections for) each subset of data. Note that the same limitations regarding the number of failures needed are also applicable here. In other words, distributions can be automatically fitted to lots that have return (failure) data, whereas if no returns have been experienced yet (either because the units will be introduced in the future or because no failures have happened yet), the user will be asked to specify the parameters, since they cannot be computed. Consequently, subsequent estimations/predictions related to these lots would be based on the user-specified parameters. The following example illustrates the use of Subset IDs.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
{{:Warranty Analysis Non-Homogeneous Data Example}}&lt;br /&gt;
&lt;br /&gt;
==Monitoring Warranty Returns Using Statistical Process Control (SPC)==&lt;br /&gt;
By monitoring and analyzing warranty return data, one can detect specific return periods and/or batches of sales or shipments that may deviate (differ) from the assumed model. This provides the analyst (and the organization) with early notification of possible deviations in manufacturing, use conditions and/or any other factor that may adversely affect the reliability of the fielded product. The motivation for performing such analysis is to allow for faster intervention to avoid increased costs due to increased warranty returns or more serious repercussions. Additionally, this analysis can be used to uncover different sub-populations that may exist within the population.&lt;br /&gt;
&lt;br /&gt;
===Basic Analysis Method===&lt;br /&gt;
&lt;br /&gt;
For each  sales period &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; and return period &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;, the prediction error can be calculated as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{e}_{i,j}}={{\hat{F}}_{i,j}}-{{F}_{i,j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\hat{F}}_{i,j}}\,\!&amp;lt;/math&amp;gt; is the estimated number of failures based on the estimated distribution parameters for the sales period &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; and the return period &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;, which is calculated using the equation for the conditional probability, and &amp;lt;math&amp;gt;{{F}_{i,j}}\,\!&amp;lt;/math&amp;gt; is the actual number of failures for the sales period &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; and the return period &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Since we are assuming that the model is accurate, &amp;lt;math&amp;gt;{{e}_{i,j}}\,\!&amp;lt;/math&amp;gt; should follow a normal distribution with mean value of zero and a standard deviation &amp;lt;math&amp;gt;s\,\!&amp;lt;/math&amp;gt;, where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\bar{e}}_{i,j}}=\frac{\underset{i}{\mathop{\sum }}\,\underset{j}{\mathop{\sum }}\,{{e}_{i,j}}}{n}=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the total number of return data points (i.e., the total number of residuals). &lt;br /&gt;
&lt;br /&gt;
The estimated standard deviation of the prediction errors can then be calculated by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;s=\sqrt{\frac{1}{n-1}\underset{i}{\mathop \sum }\,\underset{j}{\mathop \sum }\,e_{i,j}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and &amp;lt;math&amp;gt;{{e}_{i,j}}\,\!&amp;lt;/math&amp;gt; can be normalized as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{z}_{i,j}}=\frac{{{e}_{i,j}}}{s}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{z}_{i,j}}\,\!&amp;lt;/math&amp;gt; is the standardized error. &amp;lt;math&amp;gt;{{z}_{i,j}}\,\!&amp;lt;/math&amp;gt; follows a normal distribution with &amp;lt;math&amp;gt;\mu =0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma =1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It is known that the square of a random variable with standard normal distribution follows the &amp;lt;math&amp;gt;{{\chi }^{2}}\,\!&amp;lt;/math&amp;gt; (Chi Square) distribution with 1 degree of freedom and that the sum of the squares of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; random variables with standard normal distribution follows the &amp;lt;math&amp;gt;{{\chi }^{2}}\,\!&amp;lt;/math&amp;gt; distribution with &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; degrees of freedom.  This then can be used to help detect the abnormal returns for a given sales period, return period or just a specific cell (combination of a return and a sales period).&lt;br /&gt;
&lt;br /&gt;
:*	For a cell, abnormality is detected if &amp;lt;math&amp;gt;z_{i,j}^{2}=\chi _{1}^{2}\ge \chi _{1,\alpha }^{2}.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
:*	For an entire sales period &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, abnormality is detected if &amp;lt;math&amp;gt;\underset{j}{\mathop{\sum }}\,z_{i,j}^{2}=\chi _{J}^{2}\ge \chi _{\alpha ,J}^{2},\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;J\,\!&amp;lt;/math&amp;gt; is the total number of return periods for sales period &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:*	For an entire return period &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;, abnormality is detected if &amp;lt;math&amp;gt;\underset{i}{\mathop{\sum }}\,z_{i,j}^{2}=\chi _{I}^{2}\ge \chi _{\alpha ,I}^{2},\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;I\,\!&amp;lt;/math&amp;gt; is the total number of sales periods for return period &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Here &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt; is the significance level of the &amp;lt;math&amp;gt;{{\chi }^{2}}\,\!&amp;lt;/math&amp;gt; test, which can be set at a critical value or a caution value. It describes the level of sensitivity to outliers (returns that deviate significantly from the predictions based on the fitted model). Increasing the value of &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt; increases the power of detection, but it could also lead to more false alarms.&lt;br /&gt;
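The standardization steps above can be sketched in Python, using the prediction errors computed in the example that follows (only the six observed cells are listed; the variable names are assumptions).

```python
import math

# e_{i,j}: prediction errors for the six observed (sales, return) cells
errors = [-2.1297, 0.8462, 2.7447, -0.7816, 1.4719, -2.6946]
n = len(errors)

# s = sqrt( sum(e^2) / (n - 1) )
s = math.sqrt(sum(e * e for e in errors) / (n - 1))

# Standardized errors z_{i,j} = e_{i,j} / s ...
z = [e / s for e in errors]

# ... and their squares, which are compared against chi-square thresholds
z_sq = [zi * zi for zi in z]

print(round(s, 4))
print([round(v, 4) for v in z_sq])
```

Row and column sums of the squared standardized errors are then compared with the chi-square critical values for the corresponding degrees of freedom.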
&lt;br /&gt;
====Example====&lt;br /&gt;
&#039;&#039;&#039;Example Using SPC for Warranty Analysis Data&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Using the data from the following table, the expected returns for each sales period can be obtained using conditional reliability concepts, as given in the conditional probability equation. &lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;| ||colspan=&amp;quot;3&amp;quot; style=&amp;quot;text-align:center;&amp;quot;|RETURNS&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot; style=&amp;quot;text-align:right;&amp;quot;|SHIP||Jul. 2010||Aug. 2010||Sep. 2010&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|Jun. 2010||100||3||3||5&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|Jul. 2010||140||-||2||4&lt;br /&gt;
|-align=&amp;quot;center&amp;quot;&lt;br /&gt;
|Aug. 2010||150||-||-||4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
For example, for the month of September, the expected return number from the June shipment is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\hat{F}}_{Jun,3}}=(100-6)\cdot \left( 1-\frac{R(3)}{R(2)} \right)=94\cdot 0.08239=7.7447\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The actual number of returns during this period is five; thus, the prediction error for this period is: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{e}_{Jun,3}}={{\hat{F}}_{Jun,3}}-{{F}_{Jun,3}}=7.7447-5=2.7447.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can then be repeated for each cell, yielding the following table for &amp;lt;math&amp;gt;{{e}_{i,j}}\,\!&amp;lt;/math&amp;gt; : &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; RETURNS &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   {} &amp;amp; SHIP  &amp;amp; \text{Jul}\text{. 2010} &amp;amp; \text{Aug}\text{. 2010} &amp;amp; \text{Sep}\text{. 2010}  \\&lt;br /&gt;
   \text{Jun}\text{. 2010} &amp;amp; \text{100} &amp;amp; \text{-2}\text{.1297} &amp;amp; \text{0}\text{.8462} &amp;amp; \text{2}\text{.7447}  \\&lt;br /&gt;
   \text{Jul}\text{. 2010} &amp;amp; \text{140} &amp;amp; \text{-} &amp;amp; \text{-0}\text{.7816} &amp;amp; \text{1}\text{.4719}  \\&lt;br /&gt;
   \text{Aug}\text{. 2010} &amp;amp; \text{150} &amp;amp; \text{-} &amp;amp; \text{-} &amp;amp; \text{-2}\text{.6946}  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, for this example, &amp;lt;math&amp;gt;n=6\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\bar{e}}_{i,j}}=-0.0905\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;s=2.1365.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Thus the &amp;lt;math&amp;gt;z_{i,j}\,\!&amp;lt;/math&amp;gt; values are: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; RETURNS &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   {} &amp;amp; SHIP  &amp;amp; \text{Jul}\text{. 2005} &amp;amp; \text{Aug}\text{. 2005} &amp;amp; \text{Sep}\text{. 2005}  \\&lt;br /&gt;
   \text{Jun}\text{. 2005} &amp;amp; \text{100} &amp;amp; \text{-0}\text{.9968} &amp;amp; \text{0}\text{.3960} &amp;amp; \text{1}\text{.2846}  \\&lt;br /&gt;
   \text{Jul}\text{. 2005} &amp;amp; \text{140} &amp;amp; \text{-} &amp;amp; \text{-0}\text{.3658} &amp;amp; \text{0}\text{.6889}  \\&lt;br /&gt;
   \text{Aug}\text{. 2005} &amp;amp; \text{150} &amp;amp; \text{-} &amp;amp; \text{-} &amp;amp; \text{-1}\text{.2612}  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;z_{i,j}^{2}\,\!&amp;lt;/math&amp;gt; values, for each cell, are given in the following table. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; RETURNS &amp;amp; {} &amp;amp; {} &amp;amp; {}  \\&lt;br /&gt;
   {} &amp;amp; SHIP  &amp;amp; \text{Jul}\text{. 2005} &amp;amp; \text{Aug}\text{. 2005} &amp;amp; \text{Sep}\text{. 2005} &amp;amp; \text{Sum}  \\&lt;br /&gt;
   \text{Jun}\text{. 2005} &amp;amp; \text{100} &amp;amp; \text{0}\text{.9936} &amp;amp; \text{0}\text{.1569} &amp;amp; \text{1}\text{.6505} &amp;amp; 2.8010  \\&lt;br /&gt;
   \text{Jul}\text{. 2005} &amp;amp; \text{140} &amp;amp; \text{-} &amp;amp; \text{0}\text{.1338} &amp;amp; \text{0}\text{.4747} &amp;amp; 0.6085  \\&lt;br /&gt;
   \text{Aug}\text{. 2005} &amp;amp; \text{150} &amp;amp; \text{-} &amp;amp; \text{-} &amp;amp; \text{1}\text{.5905} &amp;amp; 1.5905  \\&lt;br /&gt;
   \text{Sum} &amp;amp; {} &amp;amp; 0.9936 &amp;amp; 0.2907 &amp;amp; 3.7157 &amp;amp; {}  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the critical value is set at &amp;lt;math&amp;gt;\alpha = 0.01\,\!&amp;lt;/math&amp;gt; and the caution value is set at &amp;lt;math&amp;gt;\alpha = 0.1\,\!&amp;lt;/math&amp;gt;, then the critical and caution &amp;lt;math&amp;gt;{{\chi }^{2}}\,\!&amp;lt;/math&amp;gt; values will be: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{matrix}&lt;br /&gt;
   {} &amp;amp; {} &amp;amp; \text{Degrees of Freedom} &amp;amp; {} \\&lt;br /&gt;
   {} &amp;amp; \text{1} &amp;amp; \text{2} &amp;amp; \text{3}  \\&lt;br /&gt;
   {{\chi}^{2}\text{Critical}} &amp;amp; \text{6.6349} &amp;amp; \text{9.2103} &amp;amp; \text{11.3449}   \\&lt;br /&gt;
   {{\chi}^{2}\text{Caution}} &amp;amp; \text{2.7055} &amp;amp; \text{4.6052} &amp;amp; \text{6.2514}  \\&lt;br /&gt;
\end{matrix}\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we consider the sales periods as the basis for outlier detection, then after comparing the above table to the sum of &amp;lt;math&amp;gt;z_{i,j}^{2}\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;(\chi _{1}^{2})\,\!&amp;lt;/math&amp;gt; values for each sales period, we find that none of the sales values exceeds the critical or caution limits. For example, the total &amp;lt;math&amp;gt;{{\chi }^{2}}\,\!&amp;lt;/math&amp;gt; value for the July sales month is 0.6085. It has 2 degrees of freedom, so the corresponding caution and critical values are 4.6052 and 9.2103, respectively. Both values are larger than 0.6085, so the return numbers of the July sales period do not deviate (based on the chosen significance) from the model&#039;s predictions.&lt;br /&gt;
&lt;br /&gt;
If we consider the return periods as the basis for outlier detection, then after comparing the above table to the sum of &amp;lt;math&amp;gt;z_{i,j}^{2}\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;(\chi _{1}^{2})\,\!&amp;lt;/math&amp;gt; values for each return period, we find that none of the return values exceeds the critical or caution limits. For example, the total &amp;lt;math&amp;gt;{{\chi }^{2}}\,\!&amp;lt;/math&amp;gt; value for the September return period is 3.7157. It has 3 degrees of freedom, so the corresponding caution and critical values are 6.2514 and 11.3449, respectively. Both values are larger than 3.7157, so the return numbers for the September return period do not deviate from the model&#039;s predictions.&lt;br /&gt;
&lt;br /&gt;
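As a sketch, the threshold comparison described above can be automated. The caution and critical values below are the tabulated chi-squared values from this example (not computed here), and the three-way classification mirrors the color coding discussed next; this is a simplified illustration, not Weibull++'s actual implementation:

```python
# Chi-squared thresholds from the table above, indexed by degrees of freedom.
CAUTION  = {1: 2.7055, 2: 4.6052, 3: 6.2514}   # alpha = 0.1
CRITICAL = {1: 6.6349, 2: 9.2103, 3: 11.3449}  # alpha = 0.01

def classify(chi2_sum, dof):
    """Return 'normal', 'caution' or 'critical' for a period's chi^2 sum."""
    if chi2_sum >= CRITICAL[dof]:
        return "critical"
    if chi2_sum >= CAUTION[dof]:
        return "caution"
    return "normal"

# Sales-period sums (row sums of the z^2 table) and their degrees of freedom.
sales = {"Jun. 2005": (2.8010, 3),
         "Jul. 2005": (0.6085, 2),
         "Aug. 2005": (1.5905, 1)}
for period, (s, dof) in sales.items():
    print(period, classify(s, dof))   # all normal in this example
```

The same function applied to the return-period column sums (0.9936, 0.2907 and 3.7157 with 1, 2 and 3 degrees of freedom) likewise classifies every period as normal.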
This analysis can be automatically performed in Weibull++ by entering the alpha values in the Statistical Process Control page of the control panel and selecting which period to color code, as shown next.&lt;br /&gt;
&lt;br /&gt;
[[Image:Warranty Example 5 SPC settings.png|center|250px| ]] &lt;br /&gt;
&lt;br /&gt;
To view the table of chi-squared values ( &amp;lt;math&amp;gt;z_{i,j}^{2}\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\chi _{1}^{2}\,\!&amp;lt;/math&amp;gt; values), click the &#039;&#039;&#039;Show Results (...)&#039;&#039;&#039; button. &lt;br /&gt;
&lt;br /&gt;
[[Image:Warranty Example 5 Chi-square.png|center|450px| ]] &lt;br /&gt;
&lt;br /&gt;
Weibull++ automatically color codes SPC results for easy visualization in the returns data sheet. By default, green means that the return number is normal; yellow indicates that the return number is larger than the caution threshold but smaller than the critical value; and red indicates an abnormal return, i.e., a return number that is either too large or too small compared to the predicted value.&lt;br /&gt;
&lt;br /&gt;
In this example, all the cells are coded in green for both analyses (i.e., by sales periods or by return periods), indicating that all returns fall within the caution and critical limits (i.e., nothing abnormal). Another way to visualize this is by using a Chi-Squared plot for the sales period and return period, as shown next.&lt;br /&gt;
&lt;br /&gt;
[[Image:Warranty Example 5 SPC Sales.png|center|450px| ]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Warranty Example 5 SPC Return.png|center|450px| ]]&lt;br /&gt;
&lt;br /&gt;
===Using Subset IDs with SPC for Warranty Data===&lt;br /&gt;
The warranty monitoring methodology explained in this section can also be used to detect different subpopulations in a data set. The different subpopulations can reflect different use conditions, different materials, etc. In this methodology, one can use different subset IDs to differentiate between subpopulations and obtain models that are distinct to each subpopulation. The following example illustrates this concept.&lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
{{:Non-Homogeneous Data with Subset IDs Example}}&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Imperfect_Repairs&amp;diff=64917</id>
		<title>Imperfect Repairs</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Imperfect_Repairs&amp;diff=64917"/>
		<updated>2017-02-08T19:44:41Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Illustrating Type I RF Through an Example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner BlockSim Articles}}{{Navigation box}}&lt;br /&gt;
&#039;&#039;This article also appears in the [[Repairable_Systems_Analysis_Through_Simulation#Imperfect_Repairs|System Analysis Reference]] book.&#039;&#039;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
===Restoration Factors (RF)===&lt;br /&gt;
&amp;lt;includeonly&amp;gt;In the prior discussion it was assumed that a repaired component is as good as new after repair.  This is usually the case when replacing a component with a new one.  &amp;lt;/includeonly&amp;gt;The concept of a restoration factor may be used in cases in which one wants to model imperfect repair, or a repair with a used component.  The best way to indicate that a component is not as good as new is to give the component some age.  As an example, if one is dealing with car tires, a tire that is not as good as new would have some pre-existing wear on it.  In other words, the tire would have some accumulated mileage.  A restoration factor concept is used to better describe the existing age of a component.  The restoration factor is used to determine the age of the component after a repair or any other maintenance action (addressed in later sections, such as a PM action or inspection).&lt;br /&gt;
 &lt;br /&gt;
The restoration factor in BlockSim is defined as a number between 0 and 1 and has the following effect:&lt;br /&gt;
&lt;br /&gt;
::#A restoration factor of 1 (100%) implies that the component is as good as new after repair, which in effect implies that the starting age of the component is 0.&lt;br /&gt;
::#A restoration factor of 0 implies that the component is the same as it was prior to repair, which in effect implies that the starting age of the component is the same as the age of the component at failure.&lt;br /&gt;
::#A restoration factor of 0.25 (25%) implies that the starting age of the component is equal to 75% of the age of the component at failure.&amp;lt;br&amp;gt;&lt;br /&gt;
The figure below provides a visual demonstration of restoration factors.  It should be noted that for successive maintenance actions on the same component, the age of the component after such an action is the initial age plus the time to failure since the last maintenance action.  &lt;br /&gt;
&lt;br /&gt;
[[Image:r5_new.png|center|300px|Different restoration factors(RF).|link=]]&lt;br /&gt;
&lt;br /&gt;
===Type I and Type II RFs===&lt;br /&gt;
&lt;br /&gt;
BlockSim offers two kinds of restoration factors.  The type I restoration factor is based on Kijima [[Appendix_B:_References | [12, 13]]] model I and assumes that the repairs can only fix the wear-out and damage incurred during the last period of operation.  Thus, the nth repair can only remove the damage incurred during the time between the (n-1)th and nth failures.  The type II restoration factor, based on Kijima model II, assumes that the repairs fix all of the wear-out and damage accumulated up to the current time.  As a result, the nth repair not only removes the damage incurred during the time between the (n-1)th and nth failures, but can also fix the cumulative damage incurred during the time from the first failure to the (n-1)th failure.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.13.png|center|500px|A Repairable System Structure|link=]]&lt;br /&gt;
&lt;br /&gt;
To illustrate this, consider a repairable system, observed from time &amp;lt;math&amp;gt;t=0\,\!&amp;lt;/math&amp;gt;, as shown in the figure above.  Let the successive failure times be denoted by &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{t}_{2}}\,\!&amp;lt;/math&amp;gt;, ... and let the times between failures be denoted by &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, ....  Let &amp;lt;math&amp;gt;RF\,\!&amp;lt;/math&amp;gt; denote the restoration factor; then the age of the system &amp;lt;math&amp;gt;{{v}_{n}}\,\!&amp;lt;/math&amp;gt; at time &amp;lt;math&amp;gt;{{t}_{n}}\,\!&amp;lt;/math&amp;gt; using the two types of restoration factors is:&lt;br /&gt;
&lt;br /&gt;
Type I Restoration Factor:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{v}_{n}}={{v}_{n-1}}+(1-RF){{x}_{n}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Type II Restoration Factor:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{v}_{n}}=(1-RF)({{v}_{n-1}}+{{x}_{n}}) &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
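The two virtual-age update rules translate directly into code. As a check, the sketch below reproduces the engine rebuild example discussed later in this article (RF = 0.5, successive operating periods of 6, 3 and 1.5 years):

```python
def next_age_type1(v_prev, x, rf):
    # Kijima model I: repair removes damage from the last period only.
    return v_prev + (1.0 - rf) * x

def next_age_type2(v_prev, x, rf):
    # Kijima model II: repair removes a fraction of all accumulated damage.
    return (1.0 - rf) * (v_prev + x)

# Type I, RF = 0.5: ages after successive rebuilds are 3, 4.5, 5.25 years.
v1 = 0.0
for x in (6.0, 3.0, 1.5):
    v1 = next_age_type1(v1, x, 0.5)

# Type II, RF = 0.5: the engine returns to "3 years old" after each rebuild.
v2 = 0.0
for x in (6.0, 3.0):
    v2 = next_age_type2(v2, x, 0.5)

print(v1, v2)  # 5.25 3.0
```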
&lt;br /&gt;
===Illustrating Type I RF Through an Example===&lt;br /&gt;
&lt;br /&gt;
Assume that you have a component with a Weibull failure distribution (&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; = 1.5, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; = 1000 hours), RF type I = 0.25 and the component undergoes instant repair.  Furthermore, assume that the component starts life new (i.e., with a start age of zero).  The simulation steps are as follows:&lt;br /&gt;
&lt;br /&gt;
#Generate a uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.7021885\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Then, the first failure event will be at 500 hours.&amp;lt;br&amp;gt;&lt;br /&gt;
#After instantaneous repair, the component will begin life with an age after repair of 375 hours ((1 - 0.25) x 500).&amp;lt;br&amp;gt;&lt;br /&gt;
#Generate another uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.8824969\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#The next failure event is now determined using the conditional reliability equation, or: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(x+v)= &amp;amp; R(x,v)\cdot R(v) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot R(375) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot 0.7948200 \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.7014261 \\ &lt;br /&gt;
x+375= &amp;amp; 501.024 \\ &lt;br /&gt;
x = &amp;amp; 126.024  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus, the next failure event will be at 500 + 126.024 = 626.024 hours.  Note that if the component had been as good as new (i.e., RF = 100%), then the next failure would have been at 750 hours (500 + 250), where 250 hours is the time corresponding to a reliability of 0.8824969, which is the random number that was generated in Step 4.&lt;br /&gt;
&lt;br /&gt;
:6. At this failure point, the item&#039;s age will now be equal to the initial age, after the first corrective action, plus the additional time it operated, or 375 + 126.024 = 501.024 hours.&lt;br /&gt;
:7. Thus, the age after the second repair will be the previous age plus one minus the restoration factor times the operating time since the last repair, or 375 + ((1-0.25) x 126.024) = 469.518 hours.&lt;br /&gt;
:8. Go to Step 4 and repeat the process.&lt;br /&gt;
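The steps above can be reproduced with a short simulation sketch. The next failure time is obtained by inverting the conditional Weibull reliability, i.e., solving R(x + v) = U·R(v) for x, which gives x = η(−ln(U·R(v)))^(1/β) − v:

```python
import math

BETA, ETA = 1.5, 1000.0   # Weibull parameters from the example

def rel(t):
    """Weibull reliability R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / ETA) ** BETA))

def time_to_next_failure(v, u):
    """Operating time to the next failure for a component of age v, where
    the uniform draw u is the conditional reliability R(x+v)/R(v)."""
    # Solve R(x + v) = u * R(v) for x.
    return ETA * (-math.log(u * rel(v))) ** (1.0 / BETA) - v

rf = 0.25
t1 = time_to_next_failure(0.0, 0.7021885)   # first failure: ~500 hours
v  = (1.0 - rf) * t1                        # age after repair: ~375 hours
x2 = time_to_next_failure(v, 0.8824969)     # next operating period: ~126.02 hours
v  = v + (1.0 - rf) * x2                    # type I age after 2nd repair: ~469.5 hours
print(round(t1, 3), round(x2, 3), round(v, 3))
```

For the type II rule, the last update would instead be `v = (1.0 - rf) * (v + x2)`, giving an age of about 375.77 hours after the second repair, as in the next example.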
&lt;br /&gt;
===Illustrating Type II RF Through an Example===&lt;br /&gt;
Assume that you have a component with a Weibull failure distribution (&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; = 1.5, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; = 1000 hours), RF type II = 0.25 and the component undergoes instant repair.  Furthermore, assume that the component starts life new (i.e., with a start age of zero).  The simulation steps are as follows:&lt;br /&gt;
&lt;br /&gt;
#Generate a uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.7021885\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Then, the first failure event will be at 500 hours.&amp;lt;br&amp;gt;&lt;br /&gt;
#After instantaneous repair, the component will begin life with an age after repair of 375 hours ((1 - 0.25) x 500).&amp;lt;br&amp;gt;&lt;br /&gt;
#Generate another uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.8824969\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#The next failure event is now determined using the conditional reliability equation, or: &lt;br /&gt;
&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(x+v)= &amp;amp; R(x,v)\cdot R(v) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot R(375) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot 0.7948200 \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.7014261 \\ &lt;br /&gt;
x+375= &amp;amp; 501.024 \\ &lt;br /&gt;
x= &amp;amp; 126.024  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
:Thus, the next failure event will be at 500 + 126.024 = 626.024 hours.  Note that if the component had been as good as new (i.e., RF = 100%), then the next failure would have been at 750 hours (500 + 250), where 250 hours is the time corresponding to a reliability of 0.8824969, which is the random number that was generated in Step 4.&lt;br /&gt;
&lt;br /&gt;
:6.  At this failure point, the item&#039;s age will now be equal to the initial age, after the first corrective action, plus the additional time it operated, or 375 + 126.024 = 501.024 hours.&lt;br /&gt;
:7.  Thus, the age after the second repair will be one minus the restoration factor times the age of the component at failure, or (1 - 0.25) x (375 + 126.024) = 375.768 hours.&lt;br /&gt;
:8.  Go to Step 4 and repeat the process.&lt;br /&gt;
&lt;br /&gt;
===Discussion of Type I and Type II RFs===&lt;br /&gt;
As an application example, consider an automotive engine that fails after six years of operation.  The engine is rebuilt.  The rebuild has the effect of rejuvenating the engine to a condition as if it were three years old (i.e., a 50% RF).  Assume that the rebuild affects all of the damage on the engine (i.e., a Type II restoration).  The engine fails again after three years (when it again reaches an age of six) and another rebuild is required.  This rebuild will also rejuvenate the engine by 50%, thus making it three years old again.&lt;br /&gt;
&lt;br /&gt;
Now consider a similar engine subjected to a similar rebuild, but that the rebuild only affects the damage since the last repair (i.e., a Type I restoration of 50%).  The first rebuild will rejuvenate the engine to a three-year-old condition.  The engine will fail again after three years, but the rebuild this time will only affect the age (of three years) after the first rebuild.  Thus the engine will have an age of four and a half years after the second rebuild ( &amp;lt;math&amp;gt;3+(1-0.5) \times 3=4.5\,\!&amp;lt;/math&amp;gt; ).  After the second rebuild the engine will fail again after a period of one and a half years and a third rebuild will be required.  The age of the engine after the third rebuild will be five years and three months ( &amp;lt;math&amp;gt;4.5+(1-0.5) \times 1.5=5.25\,\!&amp;lt;/math&amp;gt; ).&lt;br /&gt;
&lt;br /&gt;
It should be pointed out that when dealing with constant failure rates (i.e., with a distribution such as the exponential), the restoration factor has no effect.&lt;br /&gt;
&lt;br /&gt;
===Calculations to Obtain RFs===&lt;br /&gt;
The two types of restoration factors discussed in the previous sections can be calculated using the parametric RDA (Recurrent Data Analysis) tool in Weibull++.  This tool uses the GRP (General Renewal Process) model to analyze failure data of a repairable item.  More information on the Parametric RDA tool and the GRP model can be found in [[Appendix_B:_References | [25]]].  As an example, consider the times to failure for an air-conditioning unit of an aircraft recorded in the following table.  Assume that each time the unit is repaired, the repair can only remove the damage incurred during the last period of operation.  This assumption implies a type I RF, which is specified as an analysis setting in the Weibull++ folio.  The type I RF for the air-conditioning unit can be calculated using the results from Weibull++ shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.14.png|center|727px|thumb|Using the Parametric RDA tool in Weibull++ to calculate restoration factors.|link=]]&lt;br /&gt;
&lt;br /&gt;
[[Image:8.14t.png|center|300px|link=]]&lt;br /&gt;
 &lt;br /&gt;
The value of the action effectiveness factor &amp;lt;math&amp;gt;q\,\!&amp;lt;/math&amp;gt; obtained from Weibull++ is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
q=0.1344 &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The type I RF is calculated using &amp;lt;math&amp;gt;q\,\!&amp;lt;/math&amp;gt; as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
RF= &amp;amp; 1-q \\ &lt;br /&gt;
= &amp;amp; 1-0.1344 \\ &lt;br /&gt;
= &amp;amp; 0.8656  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
The parameters of the Weibull distribution for the air-conditioning unit can also be calculated. &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is obtained from Weibull++ as 1.1976. &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; can be calculated using the &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values from Weibull++ as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\eta = &amp;amp; {{\left( \frac{1}{\lambda } \right)}^{\tfrac{1}{\beta }}} \\ &lt;br /&gt;
= &amp;amp; {{\left( \frac{1}{0.0049} \right)}^{\tfrac{1}{1.1976}}} \\ &lt;br /&gt;
= &amp;amp; 84.8582  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
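These two conversions can be checked numerically; β = 1.1976 and λ = 0.0049 are the values returned by Weibull++ in this example:

```python
# Convert the GRP action effectiveness factor q into a type I RF,
# and recover eta from the (beta, lambda) parameterization.
q = 0.1344
rf = 1.0 - q                       # type I RF = 0.8656

beta, lam = 1.1976, 0.0049
eta = (1.0 / lam) ** (1.0 / beta)  # ~84.86 hours
print(rf, round(eta, 2))
```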
&lt;br /&gt;
The values of the type I RF, &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; calculated above can now be used to model the air-conditioning unit as a component in BlockSim.&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Imperfect_Repairs&amp;diff=64916</id>
		<title>Imperfect Repairs</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Imperfect_Repairs&amp;diff=64916"/>
		<updated>2017-02-08T19:43:35Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Illustrating Type II RF Through an Example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner BlockSim Articles}}{{Navigation box}}&lt;br /&gt;
&#039;&#039;This article also appears in the [[Repairable_Systems_Analysis_Through_Simulation#Imperfect_Repairs|System Analysis Reference]] book.&#039;&#039;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
===Restoration Factors (RF)===&lt;br /&gt;
&amp;lt;includeonly&amp;gt;In the prior discussion it was assumed that a repaired component is as good as new after repair.  This is usually the case when replacing a component with a new one.  &amp;lt;/includeonly&amp;gt;The concept of a restoration factor may be used in cases in which one wants to model imperfect repair, or a repair with a used component.  The best way to indicate that a component is not as good as new is to give the component some age.  As an example, if one is dealing with car tires, a tire that is not as good as new would have some pre-existing wear on it.  In other words, the tire would have some accumulated mileage.  A restoration factor concept is used to better describe the existing age of a component.  The restoration factor is used to determine the age of the component after a repair or any other maintenance action (addressed in later sections, such as a PM action or inspection).&lt;br /&gt;
 &lt;br /&gt;
The restoration factor in BlockSim is defined as a number between 0 and 1 and has the following effect:&lt;br /&gt;
&lt;br /&gt;
::#A restoration factor of 1 (100%) implies that the component is as good as new after repair, which in effect implies that the starting age of the component is 0.&lt;br /&gt;
::#A restoration factor of 0 implies that the component is the same as it was prior to repair, which in effect implies that the starting age of the component is the same as the age of the component at failure.&lt;br /&gt;
::#A restoration factor of 0.25 (25%) implies that the starting age of the component is equal to 75% of the age of the component at failure.&amp;lt;br&amp;gt;&lt;br /&gt;
The figure below provides a visual demonstration of restoration factors.  It should be noted that for successive maintenance actions on the same component, the age of the component after such an action is the initial age plus the time to failure since the last maintenance action.  &lt;br /&gt;
&lt;br /&gt;
[[Image:r5_new.png|center|300px|Different restoration factors(RF).|link=]]&lt;br /&gt;
&lt;br /&gt;
===Type I and Type II RFs===&lt;br /&gt;
&lt;br /&gt;
BlockSim offers two kinds of restoration factors.  The type I restoration factor is based on Kijima [[Appendix_B:_References | [12, 13]]] model I and assumes that the repairs can only fix the wear-out and damage incurred during the last period of operation.  Thus, the nth repair can only remove the damage incurred during the time between the (n-1)th and nth failures.  The type II restoration factor, based on Kijima model II, assumes that the repairs fix all of the wear-out and damage accumulated up to the current time.  As a result, the nth repair not only removes the damage incurred during the time between the (n-1)th and nth failures, but can also fix the cumulative damage incurred during the time from the first failure to the (n-1)th failure.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.13.png|center|500px|A Repairable System Structure|link=]]&lt;br /&gt;
&lt;br /&gt;
To illustrate this, consider a repairable system, observed from time &amp;lt;math&amp;gt;t=0\,\!&amp;lt;/math&amp;gt;, as shown in the figure above.  Let the successive failure times be denoted by &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{t}_{2}}\,\!&amp;lt;/math&amp;gt;, ... and let the times between failures be denoted by &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, ....  Let &amp;lt;math&amp;gt;RF\,\!&amp;lt;/math&amp;gt; denote the restoration factor; then the age of the system &amp;lt;math&amp;gt;{{v}_{n}}\,\!&amp;lt;/math&amp;gt; at time &amp;lt;math&amp;gt;{{t}_{n}}\,\!&amp;lt;/math&amp;gt; using the two types of restoration factors is:&lt;br /&gt;
&lt;br /&gt;
Type I Restoration Factor:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{v}_{n}}={{v}_{n-1}}+(1-RF){{x}_{n}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Type II Restoration Factor:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{v}_{n}}=(1-RF)({{v}_{n-1}}+{{x}_{n}}) &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Illustrating Type I RF Through an Example===&lt;br /&gt;
&lt;br /&gt;
Assume that you have a component with a Weibull failure distribution (&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; = 1.5, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; = 1000 hours), RF type I = 0.25 and the component undergoes instant repair.  Furthermore, assume that the component starts life new (i.e., with a start age of zero).  The simulation steps are as follows:&lt;br /&gt;
&lt;br /&gt;
#Generate a uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.7021885\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Then, the first failure event will be at 500 hours.  &amp;lt;br&amp;gt;&lt;br /&gt;
#After instantaneous repair, the component will begin life with an age after repair of 375 hours &amp;lt;math&amp;gt;((1-0.25) \times 500)\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Generate another uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.8824969\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#The next failure event is now determined using the conditional reliability equation, or: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(x+v)= &amp;amp; R(x,v)\cdot R(v) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot R(375) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot 0.7948200 \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.7014261 \\ &lt;br /&gt;
x+375= &amp;amp; 501.024 \\ &lt;br /&gt;
x = &amp;amp; 126.024  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus, the next failure event will be at 500 + 126.024 = 626.024 hours.  Note that if the component had been as good as new (i.e., RF = 100%), then the next failure would have been at 750 hours (500 + 250), where 250 hours is the time corresponding to a reliability of 0.8824969, which is the random number that was generated in Step 4.&lt;br /&gt;
&lt;br /&gt;
:6. At this failure point, the item&#039;s age will now be equal to the initial age, after the first corrective action, plus the additional time it operated, or 375 + 126.024 = 501.024 hours.&lt;br /&gt;
:7. Thus, the age after the second repair will be the previous age plus one minus the restoration factor times the operating time since the last repair, or 375 + ((1-0.25) x 126.024) = 469.518 hours.&lt;br /&gt;
:8. Go to Step 4 and repeat the process.&lt;br /&gt;
&lt;br /&gt;
===Illustrating Type II RF Through an Example===&lt;br /&gt;
Assume that you have a component with a Weibull failure distribution (&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; = 1.5, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; = 1000 hours), RF type II = 0.25 and the component undergoes instant repair.  Furthermore, assume that the component starts life new (i.e., with a start age of zero).  The simulation steps are as follows:&lt;br /&gt;
&lt;br /&gt;
#Generate a uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.7021885\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Then, the first failure event will be at 500 hours.&amp;lt;br&amp;gt;&lt;br /&gt;
#After instantaneous repair, the component will begin life with an age after repair of 375 hours ((1 - 0.25) x 500).&amp;lt;br&amp;gt;&lt;br /&gt;
#Generate another uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.8824969\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#The next failure event is now determined using the conditional reliability equation, or: &lt;br /&gt;
&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(x+v)= &amp;amp; R(x,v)\cdot R(v) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot R(375) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot 0.7948200 \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.7014261 \\ &lt;br /&gt;
x+375= &amp;amp; 501.024 \\ &lt;br /&gt;
x= &amp;amp; 126.024  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
:Thus, the next failure event will be at 500 + 126.024 = 626.024 hours.  Note that if the component had been as good as new (i.e., RF = 100%), then the next failure would have been at 750 hours (500 + 250), where 250 hours is the time corresponding to a reliability of 0.8824969, which is the random number that was generated in Step 4.&lt;br /&gt;
&lt;br /&gt;
:6.  At this failure point, the item&#039;s age will now be equal to the initial age, after the first corrective action, plus the additional time it operated, or 375 + 126.024 = 501.024 hours.&lt;br /&gt;
:7.  Thus, the age after the second repair will be one minus the restoration factor times the age of the component at failure, or (1 - 0.25) x (375 + 126.024) = 375.768 hours.&lt;br /&gt;
:8.  Go to Step 4 and repeat the process.&lt;br /&gt;
&lt;br /&gt;
===Discussion of Type I and Type II RFs===&lt;br /&gt;
As an application example, consider an automotive engine that fails after six years of operation.  The engine is rebuilt.  The rebuild has the effect of rejuvenating the engine to a condition as if it were three years old (i.e., a 50% RF).  Assume that the rebuild affects all of the damage on the engine (i.e., a Type II restoration).  The engine fails again after three years (when it again reaches an age of six) and another rebuild is required.  This rebuild will also rejuvenate the engine by 50%, thus making it three years old again.&lt;br /&gt;
&lt;br /&gt;
Now consider a similar engine subjected to a similar rebuild, but that the rebuild only affects the damage since the last repair (i.e., a Type I restoration of 50%).  The first rebuild will rejuvenate the engine to a three-year-old condition.  The engine will fail again after three years, but the rebuild this time will only affect the age (of three years) after the first rebuild.  Thus the engine will have an age of four and a half years after the second rebuild ( &amp;lt;math&amp;gt;3+(1-0.5) \times 3=4.5\,\!&amp;lt;/math&amp;gt; ).  After the second rebuild the engine will fail again after a period of one and a half years and a third rebuild will be required.  The age of the engine after the third rebuild will be five years and three months ( &amp;lt;math&amp;gt;4.5+(1-0.5) \times 1.5=5.25\,\!&amp;lt;/math&amp;gt; ).&lt;br /&gt;
&lt;br /&gt;
It should be pointed out that when dealing with constant failure rates (i.e., with a distribution such as the exponential), the restoration factor has no effect.&lt;br /&gt;
&lt;br /&gt;
===Calculations to Obtain RFs===&lt;br /&gt;
The two types of restoration factors discussed in the previous sections can be calculated using the parametric RDA (Recurrent Data Analysis) tool in Weibull++.  This tool uses the GRP (General Renewal Process) model to analyze failure data of a repairable item.  More information on the Parametric RDA tool and the GRP model can be found in [[Appendix_B:_References | [25]]].  As an example, consider the times to failure for an air-conditioning unit of an aircraft recorded in the following table.  Assume that each time the unit is repaired, the repair can only remove the damage incurred during the last period of operation.  This assumption implies a type I RF, which is specified as an analysis setting in the Weibull++ folio.  The type I RF for the air-conditioning unit can be calculated using the results from Weibull++ shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.14.png|center|727px|thumb|Using the Parametric RDA tool in Weibull++ to calculate restoration factors.|link=]]&lt;br /&gt;
&lt;br /&gt;
[[Image:8.14t.png|center|300px|link=]]&lt;br /&gt;
 &lt;br /&gt;
The value of the action effectiveness factor &amp;lt;math&amp;gt;q\,\!&amp;lt;/math&amp;gt; obtained from Weibull++ is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
q=0.1344 &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The type I RF is calculated using &amp;lt;math&amp;gt;q\,\!&amp;lt;/math&amp;gt; as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
RF= &amp;amp; 1-q \\ &lt;br /&gt;
= &amp;amp; 1-0.1344 \\ &lt;br /&gt;
= &amp;amp; 0.8656  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
The parameters of the Weibull distribution for the air-conditioning unit can also be calculated. &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is obtained from Weibull++ as 1.1976. &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; can be calculated using the &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values from Weibull++ as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\eta = &amp;amp; {{\left( \frac{1}{\lambda } \right)}^{\tfrac{1}{\beta }}} \\ &lt;br /&gt;
= &amp;amp; {{\left( \frac{1}{0.0049} \right)}^{\tfrac{1}{1.1976}}} \\ &lt;br /&gt;
= &amp;amp; 84.8582  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The values of the type I RF, &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; calculated above can now be used to model the air-conditioning unit as a component in BlockSim.&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Imperfect_Repairs&amp;diff=64915</id>
		<title>Imperfect Repairs</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Imperfect_Repairs&amp;diff=64915"/>
		<updated>2017-02-08T19:43:17Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Illustrating Type II RF Through an Example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner BlockSim Articles}}{{Navigation box}}&lt;br /&gt;
&#039;&#039;This article also appears in the [[Repairable_Systems_Analysis_Through_Simulation#Imperfect_Repairs|System Analysis Reference]] book.&#039;&#039;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
===Restoration Factors (RF)===&lt;br /&gt;
&amp;lt;includeonly&amp;gt;In the prior discussion it was assumed that a repaired component is as good as new after repair.  This is usually the case when replacing a component with a new one.  &amp;lt;/includeonly&amp;gt;The concept of a restoration factor may be used in cases in which one wants to model imperfect repair, or a repair with a used component.  The best way to indicate that a component is not as good as new is to give the component some age.  As an example, if one is dealing with car tires, a tire that is not as good as new would have some pre-existing wear on it.  In other words, the tire would have some accumulated mileage.  A restoration factor concept is used to better describe the existing age of a component.  The restoration factor is used to determine the age of the component after a repair or any other maintenance action (addressed in later sections, such as a PM action or inspection).&lt;br /&gt;
 &lt;br /&gt;
The restoration factor in BlockSim is defined as a number between 0 and 1 and has the following effect:&lt;br /&gt;
&lt;br /&gt;
::#A restoration factor of 1 (100%) implies that the component is as good as new after repair, which in effect implies that the starting age of the component is 0.&lt;br /&gt;
::#A restoration factor of 0 implies that the component is the same as it was prior to repair, which in effect implies that the starting age of the component is the same as the age of the component at failure.&lt;br /&gt;
::#A restoration factor of 0.25 (25%) implies that the starting age of the component is equal to 75% of the age of the component at failure.&amp;lt;br&amp;gt;&lt;br /&gt;
The figure below provides a visual demonstration of restoration factors.  It should be noted that for successive maintenance actions on the same component, the age of the component after such an action is the initial age plus the time to failure since the last maintenance action.  &lt;br /&gt;
&lt;br /&gt;
[[Image:r5_new.png|center|300px|Different restoration factors (RF).|link=]]&lt;br /&gt;
&lt;br /&gt;
===Type I and Type II RFs===&lt;br /&gt;
&lt;br /&gt;
BlockSim offers two kinds of restoration factors.  The type I restoration factor is based on Kijima [[Appendix_B:_References | [12, 13]]] model I and assumes that the repairs can only fix the wear-out and damage incurred during the last period of operation.  Thus, the nth repair can only remove the damage incurred during the time between the (n-1)th and nth failures.  The type II restoration factor, based on Kijima model II, assumes that the repairs fix all of the wear-out and damage accumulated up to the current time.  As a result, the nth repair not only removes the damage incurred during the time between the (n-1)th and nth failures, but can also fix the cumulative damage incurred during the time from the first failure to the (n-1)th failure.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.13.png|center|500px|A Repairable System Structure|link=]]&lt;br /&gt;
&lt;br /&gt;
To illustrate this, consider a repairable system, observed from time &amp;lt;math&amp;gt;t=0\,\!&amp;lt;/math&amp;gt;, as shown in the figure above.  Let the successive failure times be denoted by &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{t}_{2}}\,\!&amp;lt;/math&amp;gt;, ... and let the times between failures be denoted by &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, ....  Let &amp;lt;math&amp;gt;RF\,\!&amp;lt;/math&amp;gt; denote the restoration factor; then the age of the system &amp;lt;math&amp;gt;{{v}_{n}}\,\!&amp;lt;/math&amp;gt; at time &amp;lt;math&amp;gt;{{t}_{n}}\,\!&amp;lt;/math&amp;gt; using the two types of restoration factors is:&lt;br /&gt;
&lt;br /&gt;
Type I Restoration Factor:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{v}_{n}}={{v}_{n-1}}+(1-RF){{x}_{n}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Type II Restoration Factor:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{v}_{n}}=(1-RF)({{v}_{n-1}}+{{x}_{n}}) &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Illustrating Type I RF Through an Example===&lt;br /&gt;
&lt;br /&gt;
Assume that you have a component with a Weibull failure distribution (&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; = 1.5, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; = 1000 hours), RF type I = 0.25 and the component undergoes instant repair.  Furthermore, assume that the component starts life new (i.e., with a start age of zero).  The simulation steps are as follows:&lt;br /&gt;
&lt;br /&gt;
#Generate a uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.7021885\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Then, the first failure event will be at 500 hours.  &amp;lt;br&amp;gt;&lt;br /&gt;
#After instantaneous repair, the component will begin life with an age after repair of 375 hours &amp;lt;math&amp;gt;((1-0.25) \times 500)\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Generate another uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.8824969\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#The next failure event is now determined using the conditional reliability equation, or: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(x+v)= &amp;amp; R(x,v)\cdot R(v) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot R(375) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot 0.7948200 \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.7014261 \\ &lt;br /&gt;
x+375= &amp;amp; 501.024 \\ &lt;br /&gt;
x = &amp;amp; 126.024  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus, the next failure event will be at 500 + 126.024 = 626.024 hours.  Note that if the component had been as good as new (i.e., RF = 100%), then the next failure would have been at 750 hours (500 + 250), where 250 hours is the time corresponding to a reliability of 0.8824969, which is the random number that was generated in Step 4.&lt;br /&gt;
&lt;br /&gt;
:6. At this failure point, the item&#039;s age will now be equal to the initial age, after the first corrective action, plus the additional time it operated, or 375 + 126.024 = 501.024 hours.&lt;br /&gt;
:7. Thus, the age after the second repair will be the sum of the previous age and one minus the restoration factor, (1 - RF), times the operating time since the last failure, or 375 + ((1-0.25) x 126.024) = 469.518 hours.&lt;br /&gt;
:8. Go to Step 4 and repeat the process.&lt;br /&gt;
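The Type I steps above can be sketched as a short simulation (assuming instantaneous repairs and the example's Weibull parameters; the function names are illustrative, not BlockSim functions). The only difference from a Type II simulation would be the single age-update line:

```python
import math

BETA, ETA, RF = 1.5, 1000.0, 0.25     # Weibull parameters and type I RF

def reliability(t):
    # Weibull reliability R(t) = exp(-(t/eta)^beta)
    return math.exp(-((t / ETA) ** BETA))

def time_at_reliability(r):
    # invert R(t) = r for t
    return ETA * (-math.log(r)) ** (1.0 / BETA)

clock, age = 0.0, 0.0
for u in (0.7021885, 0.8824969):      # the uniform draws from Steps 1 and 4
    # solve the conditional reliability equation R(x + v) = u * R(v) for x
    x = time_at_reliability(u * reliability(age)) - age
    clock += x                        # system time of this failure
    age = age + (1 - RF) * x          # Kijima type I age update

print(round(clock, 3), round(age, 3)) # about 626.024 and 469.518
```

This reproduces the 626.024-hour second failure time and the 469.518-hour age after the second repair from Steps 6 and 7.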
&lt;br /&gt;
===Illustrating Type II RF Through an Example===&lt;br /&gt;
Assume that you have a component with a Weibull failure distribution (&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; = 1.5, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; = 1000 hours), RF type II = 0.25 and the component undergoes instant repair.  Furthermore, assume that the component starts life new (i.e., with a start age of zero).  The simulation steps are as follows:&lt;br /&gt;
&lt;br /&gt;
#Generate a uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.7021885\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Then, the first failure event will be at 500 hours.&amp;lt;br&amp;gt;&lt;br /&gt;
#After instantaneous repair, the component will begin life with an age after repair of 375 hours ((1 - 0.25) x 500).&amp;lt;br&amp;gt;&lt;br /&gt;
#Generate another uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.8824969\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#The next failure event is now determined using the conditional reliability equation, or: &lt;br /&gt;
&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(x+v)= &amp;amp; R(x,v)\cdot R(v) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot R(375) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot 0.7948200 \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.7014261 \\ &lt;br /&gt;
x+375= &amp;amp; 501.024 \\ &lt;br /&gt;
x= &amp;amp; 126.024  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
:Thus, the next failure event will be at 500 + 126.024 = 626.024 hours.  Note that if the component had been as good as new (i.e., RF = 100%), then the next failure would have been at 750 hours (500 + 250), where 250 hours is the time corresponding to a reliability of 0.8824969, which is the random number that was generated in Step 4.&lt;br /&gt;
&lt;br /&gt;
:6.  At this failure point, the item&#039;s age will now be equal to the initial age, after the first corrective action, plus the additional time it operated, or 375 + 126.024 = 501.024 hours.&lt;br /&gt;
:7.  Thus, the age after the second repair will be one minus the restoration factor, (1 - RF), times the age of the component at failure, or (1 - 0.25) x (375 + 126.024) = 375.768 hours.&lt;br /&gt;
:8.  Go to Step 4 and repeat the process.&lt;br /&gt;
&lt;br /&gt;
===Discussion of Type I and Type II RFs===&lt;br /&gt;
As an application example, consider an automotive engine that fails after six years of operation.  The engine is rebuilt.  The rebuild has the effect of rejuvenating the engine to a condition as if it were three years old (i.e., a 50% RF).  Assume that the rebuild affects all of the damage on the engine (i.e., a Type II restoration).  The engine fails again after three years (when it again reaches an age of six) and another rebuild is required.  This rebuild will also rejuvenate the engine by 50%, thus making it three years old again.&lt;br /&gt;
&lt;br /&gt;
Now consider a similar engine subjected to a similar rebuild, but assume that the rebuild only affects the damage since the last repair (i.e., a Type I restoration of 50%).  The first rebuild will rejuvenate the engine to a three-year-old condition.  The engine will fail again after three years, but this time the rebuild will only affect the age (of three years) accumulated after the first rebuild.  Thus, the engine will have an age of four and a half years after the second rebuild ( &amp;lt;math&amp;gt;3+(1-0.5) \times 3=4.5\,\!&amp;lt;/math&amp;gt; ).  After the second rebuild, the engine will fail again after a period of one and a half years, and a third rebuild will be required.  The age of the engine after the third rebuild will be five years and three months ( &amp;lt;math&amp;gt;4.5+(1-0.5) \times 1.5=5.25\,\!&amp;lt;/math&amp;gt; ).&lt;br /&gt;
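The two engine histories can be checked with a few lines of code (a minimal sketch; kijima_age is an illustrative helper, not a BlockSim function):

```python
def kijima_age(times_between_failures, rf, type2=False):
    """Virtual age after each repair under the Kijima type I / type II updates."""
    v = 0.0
    ages = []
    for x in times_between_failures:
        if type2:
            v = (1 - rf) * (v + x)   # repair removes a share of all damage
        else:
            v = v + (1 - rf) * x     # repair removes a share of the last period only
        ages.append(v)
    return ages

# Engine with a 50% RF: operates 6, 3 and 1.5 years between rebuilds (type I),
# versus 6 and 3 years between rebuilds (type II)
print(kijima_age([6, 3, 1.5], 0.5))          # type I:  [3.0, 4.5, 5.25]
print(kijima_age([6, 3], 0.5, type2=True))   # type II: [3.0, 3.0]
```

The type II engine keeps returning to a three-year-old condition, while the type I engine's post-rebuild age creeps upward, matching the narrative above.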
&lt;br /&gt;
It should be pointed out that when dealing with constant failure rates (i.e., with a distribution such as the exponential), the restoration factor has no effect.&lt;br /&gt;
&lt;br /&gt;
===Calculations to Obtain RFs===&lt;br /&gt;
The two types of restoration factors discussed in the previous sections can be calculated using the parametric RDA (Recurrent Data Analysis) tool in Weibull++.  This tool uses the GRP (General Renewal Process) model to analyze failure data of a repairable item.  More information on the Parametric RDA tool and the GRP model can be found in [[Appendix_B:_References | [25]]].  As an example, consider the times to failure for an air-conditioning unit of an aircraft recorded in the following table.  Assume that each time the unit is repaired, the repair can only remove the damage incurred during the last period of operation.  This assumption implies a type I RF, which is specified as an analysis setting in the Weibull++ folio.  The type I RF for the air-conditioning unit can be calculated using the results from Weibull++ shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.14.png|center|727px|thumb|Using the Parametric RDA tool in Weibull++ to calculate restoration factors.|link=]]&lt;br /&gt;
&lt;br /&gt;
[[Image:8.14t.png|center|300px|link=]]&lt;br /&gt;
 &lt;br /&gt;
The value of the action effectiveness factor &amp;lt;math&amp;gt;q\,\!&amp;lt;/math&amp;gt; obtained from Weibull++ is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
q=0.1344 &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The type I RF is calculated using &amp;lt;math&amp;gt;q\,\!&amp;lt;/math&amp;gt; as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
RF= &amp;amp; 1-q \\ &lt;br /&gt;
= &amp;amp; 1-0.1344 \\ &lt;br /&gt;
= &amp;amp; 0.8656  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
The parameters of the Weibull distribution for the air-conditioning unit can also be calculated. &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is obtained from Weibull++ as 1.1976. &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; can be calculated using the &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values from Weibull++ as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\eta = &amp;amp; {{\left( \frac{1}{\lambda } \right)}^{\tfrac{1}{\beta }}} \\ &lt;br /&gt;
= &amp;amp; {{\left( \frac{1}{0.0049} \right)}^{\tfrac{1}{1.1976}}} \\ &lt;br /&gt;
= &amp;amp; 84.8582  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
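The two conversions above are simple enough to verify directly (a sketch using the example's q, beta and lambda values; variable names are illustrative):

```python
# Converting the Weibull++ GRP outputs quoted in the text into the
# quantities needed by BlockSim
q, beta, lam = 0.1344, 1.1976, 0.0049

rf_type1 = 1 - q                     # type I restoration factor
eta = (1 / lam) ** (1 / beta)        # Weibull scale parameter, in hours

print(round(rf_type1, 4))            # 0.8656
print(round(eta, 4))                 # about 84.8582
```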
&lt;br /&gt;
The values of the type I RF, &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; calculated above can now be used to model the air-conditioning unit as a component in BlockSim.&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Imperfect_Repairs&amp;diff=64914</id>
		<title>Imperfect Repairs</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Imperfect_Repairs&amp;diff=64914"/>
		<updated>2017-02-08T19:41:40Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Illustrating Type II RF Through an Example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner BlockSim Articles}}{{Navigation box}}&lt;br /&gt;
&#039;&#039;This article also appears in the [[Repairable_Systems_Analysis_Through_Simulation#Imperfect_Repairs|System Analysis Reference]] book.&#039;&#039;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
===Restoration Factors (RF)===&lt;br /&gt;
&amp;lt;includeonly&amp;gt;In the prior discussion it was assumed that a repaired component is as good as new after repair.  This is usually the case when replacing a component with a new one.  &amp;lt;/includeonly&amp;gt;The concept of a restoration factor may be used in cases in which one wants to model imperfect repair, or a repair with a used component.  The best way to indicate that a component is not as good as new is to give the component some age.  As an example, if one is dealing with car tires, a tire that is not as good as new would have some pre-existing wear on it.  In other words, the tire would have some accumulated mileage.  A restoration factor concept is used to better describe the existing age of a component.  The restoration factor is used to determine the age of the component after a repair or any other maintenance action (addressed in later sections, such as a PM action or inspection).&lt;br /&gt;
 &lt;br /&gt;
The restoration factor in BlockSim is defined as a number between 0 and 1 and has the following effect:&lt;br /&gt;
&lt;br /&gt;
::#A restoration factor of 1 (100%) implies that the component is as good as new after repair, which in effect implies that the starting age of the component is 0.&lt;br /&gt;
::#A restoration factor of 0 implies that the component is the same as it was prior to repair, which in effect implies that the starting age of the component is the same as the age of the component at failure.&lt;br /&gt;
::#A restoration factor of 0.25 (25%) implies that the starting age of the component is equal to 75% of the age of the component at failure.&amp;lt;br&amp;gt;&lt;br /&gt;
The figure below provides a visual demonstration of restoration factors.  It should be noted that for successive maintenance actions on the same component, the age of the component after such an action is the initial age plus the time to failure since the last maintenance action.  &lt;br /&gt;
&lt;br /&gt;
[[Image:r5_new.png|center|300px|Different restoration factors (RF).|link=]]&lt;br /&gt;
&lt;br /&gt;
===Type I and Type II RFs===&lt;br /&gt;
&lt;br /&gt;
BlockSim offers two kinds of restoration factors.  The type I restoration factor is based on Kijima [[Appendix_B:_References | [12, 13]]] model I and assumes that the repairs can only fix the wear-out and damage incurred during the last period of operation.  Thus, the nth repair can only remove the damage incurred during the time between the (n-1)th and nth failures.  The type II restoration factor, based on Kijima model II, assumes that the repairs fix all of the wear-out and damage accumulated up to the current time.  As a result, the nth repair not only removes the damage incurred during the time between the (n-1)th and nth failures, but can also fix the cumulative damage incurred during the time from the first failure to the (n-1)th failure.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.13.png|center|500px|A Repairable System Structure|link=]]&lt;br /&gt;
&lt;br /&gt;
To illustrate this, consider a repairable system, observed from time &amp;lt;math&amp;gt;t=0\,\!&amp;lt;/math&amp;gt;, as shown in the figure above.  Let the successive failure times be denoted by &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{t}_{2}}\,\!&amp;lt;/math&amp;gt;, ... and let the times between failures be denoted by &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, ....  Let &amp;lt;math&amp;gt;RF\,\!&amp;lt;/math&amp;gt; denote the restoration factor; then the age of the system &amp;lt;math&amp;gt;{{v}_{n}}\,\!&amp;lt;/math&amp;gt; at time &amp;lt;math&amp;gt;{{t}_{n}}\,\!&amp;lt;/math&amp;gt; using the two types of restoration factors is:&lt;br /&gt;
&lt;br /&gt;
Type I Restoration Factor:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{v}_{n}}={{v}_{n-1}}+(1-RF){{x}_{n}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Type II Restoration Factor:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{v}_{n}}=(1-RF)({{v}_{n-1}}+{{x}_{n}}) &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Illustrating Type I RF Through an Example===&lt;br /&gt;
&lt;br /&gt;
Assume that you have a component with a Weibull failure distribution (&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; = 1.5, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; = 1000 hours), RF type I = 0.25 and the component undergoes instant repair.  Furthermore, assume that the component starts life new (i.e., with a start age of zero).  The simulation steps are as follows:&lt;br /&gt;
&lt;br /&gt;
#Generate a uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.7021885\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Then, the first failure event will be at 500 hours.  &amp;lt;br&amp;gt;&lt;br /&gt;
#After instantaneous repair, the component will begin life with an age after repair of 375 hours &amp;lt;math&amp;gt;((1-0.25) \times 500)\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Generate another uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.8824969\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#The next failure event is now determined using the conditional reliability equation, or: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(x+v)= &amp;amp; R(x,v)\cdot R(v) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot R(375) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot 0.7948200 \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.7014261 \\ &lt;br /&gt;
x+375= &amp;amp; 501.024 \\ &lt;br /&gt;
x = &amp;amp; 126.024  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus, the next failure event will be at 500 + 126.024 = 626.024 hours.  Note that if the component had been as good as new (i.e., RF = 100%), then the next failure would have been at 750 hours (500 + 250), where 250 hours is the time corresponding to a reliability of 0.8824969, which is the random number that was generated in Step 4.&lt;br /&gt;
&lt;br /&gt;
:6. At this failure point, the item&#039;s age will now be equal to the initial age, after the first corrective action, plus the additional time it operated, or 375 + 126.024 = 501.024 hours.&lt;br /&gt;
:7. Thus, the age after the second repair will be the sum of the previous age and one minus the restoration factor, (1 - RF), times the operating time since the last failure, or 375 + ((1-0.25) x 126.024) = 469.518 hours.&lt;br /&gt;
:8. Go to Step 4 and repeat the process.&lt;br /&gt;
&lt;br /&gt;
===Illustrating Type II RF Through an Example===&lt;br /&gt;
Assume that you have a component with a Weibull failure distribution (&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; = 1.5, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; = 1000 hours), RF type II = 0.25 and the component undergoes instant repair.  Furthermore, assume that the component starts life new (i.e., with a start age of zero).  The simulation steps are as follows:&lt;br /&gt;
&lt;br /&gt;
#Generate a uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.7021885\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Then, the first failure event will be at 500 hours.  &amp;lt;br&amp;gt;&lt;br /&gt;
#After instantaneous repair, the component will begin life with an age after repair of 375 hours &amp;lt;math&amp;gt;((1-0.25) \times 500)\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Generate another uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.8824969\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#The next failure event is now determined using the conditional reliability equation, or: &lt;br /&gt;
&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(x+v)= &amp;amp; R(x,v)\cdot R(v) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot R(375) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot 0.7948200 \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.7014261 \\ &lt;br /&gt;
x+375= &amp;amp; 501.024 \\ &lt;br /&gt;
x= &amp;amp; 126.024  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
:Thus, the next failure event will be at 500 + 126.024 = 626.024 hours.  Note that if the component had been as good as new (i.e., RF = 100%), then the next failure would have been at 750 hours (500 + 250), where 250 hours is the time corresponding to a reliability of 0.8824969, which is the random number that was generated in Step 4.&lt;br /&gt;
&lt;br /&gt;
:6.  At this failure point, the item&#039;s age will now be equal to the initial age, after the first corrective action, plus the additional time it operated, or 375 + 126.024 = 501.024 hours.&lt;br /&gt;
:7.  Thus, the age after the second repair will be one minus the restoration factor, (1 - RF), times the age of the component at failure, or (1 - 0.25) x (375 + 126.024) = 375.768 hours.&lt;br /&gt;
:8.  Go to Step 4 and repeat the process.&lt;br /&gt;
&lt;br /&gt;
===Discussion of Type I and Type II RFs===&lt;br /&gt;
As an application example, consider an automotive engine that fails after six years of operation.  The engine is rebuilt.  The rebuild has the effect of rejuvenating the engine to a condition as if it were three years old (i.e., a 50% RF).  Assume that the rebuild affects all of the damage on the engine (i.e., a Type II restoration).  The engine fails again after three years (when it again reaches an age of six) and another rebuild is required.  This rebuild will also rejuvenate the engine by 50%, thus making it three years old again.&lt;br /&gt;
&lt;br /&gt;
Now consider a similar engine subjected to a similar rebuild, but assume that the rebuild only affects the damage since the last repair (i.e., a Type I restoration of 50%).  The first rebuild will rejuvenate the engine to a three-year-old condition.  The engine will fail again after three years, but this time the rebuild will only affect the age (of three years) accumulated after the first rebuild.  Thus, the engine will have an age of four and a half years after the second rebuild ( &amp;lt;math&amp;gt;3+(1-0.5) \times 3=4.5\,\!&amp;lt;/math&amp;gt; ).  After the second rebuild, the engine will fail again after a period of one and a half years, and a third rebuild will be required.  The age of the engine after the third rebuild will be five years and three months ( &amp;lt;math&amp;gt;4.5+(1-0.5) \times 1.5=5.25\,\!&amp;lt;/math&amp;gt; ).&lt;br /&gt;
&lt;br /&gt;
It should be pointed out that when dealing with constant failure rates (i.e., with a distribution such as the exponential), the restoration factor has no effect.&lt;br /&gt;
&lt;br /&gt;
===Calculations to Obtain RFs===&lt;br /&gt;
The two types of restoration factors discussed in the previous sections can be calculated using the parametric RDA (Recurrent Data Analysis) tool in Weibull++.  This tool uses the GRP (General Renewal Process) model to analyze failure data of a repairable item.  More information on the Parametric RDA tool and the GRP model can be found in [[Appendix_B:_References | [25]]].  As an example, consider the times to failure for an air-conditioning unit of an aircraft recorded in the following table.  Assume that each time the unit is repaired, the repair can only remove the damage incurred during the last period of operation.  This assumption implies a type I RF, which is specified as an analysis setting in the Weibull++ folio.  The type I RF for the air-conditioning unit can be calculated using the results from Weibull++ shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.14.png|center|727px|thumb|Using the Parametric RDA tool in Weibull++ to calculate restoration factors.|link=]]&lt;br /&gt;
&lt;br /&gt;
[[Image:8.14t.png|center|300px|link=]]&lt;br /&gt;
 &lt;br /&gt;
The value of the action effectiveness factor &amp;lt;math&amp;gt;q\,\!&amp;lt;/math&amp;gt; obtained from Weibull++ is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
q=0.1344 &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The type I RF is calculated using &amp;lt;math&amp;gt;q\,\!&amp;lt;/math&amp;gt; as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
RF= &amp;amp; 1-q \\ &lt;br /&gt;
= &amp;amp; 1-0.1344 \\ &lt;br /&gt;
= &amp;amp; 0.8656  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
The parameters of the Weibull distribution for the air-conditioning unit can also be calculated. &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is obtained from Weibull++ as 1.1976. &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; can be calculated using the &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values from Weibull++ as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\eta = &amp;amp; {{\left( \frac{1}{\lambda } \right)}^{\tfrac{1}{\beta }}} \\ &lt;br /&gt;
= &amp;amp; {{\left( \frac{1}{0.0049} \right)}^{\tfrac{1}{1.1976}}} \\ &lt;br /&gt;
= &amp;amp; 84.8582  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The values of the type I RF, &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; calculated above can now be used to model the air-conditioning unit as a component in BlockSim.&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Imperfect_Repairs&amp;diff=64913</id>
		<title>Imperfect Repairs</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Imperfect_Repairs&amp;diff=64913"/>
		<updated>2017-02-08T19:38:38Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Illustrating Type I RF Through an Example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner BlockSim Articles}}{{Navigation box}}&lt;br /&gt;
&#039;&#039;This article also appears in the [[Repairable_Systems_Analysis_Through_Simulation#Imperfect_Repairs|System Analysis Reference]] book.&#039;&#039;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
===Restoration Factors (RF)===&lt;br /&gt;
&amp;lt;includeonly&amp;gt;In the prior discussion it was assumed that a repaired component is as good as new after repair.  This is usually the case when replacing a component with a new one.  &amp;lt;/includeonly&amp;gt;The concept of a restoration factor may be used to model an imperfect repair, or a repair with a used component.  The best way to indicate that a component is not as good as new is to give the component some age.  As an example, if one is dealing with car tires, a tire that is not as good as new would have some pre-existing wear on it; in other words, it would have some accumulated mileage.  The restoration factor quantifies this existing age: it determines the age of the component after a repair or any other maintenance action (addressed in later sections, such as a PM action or inspection).&lt;br /&gt;
 &lt;br /&gt;
The restoration factor in BlockSim is defined as a number between 0 and 1 and has the following effect:&lt;br /&gt;
&lt;br /&gt;
::#A restoration factor of 1 (100%) implies that the component is as good as new after repair, which in effect implies that the starting age of the component is 0.&lt;br /&gt;
::#A restoration factor of 0 implies that the component is the same as it was prior to repair, which in effect implies that the starting age of the component is the same as the age of the component at failure.&lt;br /&gt;
::#A restoration factor of 0.25 (25%) implies that the starting age of the component is equal to 75% of the age of the component at failure.&amp;lt;br&amp;gt;&lt;br /&gt;
The figure below provides a visual demonstration of restoration factors.  It should be noted that for successive maintenance actions on the same component, the age of the component after such an action is the initial age plus the time to failure since the last maintenance action.  &lt;br /&gt;
&lt;br /&gt;
[[Image:r5_new.png|center|300px|Different restoration factors (RF).|link=]]&lt;br /&gt;
&lt;br /&gt;
===Type I and Type II RFs===&lt;br /&gt;
&lt;br /&gt;
BlockSim offers two kinds of restoration factors.  The type I restoration factor is based on Kijima [[Appendix_B:_References | [12, 13]]] model I and assumes that the repairs can only fix the wear-out and damage incurred during the last period of operation.  Thus, the nth repair can only remove the damage incurred during the time between the (n-1)th and nth failures.  The type II restoration factor, based on Kijima model II, assumes that the repairs fix all of the wear-out and damage accumulated up to the current time.  As a result, the nth repair not only removes the damage incurred during the time between the (n-1)th and nth failures, but can also fix the cumulative damage incurred during the time from the first failure to the (n-1)th failure.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.13.png|center|500px|A Repairable System Structure|link=]]&lt;br /&gt;
&lt;br /&gt;
To illustrate this, consider a repairable system, observed from time &amp;lt;math&amp;gt;t=0\,\!&amp;lt;/math&amp;gt;, as shown in the figure above.  Let the successive failure times be denoted by &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{t}_{2}}\,\!&amp;lt;/math&amp;gt;, ... and let the times between failures be denoted by &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt;, ....  Let &amp;lt;math&amp;gt;RF\,\!&amp;lt;/math&amp;gt; denote the restoration factor; then the age of the system &amp;lt;math&amp;gt;{{v}_{n}}\,\!&amp;lt;/math&amp;gt; at time &amp;lt;math&amp;gt;{{t}_{n}}\,\!&amp;lt;/math&amp;gt; using the two types of restoration factors is:&lt;br /&gt;
&lt;br /&gt;
Type I Restoration Factor:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{v}_{n}}={{v}_{n-1}}+(1-RF){{x}_{n}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Type II Restoration Factor:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{v}_{n}}=(1-RF)({{v}_{n-1}}+{{x}_{n}}) &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
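As a minimal sketch (not BlockSim's internal implementation), the two age recursions can be written as follows, where v_prev is the age after the previous repair, x is the operating time to the current failure, and the RF of 0.25 and times of 500 and 126.024 hours are assumed example values:&lt;br /&gt;

```python
def type1_age(v_prev, x, rf):
    # Kijima model I: the repair reduces only the damage (age) accumulated
    # during the last period of operation
    return v_prev + (1 - rf) * x

def type2_age(v_prev, x, rf):
    # Kijima model II: the repair reduces the entire accumulated age
    return (1 - rf) * (v_prev + x)

v1 = type1_age(0.0, 500.0, 0.25)         # 375.0 (same for both types here)
v2_type1 = type1_age(v1, 126.024, 0.25)  # 469.518
v2_type2 = type2_age(v1, 126.024, 0.25)  # 375.768
print(v1, v2_type1, v2_type2)
```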
===Illustrating Type I RF Through an Example===&lt;br /&gt;
&lt;br /&gt;
Assume that you have a component with a Weibull failure distribution (&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; = 1.5, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; = 1000 hours), RF type I = 0.25 and the component undergoes instant repair.  Furthermore, assume that the component starts life new (i.e., with a start age of zero).  The simulation steps are as follows:&lt;br /&gt;
&lt;br /&gt;
#Generate a uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.7021885\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Then, the first failure event will be at 500 hours.  &amp;lt;br&amp;gt;&lt;br /&gt;
#After instantaneous repair, the component will begin life with an age after repair of 375 hours &amp;lt;math&amp;gt;((1-0.25) \times 500)\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Generate another uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.8824969\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#The next failure event is now determined using the conditional reliability equation, or: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(x+v)= &amp;amp; R(x|v)\cdot R(v) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot R(375) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot 0.7948200 \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.7014261 \\ &lt;br /&gt;
x+375= &amp;amp; 501.024 \\ &lt;br /&gt;
x = &amp;amp; 126.024  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Thus, the next failure event will be at 500 + 126.024 = 626.024 hours.  Note that if the component had been as good as new (i.e., RF = 100%), then the next failure would have been at 750 hours (500 + 250), where 250 hours is the time corresponding to a reliability of 0.8824969, which is the random number that was generated in Step 4.&lt;br /&gt;
&lt;br /&gt;
:6. At this failure point, the item&#039;s age will now be equal to the initial age after the first corrective action plus the additional time it operated, or 375 + 126.024 = 501.024 hours.&lt;br /&gt;
:7. Thus, the age after the second repair will be the sum of the previous age and one minus the restoration factor times the operating time since the last repair, or 375 + ((1-0.25) x 126.024) = 469.518 hours.&lt;br /&gt;
:8. Go to Step 4 and repeat the process.&lt;br /&gt;
&lt;br /&gt;
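The steps above can be reproduced with a short script. This is a sketch under the stated assumptions (2-parameter Weibull failure distribution, instant repair, type I RF of 0.25), not BlockSim itself; the conditional draw inverts the Weibull conditional reliability equation in closed form:&lt;br /&gt;

```python
import math

def cond_time_to_failure(v, u, beta, eta):
    # Solve R(x + v) = u * R(v) for x, where R(t) = exp(-(t/eta)^beta):
    # ((v + x)/eta)^beta = (v/eta)^beta - ln(u)
    return eta * ((v / eta) ** beta - math.log(u)) ** (1.0 / beta) - v

beta, eta, rf = 1.5, 1000.0, 0.25
v = 0.0                                             # starts as good as new
x1 = cond_time_to_failure(v, 0.7021885, beta, eta)  # Step 2: 500 hours
v = v + (1 - rf) * x1                               # Step 3: age 375 hours
x2 = cond_time_to_failure(v, 0.8824969, beta, eta)  # Step 5: ~126.024 hours
v = v + (1 - rf) * x2                               # Step 7: ~469.518 hours
print(x1, x2, v)
```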
===Illustrating Type II RF Through an Example===&lt;br /&gt;
Assume that you have a component with a Weibull failure distribution (&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; = 1.5, &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; = 1000 hours), RF type II = 0.25 and the component undergoes instant repair.  Furthermore, assume that the component starts life new (i.e., with a start age of zero).  The simulation steps are as follows:&lt;br /&gt;
&lt;br /&gt;
#Generate a uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.7021885\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Then, the first failure event will be at 500 hours.  &amp;lt;br&amp;gt;&lt;br /&gt;
#After instantaneous repair, the component will begin life with an age after repair of 375 hours &amp;lt;math&amp;gt;((1-0.25) \times 500)\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#Generate another uniform random number, &amp;lt;math&amp;gt;{{U}_{R}}[0,1] = 0.8824969\,\!&amp;lt;/math&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
#The next failure event is now determined using the conditional reliability equation, or: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(x+v)= &amp;amp; R(x|v)\cdot R(v) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot R(375) \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.8824969\cdot 0.7948200 \\ &lt;br /&gt;
R(x+375)= &amp;amp; 0.7014261 \\ &lt;br /&gt;
x+375= &amp;amp; 501.024 \\ &lt;br /&gt;
x= &amp;amp; 126.024  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
:Thus, the next failure event will be at &amp;lt;math&amp;gt;500+126.024=626.024\,\!&amp;lt;/math&amp;gt; hours.  Note that if the component had been as good as new (i.e., RF = 100%), then the next failure would have been at 750 hours (500 + 250), where 250 hours is the time corresponding to a reliability of 0.8824969, which is the random number that was generated in Step 4.&lt;br /&gt;
&lt;br /&gt;
:6.  At this failure point, the item&#039;s age will now be equal to the initial age after the first corrective action plus the additional time it operated, or &amp;lt;math&amp;gt;375+126.024=501.024\,\!&amp;lt;/math&amp;gt; hours.&lt;br /&gt;
:7.  Thus, the age after the second repair will be one minus the restoration factor times the age of the component at failure, or &amp;lt;math&amp;gt;(1-0.25) \times (375+126.024)=375.768\,\!&amp;lt;/math&amp;gt; hours.&lt;br /&gt;
:8.  Go to Step 4 and repeat the process.&lt;br /&gt;
&lt;br /&gt;
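The same sketch applies here, with only the age update changed to the type II recursion (again a hypothetical illustration under the stated assumptions, with instant repair):&lt;br /&gt;

```python
import math

def cond_time_to_failure(v, u, beta, eta):
    # invert R(x + v) = u * R(v) for the 2-parameter Weibull distribution
    return eta * ((v / eta) ** beta - math.log(u)) ** (1.0 / beta) - v

beta, eta, rf = 1.5, 1000.0, 0.25
v, clock = 0.0, 0.0
for u in (0.7021885, 0.8824969):
    x = cond_time_to_failure(v, u, beta, eta)
    clock = clock + x       # failure events at 500 and ~626.024 hours
    v = (1 - rf) * (v + x)  # type II update: ages 375 and ~375.768 hours
print(clock, v)
```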
===Discussion of Type I and Type II RFs===&lt;br /&gt;
As an application example, consider an automotive engine that fails after six years of operation.  The engine is rebuilt.  The rebuild has the effect of rejuvenating the engine to a condition as if it were three years old (i.e., a 50% RF).  Assume that the rebuild affects all of the damage on the engine (i.e., a Type II restoration).  The engine fails again after three years (when it again reaches an age of six) and another rebuild is required.  This rebuild will also rejuvenate the engine by 50%, thus making it three years old again.&lt;br /&gt;
&lt;br /&gt;
Now consider a similar engine subjected to a similar rebuild, but assume that the rebuild only affects the damage since the last repair (i.e., a Type I restoration of 50%).  The first rebuild will rejuvenate the engine to a three-year-old condition.  The engine will fail again after three years, but this time the rebuild will only affect the three years of operation accumulated since the first rebuild.  Thus the engine will have an age of four and a half years after the second rebuild ( &amp;lt;math&amp;gt;3+(1-0.5) \times 3=4.5\,\!&amp;lt;/math&amp;gt; ).  After the second rebuild the engine will fail again after a period of one and a half years and a third rebuild will be required.  The age of the engine after the third rebuild will be five years and three months ( &amp;lt;math&amp;gt;4.5+(1-0.5) \times 1.5=5.25\,\!&amp;lt;/math&amp;gt; ).&lt;br /&gt;
&lt;br /&gt;
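The engine ages above follow directly from the type I recursion; a quick illustrative check:&lt;br /&gt;

```python
def type1_age(v_prev, x, rf):
    # type I repair: only the damage since the last repair is reduced
    return v_prev + (1 - rf) * x

rf = 0.5
v1 = type1_age(0.0, 6.0, rf)  # first rebuild: 3.0 years
v2 = type1_age(v1, 3.0, rf)   # second rebuild: 4.5 years
v3 = type1_age(v2, 1.5, rf)   # third rebuild: 5.25 years
print(v1, v2, v3)
```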
It should be pointed out that when dealing with constant failure rates (i.e., with a distribution such as the exponential), the restoration factor has no effect.&lt;br /&gt;
&lt;br /&gt;
===Calculations to Obtain RFs===&lt;br /&gt;
The two types of restoration factors discussed in the previous sections can be calculated using the Parametric RDA (recurrent data analysis) tool in Weibull++.  This tool uses the GRP (general renewal process) model to analyze failure data of a repairable item.  More information on the Parametric RDA tool and the GRP model can be found in [[Appendix_B:_References | [25]]].  As an example, consider the times to failure for an air-conditioning unit of an aircraft recorded in the following table.  Assume that each time the unit is repaired, the repair can only remove the damage incurred during the last period of operation.  This assumption implies a type I RF, which is specified as an analysis setting in the Weibull++ folio.  The type I RF for the air-conditioning unit can be calculated using the results from Weibull++ shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.14.png|center|727px|thumb|Using the Parametric RDA tool in Weibull++ to calculate restoration factors.|link=]]&lt;br /&gt;
&lt;br /&gt;
[[Image:8.14t.png|center|300px|link=]]&lt;br /&gt;
 &lt;br /&gt;
The value of the action effectiveness factor &amp;lt;math&amp;gt;q\,\!&amp;lt;/math&amp;gt; obtained from Weibull++ is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
q=0.1344 &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The type I RF is calculated using &amp;lt;math&amp;gt;q\,\!&amp;lt;/math&amp;gt; as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
RF= &amp;amp; 1-q \\ &lt;br /&gt;
= &amp;amp; 1-0.1344 \\ &lt;br /&gt;
= &amp;amp; 0.8656  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
The parameters of the Weibull distribution for the air-conditioning unit can also be calculated. &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is obtained from Weibull++ as 1.1976. &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; can be calculated using the &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; values from Weibull++ as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\eta = &amp;amp; {{\left( \frac{1}{\lambda } \right)}^{\tfrac{1}{\beta }}} \\ &lt;br /&gt;
= &amp;amp; {{\left( \frac{1}{0.0049} \right)}^{\tfrac{1}{1.1976}}} \\ &lt;br /&gt;
= &amp;amp; 84.8582  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
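The arithmetic above can be verified in a few lines, using the q, lambda and beta values reported by Weibull++ for this data set:&lt;br /&gt;

```python
q, lam, beta = 0.1344, 0.0049, 1.1976
rf = 1 - q                     # type I RF: 0.8656
eta = (1 / lam) ** (1 / beta)  # Weibull scale parameter: ~84.8582
print(rf, eta)
```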
The values of the type I RF, &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; calculated above can now be used to model the air-conditioning unit as a component in BlockSim.&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Repairable_Systems_Analysis_Through_Simulation&amp;diff=64912</id>
		<title>Repairable Systems Analysis Through Simulation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Repairable_Systems_Analysis_Through_Simulation&amp;diff=64912"/>
		<updated>2017-02-08T19:11:59Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Probabilistic View, Simple Series */ changed R(T,t) to R(t|T)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:bsbook|7}}&lt;br /&gt;
{{TU}}&lt;br /&gt;
&lt;br /&gt;
Having introduced some of the basic theory and terminology for repairable systems in [[Introduction to Repairable Systems]], we will now examine the steps involved in the analysis of such complex systems.  We will begin by examining system behavior through a sequence of discrete deterministic events and expand the analysis using discrete event simulation.&lt;br /&gt;
&lt;br /&gt;
=Simple Repairs=&lt;br /&gt;
==Deterministic View, Simple Series==&lt;br /&gt;
To first understand how component failures and simple repairs affect the system and to visualize the steps involved, let&#039;s begin with a very simple deterministic example with two components, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, in series.&lt;br /&gt;
&lt;br /&gt;
[[Image:i8.1.png|center|200px|link=]]&lt;br /&gt;
&lt;br /&gt;
Component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; fails every 100 hours and component &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; fails every 120 hours.  Both require 10 hours to get repaired.  Furthermore, assume that the surviving component stops operating when the system fails (thus not aging).  &lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: When a failure occurs in certain systems, some or all of the system&#039;s components may or may not continue to accumulate operating time while the system is down.  For example, consider a transmitter-satellite-receiver system.  This is a series system and the probability of failure for this system is the probability that any of the subsystems fail.  If the receiver fails, the satellite continues to operate even though the receiver is down.  In this case, the continued aging of the components while the system is inoperative &#039;&#039;&#039;must&#039;&#039;&#039; be taken into consideration, since this will affect their failure characteristics and have an impact on the overall system downtime and availability.&lt;br /&gt;
&lt;br /&gt;
The system behavior during an operation from 0 to 300 hours would be as shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS8.1.png|center|500px|Overview of system and components for a simple series system with two components. Component A fails every 100 hours and component B fails every 120 hours. Both require 10 hours to get repaired and do not age (do not operate through failure) when the system is in a failed state.|link=]]&lt;br /&gt;
&lt;br /&gt;
Specifically, component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; would fail at 100 hours, causing the system to fail.  After 10 hours, component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; would be restored and so would the system.  The next event would be the failure of component &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;.  We know that component &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; fails every 120 hours (or after an age of 120 hours).  Since a component does not age while the system is down, component &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; would have reached an age of 120 when the clock reaches 130 hours.  Thus, component &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; would fail at 130 hours and be repaired by 140 and so forth.  Overall in this scenario, the system would be failed for a total of 40 hours due to four downing events (two due to &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and two due to &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; ).  The overall system availability (average or mean availability) would be &amp;lt;math&amp;gt;260/300=0.86667\,\!&amp;lt;/math&amp;gt;.  Point availability is the availability at a specific point in time.  In this deterministic case, the point availability would always be equal to 1 if the system is up at that time and equal to zero if the system is down at that time.&lt;br /&gt;
&lt;br /&gt;
====Operating Through System Failure====&lt;br /&gt;
&lt;br /&gt;
In the prior section we made the assumption that components do not age when the system is down.  This assumption applies to most systems.  However, under special circumstances, a unit may age even while the system is down.  In such cases, the operating profile will be different from the one presented in the prior section.  The figure below illustrates the case where the components operate continuously, regardless of the system status.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS8.2.png|center|500px|Overview of up and down states for a simple series system with two components. Component &#039;&#039;A&#039;&#039; fails every 100 hours and component &#039;&#039;B&#039;&#039; fails every 120 hours. Both require 10 hours to get repaired and age when the system is in a failed state (operate through failure).|link=]]&lt;br /&gt;
&lt;br /&gt;
====Effects of Operating Through Failure====&lt;br /&gt;
&lt;br /&gt;
Consider a component with an increasing failure rate, as shown in the figure below.  If the component continues to operate through system failure, then when the system fails at &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; the surviving component&#039;s failure rate will be &amp;lt;math&amp;gt;{{\lambda }_{1}}\,\!&amp;lt;/math&amp;gt;.  When the system is restored at &amp;lt;math&amp;gt;{{t}_{2}}\,\!&amp;lt;/math&amp;gt;, the component will have aged by &amp;lt;math&amp;gt;{{t}_{2}}-{{t}_{1}}\,\!&amp;lt;/math&amp;gt; and its failure rate will now be &amp;lt;math&amp;gt;{{\lambda }_{2}}\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
If the component does not operate through failure, the surviving component will still be at the same failure rate, &amp;lt;math&amp;gt;{{\lambda }_{1}}\,\!&amp;lt;/math&amp;gt;, when the system resumes operation.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS8.3.png|center|400px|Illustration of a component with a linearly increasing failure rate and the effect of operation through system failure.|link=]]&lt;br /&gt;
&lt;br /&gt;
==Deterministic View, Simple Parallel==&lt;br /&gt;
Consider the following system where &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; fails every 100, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; every 120, &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; every 140 and &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; every 160 time units.  Each takes 10 time units to restore.  Furthermore, assume that components do not age when the system is down.&lt;br /&gt;
&lt;br /&gt;
[[Image:i8.2.png|center|300px|link=]]&lt;br /&gt;
&lt;br /&gt;
A deterministic system view is shown in the figure below.  The sequence of events is as follows:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#At 100, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; fails and is repaired by 110.  The system is failed.  &lt;br /&gt;
#At 130, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; fails and is repaired by 140.  The system continues to operate.&lt;br /&gt;
#At 150, &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; fails and is repaired by 160.  The system continues to operate.&lt;br /&gt;
#At 170, &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; fails and is repaired by 180.  The system is failed.  &lt;br /&gt;
#At 220, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; fails and is repaired by 230.  The system is failed.  &lt;br /&gt;
#At 280, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; fails and is repaired by 290.  The system continues to operate.&lt;br /&gt;
#End at 300.&lt;br /&gt;
 &lt;br /&gt;
[[Image:BS8.4.png|center|500px|Overview of simple redundant system with four components.|link=]]&lt;br /&gt;
&lt;br /&gt;
====Additional Notes====&lt;br /&gt;
&lt;br /&gt;
It should be noted that we are dealing with these events deterministically in order to better illustrate the methodology.  When dealing with deterministic events, it is possible to create a sequence of events that one would not expect to encounter probabilistically.  One such example consists of two units in series that do not operate through failure but both fail at exactly 100, which is highly unlikely in a real-world scenario.  In this case, the assumption is that one of the events must occur at least an infinitesimal amount of time (&amp;lt;math&amp;gt;dt\,\!&amp;lt;/math&amp;gt;) before the other.  Probabilistically, this event is extremely rare, since both randomly generated times would have to be exactly equal to each other, to 15 decimal places.  In the rare event that this happens, BlockSim would pick the unit with the lowest ID value as the first failure.  BlockSim assigns a unique numerical ID when each component is created.  These can be viewed by selecting the &#039;&#039;&#039;Show Block ID&#039;&#039;&#039; option in the Diagram Options window.&lt;br /&gt;
&lt;br /&gt;
==Deterministic Views of More Complex Systems==&lt;br /&gt;
&lt;br /&gt;
Even though the examples presented are fairly simplistic, the same approach can be repeated for larger and more complex systems.  The reader can easily observe/visualize the behavior of more complex systems in BlockSim using the Up/Down plots.  These are the same plots used in this chapter.  It should be noted that BlockSim makes these plots available only when a single simulation run has been performed for the analysis (i.e., Number of Simulations = 1).  These plots are meaningless when doing multiple simulations because each run will yield a different plot.&lt;br /&gt;
&lt;br /&gt;
==Probabilistic View, Simple Series==&lt;br /&gt;
&lt;br /&gt;
In a probabilistic case, the failures and repairs do not happen at a fixed time and for a fixed duration, but rather occur randomly, based on an underlying distribution, as shown in the following figures.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.5.png|center|600px| A single component with a probabilistic failure time and repair duration.|link=]]&lt;br /&gt;
[[Image:BS8.6.png|center|500px|A system up/down plot illustrating a probabilistic failure time and repair duration for component B.|link=]]&lt;br /&gt;
 &lt;br /&gt;
We use discrete event simulation in order to analyze (understand) the system behavior.  Discrete event simulation looks at each system/component event very similarly to the way we looked at these events in the deterministic example.  However, instead of using deterministic (fixed) times for each event occurrence or duration, random times are used.  These random times are obtained from the underlying distribution for each event.  As an example, consider an event following a 2-parameter Weibull distribution.  The &#039;&#039;cdf&#039;&#039; of the 2-parameter Weibull distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;F(T)=1-{{e}^{-{{\left( \tfrac{T}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Weibull reliability function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(T)= &amp;amp; 1-F(T) \\ &lt;br /&gt;
= &amp;amp; {{e}^{-{{\left( \tfrac{T}{\eta } \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, to generate a random time from a Weibull distribution with a given &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;, a uniform random number from 0 to 1, &amp;lt;math&amp;gt;{{U}_{R}}[0,1]\,\!&amp;lt;/math&amp;gt;, is first obtained.   The random time from a Weibull distribution is then obtained from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{T}_{R}}=\eta \cdot {{\left\{ -\ln \left[ {{U}_{R}}[0,1] \right] \right\}}^{\tfrac{1}{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To obtain a conditional time, the Weibull conditional reliability function is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t|T)=\frac{R(T+t)}{R(T)}=\frac{{{e}^{-{{\left( \tfrac{T+t}{\eta } \right)}^{\beta }}}}}{{{e}^{-{{\left( \tfrac{T}{\eta } \right)}^{\beta }}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t|T)={{e}^{-\left[ {{\left( \tfrac{T+t}{\eta } \right)}^{\beta }}-{{\left( \tfrac{T}{\eta } \right)}^{\beta }} \right]}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The random time would be the solution for &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;R(t|T)={{U}_{R}}[0,1]\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
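Both draws have closed-form solutions, sketched below (this is an illustration, not BlockSim's internal code; the argument values are arbitrary example inputs):&lt;br /&gt;

```python
import math

def weibull_draw(u, beta, eta):
    # unconditional random time: solve R(T) = u, i.e., T = eta*(-ln u)^(1/beta)
    return eta * (-math.log(u)) ** (1.0 / beta)

def weibull_cond_draw(u, T, beta, eta):
    # conditional random time from current age T: solve R(t|T) = u for t,
    # i.e., ((T + t)/eta)^beta = (T/eta)^beta - ln(u)
    return eta * ((T / eta) ** beta - math.log(u)) ** (1.0 / beta) - T

print(weibull_draw(0.7021885, 1.5, 1000.0))              # about 500
print(weibull_cond_draw(0.8824969, 375.0, 1.5, 1000.0))  # about 126
```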
To illustrate the sequence of events, assume a single block with a failure and a repair distribution.  The first event, &amp;lt;math&amp;gt;{{E}_{{{F}_{1}}}}\,\!&amp;lt;/math&amp;gt;, would be the failure of the component.  Its first time-to-failure would be a random number drawn from its failure distribution, &amp;lt;math&amp;gt;{{T}_{{{F}_{1}}}}\,\!&amp;lt;/math&amp;gt;.  Thus, the first failure event, &amp;lt;math&amp;gt;{{E}_{{{F}_{1}}}}\,\!&amp;lt;/math&amp;gt;, would be at &amp;lt;math&amp;gt;{{T}_{{{F}_{1}}}}\,\!&amp;lt;/math&amp;gt;.  Once failed, the next event would be the repair of the component, &amp;lt;math&amp;gt;{{E}_{{{R}_{1}}}}\,\!&amp;lt;/math&amp;gt;.  The time to repair the component would now be drawn from its repair distribution, &amp;lt;math&amp;gt;{{T}_{{{R}_{1}}}}\,\!&amp;lt;/math&amp;gt;.  The component would be restored by time &amp;lt;math&amp;gt;{{T}_{{{F}_{1}}}}+{{T}_{{{R}_{1}}}}\,\!&amp;lt;/math&amp;gt;.  The next event would now be the second failure of the component after the repair, &amp;lt;math&amp;gt;{{E}_{{{F}_{2}}}}\,\!&amp;lt;/math&amp;gt;.  This event would occur after a component operating time of &amp;lt;math&amp;gt;{{T}_{{{F}_{2}}}}\,\!&amp;lt;/math&amp;gt; after the item is restored (again drawn from the failure distribution), or at &amp;lt;math&amp;gt;{{T}_{{{F}_{1}}}}+{{T}_{{{R}_{1}}}}+{{T}_{{{F}_{2}}}}\,\!&amp;lt;/math&amp;gt;.  This process is repeated until the end time.  It is important to note that each run will yield a different sequence of events due to the probabilistic nature of the times.  To arrive at the desired result, this process is repeated many times and the results from each run (simulation) are recorded.  
In other words, if we were to repeat this 1,000 times, we would obtain 1,000 different values for &amp;lt;math&amp;gt;{{E}_{{{F}_{1}}}}\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;\left[ {{E}_{{{F}_{{{1}_{1}}}}}},{{E}_{{{F}_{{{1}_{2}}}}}},...,{{E}_{{{F}_{{{1}_{1,000}}}}}} \right]\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
The average of these values, &amp;lt;math&amp;gt;\left( \tfrac{1}{1000}\underset{i=1}{\overset{1,000}{\mathop{\sum }}}\,{{E}_{{{F}_{{{1}_{i}}}}}} \right)\,\!&amp;lt;/math&amp;gt;, would then be the average time to the first event, &amp;lt;math&amp;gt;{{E}_{{{F}_{1}}}}\,\!&amp;lt;/math&amp;gt;, or the mean time to first failure (MTTFF) for the component.  Obviously, if the component were to be 100% renewed after each repair, then this value would also be the same for the second failure, etc.&lt;br /&gt;
&lt;br /&gt;
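For a component that is as good as new after each repair, the MTTFF estimate described above reduces to averaging draws from the failure distribution. A hypothetical sketch, with Weibull parameters chosen only for illustration and a fixed seed so the run is repeatable:&lt;br /&gt;

```python
import math
import random

def weibull_draw(beta, eta):
    # inverse-CDF draw from a 2-parameter Weibull distribution
    return eta * (-math.log(random.random())) ** (1.0 / beta)

random.seed(1)
n = 1000
draws = [weibull_draw(1.5, 1000.0) for _ in range(n)]
mttff = sum(draws) / n  # close to eta*Gamma(1 + 1/beta), roughly 903 hours
print(mttff)
```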
=General Simulation Results=&lt;br /&gt;
To further illustrate this, assume that components A and B in the prior example had normal failure and repair distributions, with means equal to the deterministic values used in that example, a standard deviation of 10 for the failure distributions and a standard deviation of 1 for the repair distributions.  That is, &amp;lt;math&amp;gt;{{F}_{A}}\tilde{\ }N(100,10),\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{F}_{B}}\tilde{\ }N(120,10),\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{R}_{A}}={{R}_{B}}\tilde{\ }N(10,1)\,\!&amp;lt;/math&amp;gt;.  The settings for components C and D are not changed.  Obviously, given the probabilistic nature of the example, the times to each event will vary.  If one were to repeat this &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; number of times, one would arrive at the results of interest for the system and its components.  Some of the results for this system and this example, over 1,000 simulations, are provided in the figure below and explained in the next sections. &lt;br /&gt;
[[Image:r2.png|center|600px|Summary of system results for 1,000 simulations.|link=]]&lt;br /&gt;
&lt;br /&gt;
The simulation settings are shown in the figure below.&lt;br /&gt;
[[Image:8.7.gif|center|600px|BlockSim simulation window.|link=]]&lt;br /&gt;
&lt;br /&gt;
===General===&lt;br /&gt;
====Mean Availability (All Events), &amp;lt;math&amp;gt;{{\overline{A}}_{ALL}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
This is the mean availability due to all downing events, which can be thought of as the operational availability.  It is the ratio of the system uptime to the total simulation time (total time).  For this example: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{\overline{A}}_{ALL}}= &amp;amp; \frac{Uptime}{TotalTime} \\ &lt;br /&gt;
= &amp;amp; \frac{269.137}{300} \\ &lt;br /&gt;
= &amp;amp; 0.8971  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
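The computation is a simple ratio; here with this example's simulated uptime:&lt;br /&gt;

```python
uptime, total_time = 269.137, 300.0
mean_availability = uptime / total_time
print(round(mean_availability, 4))  # 0.8971
```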
====Std Deviation (Mean Availability)====&lt;br /&gt;
This is the standard deviation of the mean availability of all downing events for the system during the simulation.&lt;br /&gt;
&lt;br /&gt;
====Mean Availability (w/o PM, OC &amp;amp; Inspection), &amp;lt;math&amp;gt;{{\overline{A}}_{CM}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
This is the mean availability due to failure events only and it is 0.8971 for this example.  Note that for this case, the mean availability without preventive maintenance, on condition maintenance and inspection is identical to the mean availability for all events.  This is because no preventive maintenance actions or inspections were defined for this system.  We will discuss the inclusion of these actions in later sections.&lt;br /&gt;
&lt;br /&gt;
Downtimes caused by PM and inspections are not included.  However, if the PM or inspection action results in the discovery of a failure, then these times are included.  As an example, consider a component that has failed but its failure is not discovered until the component is inspected.  Then the downtime from the time failed to the time restored after the inspection is counted as failure downtime, since the original event that caused this was the component&#039;s failure.  &lt;br /&gt;
====Point Availability (All Events), &amp;lt;math&amp;gt;A\left( t \right)\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
This is the probability that the system is up at time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;.  As an example, to obtain this value at &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; = 300, a special counter would need to be used during the simulation.  This counter is increased by one every time the system is up at 300 hours.  Thus, the point availability at 300 would be the number of times the system was up at 300 divided by the number of simulations.  For this example, this is 0.930; that is, the system was up at 300 hours in 930 out of the 1,000 simulations.&lt;br /&gt;
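&lt;br /&gt;
The counter logic described above can be sketched in a few lines of Python.  This is an illustrative sketch only, not BlockSim's implementation; the &lt;code&gt;system_up_at&lt;/code&gt; callback is a hypothetical stand-in for querying one simulated system history at time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;

```python
import random

def simulate_point_availability(n_sims, t, system_up_at, seed=0):
    """Estimate A(t) as the fraction of simulations in which the
    system is up at time t (the special counter described above)."""
    rng = random.Random(seed)
    up_count = 0
    for _ in range(n_sims):
        if system_up_at(t, rng):   # one simulated history queried at time t
            up_count += 1
    return up_count / n_sims

# Toy stand-in for a history: up at t with probability 0.93, mimicking
# the example's A(300) = 0.930 over 1,000 simulations.
estimate = simulate_point_availability(1000, 300, lambda t, rng: rng.random() < 0.93)
```

With 1,000 simulations the estimate lands near the underlying probability, just as the example's counter returns 930/1000.&lt;br /&gt;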
&lt;br /&gt;
====Reliability (Fail Events), &amp;lt;math&amp;gt;R(t)\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
This is the probability that the system has not failed by time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;.  This is similar to point availability with the major exception that it only looks at the probability that the system did not have a single failure.  Other (non-failure) downing events are ignored.  During the simulation, a special counter again must be used.  This counter is increased by one (once in each simulation) if the system has had at least one failure up to 300 hours.  Thus, the reliability at 300 would be the number of times the system did not fail up to 300 divided by the number of simulations.  For this example, this is 0 because the system failed prior to 300 hours 1000 times out of the 1000 simulations.&lt;br /&gt;
&lt;br /&gt;
It is very important to note that this value is not always the same as the reliability computed using the analytical methods, depending on the redundancy present.  The reason that it may differ is best explained by the following scenario:&lt;br /&gt;
&lt;br /&gt;
Assume two units in parallel.  The analytical system reliability, which does not account for repairs, is the probability that both units fail.  In this case, when one unit goes down, it does not get repaired and the system fails after the second unit fails.  In the case of repairs, however, it is possible for one of the two units to fail and get repaired before the second unit fails.  Thus, when the second unit fails, the system will still be up due to the fact that the first unit was repaired.&lt;br /&gt;
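&lt;br /&gt;
The scenario above can be checked with a small Monte Carlo sketch.  This is an illustration of the principle only, under assumed exponential failure times and a fixed repair duration (not the chapter's normal distributions, and not BlockSim's engine): with repair allowed, the two-unit parallel system survives noticeably more often.&lt;br /&gt;

```python
import random

def unit_down_intervals(t_end, mttf, repair, allow_repair, rng):
    """Down intervals of one unit up to t_end.  Exponential failure
    times and a fixed repair duration are illustrative assumptions."""
    intervals, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / mttf)         # operating time to next failure
        if t >= t_end:
            return intervals
        if allow_repair:
            intervals.append((t, t + repair))    # down only while under repair
            t += repair
        else:
            intervals.append((t, float("inf")))  # never restored
            return intervals

def reliability(t_end, mttf, repair, allow_repair, n_sims, seed=1):
    """Fraction of histories in which the two-unit parallel system never
    had both units down at the same time before t_end."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_sims):
        d1 = unit_down_intervals(t_end, mttf, repair, allow_repair, rng)
        d2 = unit_down_intervals(t_end, mttf, repair, allow_repair, rng)
        both_down = any(a < d and c < b for a, b in d1 for c, d in d2)
        survived += 0 if both_down else 1
    return survived / n_sims

r_no_repair = reliability(300, 100, 10, False, 2000)
r_with_repair = reliability(300, 100, 10, True, 2000)
```

Without repair, the system fails whenever both units eventually fail; with repair, it fails only if the second unit happens to fail during the first unit's repair window, so the simulated reliability is higher.&lt;br /&gt;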
&lt;br /&gt;
====Expected Number of Failures, &amp;lt;math&amp;gt;{{N}_{F}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
This is the average number of system failures.  The system failures (not downing events) for all simulations are counted and then averaged.  For this case, this is 3.188, which implies that a total of 3,188 system failure events occurred over 1000 simulations.  Thus, the expected number of system failures for one run is 3.188.  This number includes all failures, even those that may have a duration of zero.&lt;br /&gt;
&lt;br /&gt;
====Std Deviation (Number of Failures)====&lt;br /&gt;
This is the standard deviation of the number of failures for the system during the simulation.&lt;br /&gt;
&lt;br /&gt;
====MTTFF====&lt;br /&gt;
MTTFF is the mean time to first failure for the system.  This is computed by keeping track of the time at which the first system failure occurred for each simulation.  MTTFF is then the average of these times.  This may or may not be identical to the MTTF obtained in the analytical solution for the same reasons as those discussed in the Point Reliability section.  For this case, this is 100.2511.  This is fairly obvious for this case since the mean of one of the components in series was 100 hours.&lt;br /&gt;
&lt;br /&gt;
It is important to note that for each simulation run, if a first failure time is observed, then this is recorded as the system time to first failure.  If no failure is observed in the system, then the simulation end time is used as a right censored (suspended) data point.  MTTFF is then computed using the total operating time until the first failure divided by the number of observed failures (constant failure rate assumption).  Furthermore, if the simulation end time is much less than the time to first failure for the system, it is also possible that all data points are right censored (i.e., no system failures were observed).  In this case, the MTTFF is again computed using a constant failure rate assumption, or:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MTTFF=\frac{2\cdot ({{T}_{S}})\cdot N}{\chi _{0.50;2}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{T}_{S}}\,\!&amp;lt;/math&amp;gt; is the simulation end time and &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the number of simulations.  One should be aware that this formulation may yield unrealistic (or erroneous) results if the system does not have a constant failure rate.   If you are trying to obtain an accurate (realistic) estimate of this value, then your simulation end time should be set to a value that is well beyond the MTTF of the system (as computed analytically).  As a general rule, the simulation end time should be at least three times larger than the MTTF of the system.&lt;br /&gt;
&lt;br /&gt;
====MTBF (Total Time)====&lt;br /&gt;
This is the mean time between failures for the system based on the total simulation time and the expected number of system failures. For this example:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
MTBF (Total Time)= &amp;amp; \frac{TotalTime}{{N}_{F}} \\ &lt;br /&gt;
= &amp;amp; \frac{300}{3.188} \\ &lt;br /&gt;
= &amp;amp; 94.102886  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====MTBF (Uptime)====&lt;br /&gt;
This is the mean time between failures for the system, considering only the time that the system was up. This is calculated by dividing system uptime by the expected number of system failures. You can also think of this as the mean uptime. For this example:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
MTBF (Uptime)= &amp;amp; \frac{Uptime}{{N}_{F}} \\ &lt;br /&gt;
= &amp;amp; \frac{269.136952}{3.188} \\ &lt;br /&gt;
= &amp;amp; 84.42188  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
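&lt;br /&gt;
Both MTBF figures follow from the same one-line ratio, differing only in the time basis.  A trivial sketch using the example's own numbers:&lt;br /&gt;

```python
def mtbf(time_basis, expected_failures):
    """MTBF = chosen time basis (total time or uptime) / expected failures."""
    return time_basis / expected_failures

mtbf_total = mtbf(300.0, 3.188)        # total simulation time basis
mtbf_uptime = mtbf(269.136952, 3.188)  # uptime basis
```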
&lt;br /&gt;
====MTBE (Total Time)====&lt;br /&gt;
This is the mean time between all downing events for the system, based on the total simulation time and including all system downing events. This is calculated by dividing the simulation run time by the number of downing events (&amp;lt;math&amp;gt;{{N}_{AL{{L}_{Down}}}}\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
====MTBE (Uptime)====&lt;br /&gt;
This is the mean time between all downing events for the system, considering only the time that the system was up. This is calculated by dividing system uptime by the number of downing events (&amp;lt;math&amp;gt;{{N}_{AL{{L}_{Down}}}}\,\!&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
===System Uptime/Downtime===&lt;br /&gt;
&lt;br /&gt;
====Uptime, &amp;lt;math&amp;gt;{{T}_{UP}}\,\!&amp;lt;/math&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
This is the average time the system was up and operating.  This is obtained by taking the sum of the uptimes for each simulation and dividing it by the number of simulations.  For this example, the uptime is 269.137.  To compute the Operational Availability, &amp;lt;math&amp;gt;{{A}_{o}},\,\!&amp;lt;/math&amp;gt; for this system, then:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{A}_{o}}=\frac{{{T}_{UP}}}{{{T}_{S}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====CM Downtime, &amp;lt;math&amp;gt;{{T}_{C{{M}_{Down}}}}\,\!&amp;lt;/math&amp;gt; ====&lt;br /&gt;
This is the average time the system was down for corrective maintenance actions (CM) only.  This is obtained by taking the sum of the CM downtimes for each simulation and dividing it by the number of simulations.  For this example, this is 30.863.&lt;br /&gt;
To compute the Inherent Availability, &amp;lt;math&amp;gt;{{A}_{I}},\,\!&amp;lt;/math&amp;gt; for this system over the observed time (which may or may not be steady state, depending on the length of the simulation), then:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{A}_{I}}=\frac{{{T}_{S}}-{{T}_{C{{M}_{Down}}}}}{{{T}_{S}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Inspection Downtime ====&lt;br /&gt;
&lt;br /&gt;
This is the average time the system was down due to inspections.  This is obtained by taking the sum of the inspection downtimes for each simulation and dividing it by the number of simulations.  For this example, this is zero because no inspections were defined.&lt;br /&gt;
&lt;br /&gt;
====PM Downtime, &amp;lt;math&amp;gt;{{T}_{P{{M}_{Down}}}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
This is the average time the system was down due to preventive maintenance (PM) actions.  This is obtained by taking the sum of the PM downtimes for each simulation and dividing it by the number of simulations.  For this example, this is zero because no PM actions were defined.&lt;br /&gt;
&lt;br /&gt;
====OC Downtime, &amp;lt;math&amp;gt;{{T}_{O{{C}_{Down}}}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
This is the average time the system was down due to on-condition maintenance (OC) actions.  This is obtained by taking the sum of the OC downtimes for each simulation and dividing it by the number of simulations.  For this example, this is zero because no OC actions were defined.&lt;br /&gt;
&lt;br /&gt;
====Waiting Downtime, &amp;lt;math&amp;gt;{{T}_{W{{ait}_{Down}}}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
This is the amount of time that the system was down due to crew and spare part wait times or crew conflict times. For this example, this is zero because no crews or spare part pools were defined.&lt;br /&gt;
&lt;br /&gt;
====Total Downtime, &amp;lt;math&amp;gt;{{T}_{Down}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
This is the downtime due to all events. In general, one may look at this as the sum of the above downtimes. However, this is not always the case. It is possible to have actions that overlap each other, depending on the options and settings for the simulation. Furthermore, there are other events that can cause the system to go down that do not get counted in any of the above categories. As an example, in the case of standby redundancy with a switch delay, if the settings are to reactivate the failed component after repair, the system may be down during the switch-back action. This downtime does not fall into any of the above categories but it is counted in the total downtime.&lt;br /&gt;
&lt;br /&gt;
For this example, this is identical to &amp;lt;math&amp;gt;{{T}_{C{{M}_{Down}}}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===System Downing Events===&lt;br /&gt;
System downing events are events associated with downtime.  Note that events with zero duration will appear in this section only if the task properties specify that the task brings the system down or if the task properties specify that the task brings the item down and the item’s failure brings the system down.&lt;br /&gt;
&lt;br /&gt;
====Number of Failures, &amp;lt;math&amp;gt;{{N}_{{{F}_{Down}}}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
This is the average number of system downing failures.  Unlike the Expected Number of Failures, &amp;lt;math&amp;gt;{{N}_{F}},\,\!&amp;lt;/math&amp;gt; this number does not include failures with zero duration.  For this example, this is 3.188.  &lt;br /&gt;
&lt;br /&gt;
====Number of CMs, &amp;lt;math&amp;gt;{{N}_{C{{M}_{Down}}}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
This is the number of corrective maintenance actions that caused the system to fail.  It is obtained by taking the sum of all CM actions that caused the system to fail divided by the number of simulations.  It does not include CM events of zero duration.  For this example, this is 3.188.  Note that this may differ from the Number of Failures, &amp;lt;math&amp;gt;{{N}_{{{F}_{Down}}}}\,\!&amp;lt;/math&amp;gt;.  An example would be a case where the system has failed, but due to other settings for the simulation, a CM is not initiated (e.g., an inspection is needed to initiate a CM).&lt;br /&gt;
&lt;br /&gt;
====Number of Inspections, &amp;lt;math&amp;gt;{{N}_{{{I}_{Down}}}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
This is the number of inspection actions that caused the system to fail.  It is obtained by taking the sum of all inspection actions that caused the system to fail divided by the number of simulations.  It does not include inspection events of zero duration.  For this example, this is zero.&lt;br /&gt;
&lt;br /&gt;
====Number of PMs, &amp;lt;math&amp;gt;{{N}_{P{{M}_{Down}}}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
This is the number of PM actions that caused the system to fail.  It is obtained by taking the sum of all PM actions that caused the system to fail divided by the number of simulations.  It does not include PM events of zero duration.  For this example, this is zero.&lt;br /&gt;
&lt;br /&gt;
====Number of OCs, &amp;lt;math&amp;gt;{{N}_{O{{C}_{Down}}}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
This is the number of OC actions that caused the system to fail.  It is obtained by taking the sum of all OC actions that caused the system to fail divided by the number of simulations.  It does not include OC events of zero duration.  For this example, this is zero.&lt;br /&gt;
&lt;br /&gt;
====Number of OFF Events by Trigger, &amp;lt;math&amp;gt;{{N}_{O{{FF}_{Down}}}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
This is the total number of events where the system is turned off by state change triggers. An OFF event is not a system failure but it may be included in system reliability calculations. For this example, this is zero.&lt;br /&gt;
&lt;br /&gt;
====Total Events, &amp;lt;math&amp;gt;{{N}_{AL{{L}_{Down}}}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
This is the total number of system downing events.  It also does not include events of zero duration.  It is possible that this number may differ from the sum of the other listed events.  As an example, consider the case where a failure does not get repaired until an inspection, but the inspection occurs after the simulation end time.  In this case, the number of inspections, CMs and PMs will be zero while the number of total events will be one.&lt;br /&gt;
&lt;br /&gt;
===Costs and Throughput===&lt;br /&gt;
Cost and throughput results are discussed in later sections.&lt;br /&gt;
&lt;br /&gt;
===Note About Overlapping Downing Events===&lt;br /&gt;
&lt;br /&gt;
It is important to note that two identical system downing events (that are continuous or overlapping) may be counted and viewed differently.  As shown in Case 1 of the following figure, two overlapping failure events are counted as only one event from the system perspective because the system was never restored and remained in the same down state, even though that state was caused by two different components.  Thus, the number of downing events in this case is one and the duration is as shown in CM system.  In the case that the events are different, as shown in Case 2 of the figure below, two events are counted, the CM and the PM.  However, the downtime attributed to each event is different from the actual time of each event.  In this case, the system was first down due to a CM and remained in a down state due to the CM until that action was over.  However, immediately upon completion of that action, the system remained down but now due to a PM action.  In this case, only the PM action portion that kept the system down is counted.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.9.png|center|350px|Duration and count of different overlapping events.|link=]]&lt;br /&gt;
&lt;br /&gt;
===System Point Results===&lt;br /&gt;
&lt;br /&gt;
The system point results, as shown in the figure below, show the Point Availability (All Events), &amp;lt;math&amp;gt;A\left( t \right)\,\!&amp;lt;/math&amp;gt;, and Point Reliability, &amp;lt;math&amp;gt;R(t)\,\!&amp;lt;/math&amp;gt;, as defined in the previous section.  These are computed and returned at different points in time, based on the number of intervals selected by the user.  Additionally, this window shows &amp;lt;math&amp;gt;(1-A(t))\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;(1-R(t))\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\text{Labor Cost(t)}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\text{Part Cost(t)}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Cost(t)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Mean\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A(t)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Mean\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A({{t}_{i}}-{{t}_{i-1}})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\text{System Failures}(t)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\text{System Off Events by Trigger(t)}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Throughput(t)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS8.10.png|center|750px|link=]]&lt;br /&gt;
The number of intervals shown is based on the increments set. In this figure, the number of increments set was 300, which implies that the results should be shown every hour. The results shown in this figure are for 10 increments, or shown every 30 hours.&lt;br /&gt;
&lt;br /&gt;
=Results by Component=&lt;br /&gt;
Simulation results for each component can also be viewed.  The figure below shows the results for component A.  These results are explained in the sections that follow.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.11.gif|center|600px|The Block Details results for component A.|link=]]&lt;br /&gt;
&lt;br /&gt;
===General Information===&lt;br /&gt;
====Number of Block Downing Events, &amp;lt;math&amp;gt;Componen{{t}_{NDE}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
This is the number of times the component went down (failed).  It includes all downing events.&lt;br /&gt;
&lt;br /&gt;
====Number of System Downing Events, &amp;lt;math&amp;gt;Componen{{t}_{NSDE}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
This is the number of times that this component&#039;s downing caused the system to be down.  For component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, this is 2.038.  Note that in this case this value is the same as the number of component failures, since component A is reliability-wise in series with component D and with components B and C.  If this were not the case (e.g., if A were in a parallel configuration, like B and C), this value would be different.&lt;br /&gt;
&lt;br /&gt;
====Number of Failures, &amp;lt;math&amp;gt;Componen{{t}_{NF}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
This is the number of times the component failed and does not include other downing events.  Note that this could also be interpreted as the number of spare parts required for CM actions for this component.  For component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, this is 2.038.&lt;br /&gt;
&lt;br /&gt;
====Number of System Downing Failures, &amp;lt;math&amp;gt;Componen{{t}_{NSDF}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
This is the number of times that this component&#039;s failure caused the system to be down.  Note that this may be different from the Number of System Downing Events.  It only counts the failure events that downed the system and does not include zero duration system failures.&lt;br /&gt;
&lt;br /&gt;
====Number of OFF events by Trigger, &amp;lt;math&amp;gt;Componen{{t}_{OFF}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
The total number of events where the block is turned off by state change triggers. An OFF event is not a failure but it may be included in system reliability calculations.&lt;br /&gt;
&lt;br /&gt;
====Mean Availability (All Events), &amp;lt;math&amp;gt;{{\overline{A}}_{AL{{L}_{Component}}}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
This has the same definition as for the system with the exception that this accounts only for the component.&lt;br /&gt;
&lt;br /&gt;
====Mean Availability (w/o PM, OC &amp;amp; Inspection), &amp;lt;math&amp;gt;{{\overline{A}}_{C{{M}_{Component}}}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The mean availability of all downing events for the block, not including preventive, on condition or inspection tasks, during the simulation.&lt;br /&gt;
&lt;br /&gt;
====Block Uptime, &amp;lt;math&amp;gt;{{T}_{Componen{{t}_{UP}}}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
This is the total amount of time that the block was up (i.e., operational) during the simulation.  For component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, this is 279.8212.&lt;br /&gt;
&lt;br /&gt;
====Block Downtime, &amp;lt;math&amp;gt;{{T}_{Componen{{t}_{Down}}}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
This is the average amount of time that the block was down (i.e., not operational) for any reason during the simulation.  For component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, this is 20.1788.&lt;br /&gt;
&lt;br /&gt;
===Metrics===&lt;br /&gt;
====RS DECI====&lt;br /&gt;
&lt;br /&gt;
The ReliaSoft Downing Event Criticality Index for the block. This is a relative index showing the percentage of times that a downing event of the block caused the system to go down (i.e., the number of system downing events caused by the block divided by the total number of system downing events).  For component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, this is 63.93%.  This implies that 63.93% of the times that the system went down, the system failure was due to the fact that component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; went down.  This is obtained from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
RSDECI=\frac{Componen{{t}_{NSDE}}}{{{N}_{AL{{L}_{Down}}}}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Mean Time Between Downing Events====&lt;br /&gt;
This is the mean time between downing events of the component, which is computed from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MTBDE=\frac{{{T}_{Componen{{t}_{UP}}}}}{Componen{{t}_{NDE}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, this is 137.3019.&lt;br /&gt;
&lt;br /&gt;
====RS FCI====&lt;br /&gt;
ReliaSoft&#039;s Failure Criticality Index (RS FCI) is a relative index showing the percentage of times that a failure of this component caused a system failure.  For component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, this is 63.93%.  This implies that 63.93% of the times that the system failed, it was due to the fact that component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; failed.  This is obtained from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
RSFCI=\frac{Componen{{t}_{NSDF}}+{{F}_{ZD}}}{{{N}_{F}}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;{{F}_{ZD}}\,\!&amp;lt;/math&amp;gt; is a special counter of system failures not included in &amp;lt;math&amp;gt;Componen{{t}_{NSDF}}\,\!&amp;lt;/math&amp;gt;.  This counter is not explicitly shown in the results but is maintained by the software.  The reason for this counter is the fact that zero duration failures are not counted in &amp;lt;math&amp;gt;Componen{{t}_{NSDF}}\,\!&amp;lt;/math&amp;gt; since they really did not down the system.  However, these zero duration failures need to be included when computing RS FCI.&lt;br /&gt;
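&lt;br /&gt;
The two criticality indices above reduce to simple ratios of the counters already defined.  A minimal sketch using the example's numbers, assuming &amp;lt;math&amp;gt;{{F}_{ZD}}=0\,\!&amp;lt;/math&amp;gt; for this example (consistent with RS DECI and RS FCI both being 63.93% here):&lt;br /&gt;

```python
def rs_deci(component_nsde, n_all_down):
    """Fraction of system downing events credited to this block."""
    return component_nsde / n_all_down

def rs_fci(component_nsdf, f_zd, n_f):
    """Fraction of system failures credited to this block's failures;
    f_zd adds back zero-duration failures not counted in component_nsdf."""
    return (component_nsdf + f_zd) / n_f

deci_a = rs_deci(2.038, 3.188)     # component A's share of downing events
fci_a = rs_fci(2.038, 0.0, 3.188)  # assumed F_ZD = 0 for this example
```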
&lt;br /&gt;
It is important to note that for both RS DECI and RS FCI, if overlapping events are present, the component that caused the system event gets credited with the system event.  Subsequent component events that do not bring the system down (since the system is already down) do not get counted in these metrics.&lt;br /&gt;
&lt;br /&gt;
====MTBF, &amp;lt;math&amp;gt;MTB{{F}_{C}}\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
Mean time between failures is the mean (average) time between failures of this component, in real clock time.  This is computed from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MTB{{F}_{C}}=\frac{{{T}_{S}}-CFDowntime}{Componen{{t}_{NF}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;CFDowntime\,\!&amp;lt;/math&amp;gt; is the downtime of the component due to failures only (without PM, OC and inspection).  The discussion regarding what is a failure downtime that was presented in the section explaining Mean Availability (w/o PM &amp;amp; Inspection) also applies here.&lt;br /&gt;
For component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, this is 137.3019.  Note that this value could fluctuate for the same component depending on the simulation end time.  As an example, consider the deterministic scenario for this component.  It fails every 100 hours and takes 10 hours to repair.  Thus, it would be failed at 100, repaired by 110, failed at 210 and repaired by 220.  Therefore, its uptime is 280 with two failure events, MTBF = 280/2 = 140.  Repeating the same scenario with an end time of 330 would yield failures at 100, 210 and 320.  Thus, the uptime would be 300 with three failures, or MTBF = 300/3 = 100.  Note that this is not the same as the MTTF (mean time to failure), commonly referred to as MTBF by many practitioners.  &lt;br /&gt;
&lt;br /&gt;
====Mean Downtime per Event, &amp;lt;math&amp;gt;MDPE\,\!&amp;lt;/math&amp;gt;====&lt;br /&gt;
Mean downtime per event is the average downtime for a component event.  This is computed from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MDPE=\frac{{{T}_{Componen{{t}_{Down}}}}}{Componen{{t}_{NDE}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====RS DTCI====&lt;br /&gt;
The ReliaSoft Downtime Criticality Index for the block. This is a relative index showing the contribution of the block to the system’s downtime (i.e., the system downtime caused by the block divided by the total system downtime).&lt;br /&gt;
&lt;br /&gt;
====RS BCCI====&lt;br /&gt;
The ReliaSoft Block Cost Criticality Index for the block. This is a relative index showing the contribution of the block to the total costs (i.e., the total block costs divided by the total costs).&lt;br /&gt;
&lt;br /&gt;
====Non-Waiting Time CI====&lt;br /&gt;
A relative index showing the contribution of repair times to the block’s total downtime. (The ratio of the time that the crew is actively working on the item to the total down time). &lt;br /&gt;
&lt;br /&gt;
====Total Waiting Time CI====&lt;br /&gt;
A relative index showing the contribution of wait factor times to the block’s total downtime. Wait factors include crew conflict times, crew wait times and spare part wait times. (The ratio of the downtime not including active repair time to the total downtime.) &lt;br /&gt;
&lt;br /&gt;
====Waiting for Opportunity/Maximum Wait Time Ratio====&lt;br /&gt;
A relative index showing the contribution of crew conflict times. This is the ratio of the time spent waiting for the crew to respond (not including crew logistic delays) to the total wait time (not including the active repair time). &lt;br /&gt;
&lt;br /&gt;
====Crew/Part Wait Ratio====&lt;br /&gt;
The ratio of the crew and part delays. A value of 100% means that both waits are equal. A value greater than 100% indicates that the crew delay was in excess of the part delay. For example, a value of 200% would indicate that the wait for the crew is two times greater than the wait for the part.&lt;br /&gt;
&lt;br /&gt;
====Part/Crew Wait Ratio====&lt;br /&gt;
The ratio of the part and crew delays. A value of 100% means that both waits are equal. A value greater than 100% indicates that the part delay was in excess of the crew delay. For example, a value of 200% would indicate that the wait for the part is two times greater than the wait for the crew.&lt;br /&gt;
&lt;br /&gt;
===Downtime Summary===&lt;br /&gt;
====Non-Waiting Time====&lt;br /&gt;
Time that the block was undergoing active maintenance/inspection by a crew. If no crew is defined, then this will return zero.&lt;br /&gt;
&lt;br /&gt;
====Waiting for Opportunity====&lt;br /&gt;
The total downtime for the block due to crew conflicts (i.e., time spent waiting for a crew while the crew is busy with another task). If no crew is defined, then this will return zero. &lt;br /&gt;
&lt;br /&gt;
====Waiting for Crew====&lt;br /&gt;
The total downtime for the block due to crew wait times (i.e., time spent waiting for a crew due to logistical delay). If no crew is defined, then this will return zero. &lt;br /&gt;
&lt;br /&gt;
====Waiting for Parts====&lt;br /&gt;
The total downtime for the block due to spare part wait times. If no spare part pool is defined then this will return zero. &lt;br /&gt;
&lt;br /&gt;
====Other Results of Interest====&lt;br /&gt;
The remaining component (block) results are similar to those defined for the system with the exception that now they apply only to the component.&lt;br /&gt;
&lt;br /&gt;
=Imperfect Repairs= &amp;lt;!-- THIS SECTION HEADER IS LINKED TO: http://help.synthesis8.com/rcm8/tasks.htm. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
{{:Imperfect Repairs}}&lt;br /&gt;
&lt;br /&gt;
=Using Resources: Pools and Crews=&lt;br /&gt;
In order to make the analysis more realistic, one may wish to consider additional sources of delay times in the analysis or study the effect of limited resources.  In the prior examples, we used a repair distribution to identify how long it takes to restore a component.  The factors that one chooses to consider in this time may include the time it takes to do the repair and/or the time it takes to get a crew, a spare part, etc.  While all of these factors may be included in the repair duration, optimized usage of these resources can only be achieved if the resources are studied individually and their dependencies are identified.&lt;br /&gt;
&lt;br /&gt;
As an example, consider the situation where two components in parallel fail at the same time and only a single repair person is available.  Because this person would not be able to execute the repair on both components simultaneously, an additional delay will be encountered that also needs to be included in the modeling.  One way to accomplish this is to assign a specific repair crew to each component.&lt;br /&gt;
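&lt;br /&gt;
The extra delay from a shared single-task crew can be sketched as a simple first-come, first-served queue.  This is an illustrative model of the effect only (crew selection in BlockSim is configured through crew assignments, not this code); the function and its arguments are hypothetical.&lt;br /&gt;

```python
def crew_wait_times(failure_times, repair_time):
    """Per-failure waiting times when one single-task crew serves
    failures in arrival order (first come, first served)."""
    crew_free_at, waits = 0.0, []
    for t in sorted(failure_times):
        start = max(t, crew_free_at)   # wait if the crew is still busy
        waits.append(start - t)
        crew_free_at = start + repair_time
    return waits

# Two parallel components fail at the same time; one crew, 10 h repairs:
# the second repair cannot start until the first is done.
waits = crew_wait_times([100.0, 100.0], 10.0)
```

Here the first repair starts immediately while the second waits 10 hours for the crew, which is exactly the additional downtime that assigning a shared crew makes visible in the simulation.&lt;br /&gt;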
&lt;br /&gt;
===Including Crews===&lt;br /&gt;
&lt;br /&gt;
BlockSim allows you to assign one or more maintenance crews to each component from the Maintenance Task Properties window.  Note that a different crew may be used for each type of action (i.e., corrective, preventive, on condition and inspection).&lt;br /&gt;
&lt;br /&gt;
A crew record needs to be defined for each named crew, as shown in the picture below. The basic properties for each crew include factors such as:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
* Logistic delays.  How long does it take for the crew to arrive?&lt;br /&gt;
* Is there a limit to the number of tasks this crew can perform at the same time? If yes, how many simultaneous tasks can the crew perform?&lt;br /&gt;
* What is the cost per hour for the crew?&lt;br /&gt;
* What is the cost per incident for the crew?&lt;br /&gt;
&lt;br /&gt;
[[Image:8.16.png|center|518px|link=]]&lt;br /&gt;
&lt;br /&gt;
===Illustrating Crew Use===&lt;br /&gt;
To illustrate the use of crews in BlockSim, consider the deterministic scenario described by the following RBD and properties.&lt;br /&gt;
&lt;br /&gt;
[[Image:r12.png|center|350px|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Unit&lt;br /&gt;
! Failure&lt;br /&gt;
! Repair&lt;br /&gt;
! Crew&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;100\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;10\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; : Delay = 20, Single Task&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;120\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;20\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; : Delay = 20, Single Task&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;140\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;20\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; : Delay = 20, Single Task&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;160\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;10\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; : Delay = 20, Single Task&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:BS8.17.png|center|600px|link=]]&lt;br /&gt;
&lt;br /&gt;
As shown in the figure above, the System Up/Down plot illustrates the sequence of events, which are:&lt;br /&gt;
&lt;br /&gt;
::#At 100, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; fails.  It takes 20 to get the crew and 10 to repair, thus the component is repaired by 130.  The system is failed/down during this time.  &lt;br /&gt;
::#At 150, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; fails since it would have accumulated an operating age of 120 by this time.  It again has to wait for the crew and is repaired by 190.  &lt;br /&gt;
::#At 170, &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; fails.  Upon this failure, &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; requests the only available crew.  However, this crew is currently engaged by &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and, since the crew can only perform one task at a time, it cannot respond immediately to the request by &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;.  Thus, &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; will remain failed until the crew becomes available.  The crew will finish with unit &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; at 190 and will then be dispatched to &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;.  Upon dispatch, the logistic delay will again be considered and &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; will be repaired by 230.  The system continues to operate until the failures of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; overlap (i.e., the system is down from 170 to 190).&lt;br /&gt;
::#At 210, &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; fails.  It must wait for the crew, which is busy with &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; until 230, and then for the logistic delay and the repair.&lt;br /&gt;
::#&amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; is up at 260.&lt;br /&gt;
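The single-crew dispatch logic described above can be sketched in a few lines of Python (a simplified illustration, not BlockSim&#039;s actual engine; the failure clock times and durations are taken from the walkthrough above):&lt;br /&gt;

```python
# Deterministic single-crew dispatch sketch: each repair must wait for the
# crew to finish its previous job, then incurs the logistic delay and the
# repair duration (crew delay of 20, as in the example).
CREW_DELAY = 20

def simulate(failure_times, repair_durations):
    """Return the clock time at which each unit is restored."""
    crew_free = 0                      # when the single crew becomes available
    restored = []
    for fail_t, repair_d in zip(failure_times, repair_durations):
        start = max(fail_t, crew_free)           # wait if the crew is busy
        done = start + CREW_DELAY + repair_d     # logistic delay + repair
        crew_free = done
        restored.append(done)
    return restored

# Failure clock times and repair durations for units A, B, C, D:
print(simulate([100, 150, 170, 210], [10, 20, 20, 10]))  # [130, 190, 230, 260]
```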
The following figure shows an example of some of the possible crew results (details), which are presented next.  &lt;br /&gt;
&lt;br /&gt;
[[Image:BS8.18.png|thumb|center|500px|Crew results shown in the BlockSim&#039;s Simulation Results Explorer.|link=]]&lt;br /&gt;
&lt;br /&gt;
====Explanation of the Crew Details====&lt;br /&gt;
::#Each request made to a crew is logged.  &lt;br /&gt;
::#If a request is successful (i.e., the crew is available), the call is logged once in the Calls Received counter and once in the Accepted Calls counter.  &lt;br /&gt;
::#If a request is not accepted (i.e., the crew is busy), the call is logged once in the Calls Received counter and once in the Rejected Calls counter.  When the crew is free and can be called upon again, the call is logged once in the Calls Received counter and once in the Accepted Calls counter.&lt;br /&gt;
::#In this scenario, there were two instances when the crew was not available (Rejected Calls = 2) and four instances when the crew performed an action (Accepted Calls = 4), for a total of six calls (Calls Received = 6).&lt;br /&gt;
::#Percent Accepted and Percent Rejected are the ratios of calls accepted and calls rejected with respect to the total calls received.&lt;br /&gt;
::#Total Utilization is the total time that the crew was used.  It includes both the time required to complete the repair action and the logistic time.  In this case, this is 140, or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {{T}_{{{R}_{A}}}}= &amp;amp; 10,{{T}_{{{L}_{A}}}}=20 \\ &lt;br /&gt;
	  {{T}_{{{R}_{B}}}}= &amp;amp; 20,{{T}_{{{L}_{B}}}}=20 \\ &lt;br /&gt;
	  {{T}_{{{R}_{C}}}}= &amp;amp; 20,{{T}_{{{L}_{C}}}}=20 \\ &lt;br /&gt;
	  {{T}_{{{R}_{D}}}}= &amp;amp; 10,{{T}_{{{L}_{D}}}}=20 \\ &lt;br /&gt;
	  {{T}_{U}}= &amp;amp; \left( {{T}_{{{R}_{A}}}}+{{T}_{{{L}_{A}}}} \right)+\left( {{T}_{{{R}_{B}}}}+{{T}_{{{L}_{B}}}} \right) \\ &lt;br /&gt;
	   &amp;amp; +\left( {{T}_{{{R}_{C}}}}+{{T}_{{{L}_{C}}}} \right)+\left( {{T}_{{{R}_{D}}}}+{{T}_{{{L}_{D}}}} \right) \\ &lt;br /&gt;
	  {{T}_{U}}= &amp;amp; 140  &lt;br /&gt;
	\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:::7.  Average Call Duration is the average duration of each crew usage, and it also includes both logistic and repair time.  It is the total usage divided by the number of accepted calls.  In this case, this is 35.&lt;br /&gt;
:::8.  Total Wait Time is the time that blocks in need of a repair waited for this crew.  In this case, it is 40 ( &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; each waited 20).  &lt;br /&gt;
:::9.  Total Crew Costs are the total costs for this crew, including the per incident charge as well as the per unit time costs.  In this case, this is 180: four incidents at 10 each for a total of 40, plus 140 time units of usage at 1 cost unit per time unit.&lt;br /&gt;
:::10.  Average Cost per Call is the total cost divided by the number of accepted calls.  In this case, this is 45.&lt;br /&gt;
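These crew metrics can be reproduced with a short calculation (a sketch using the numbers from this example; the cost values assume 10 per incident and 1 per time unit, as stated above):&lt;br /&gt;

```python
# Crew metric sketch using the values from the example above:
# usage per accepted call = logistic delay + repair duration.
logistic_times = [20, 20, 20, 20]      # delay per call for units A, B, C, D
repair_times   = [10, 20, 20, 10]
wait_times     = [0, 0, 20, 20]        # C and D each waited 20 for the crew

accepted_calls    = len(repair_times)                                   # 4
total_utilization = sum(l + r for l, r in zip(logistic_times, repair_times))
avg_call_duration = total_utilization / accepted_calls
total_wait_time   = sum(wait_times)

cost_per_incident, cost_per_time_unit = 10, 1
total_crew_cost   = (accepted_calls * cost_per_incident
                     + total_utilization * cost_per_time_unit)
avg_cost_per_call = total_crew_cost / accepted_calls

print(total_utilization, avg_call_duration, total_wait_time,
      total_crew_cost, avg_cost_per_call)   # 140 35.0 40 180 45.0
```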
&lt;br /&gt;
Note that crew costs that are attributed to individual blocks can be obtained from the Blocks reports, as shown in the figure below. &lt;br /&gt;
&lt;br /&gt;
[[Image:BS8.19.png|thumb|center|650px|Allocation of crew costs.|link=]]&lt;br /&gt;
&lt;br /&gt;
====How BlockSim Handles Crews====&lt;br /&gt;
::#Crew logistic time is added to each repair time.  &lt;br /&gt;
::#The logistic time is always present, and the same, regardless of where the crew was called from (i.e., whether the crew was at another job or idle at the time of the request).&lt;br /&gt;
::#For any given simulation, each crew&#039;s logistic time is constant (taken from the distribution) across that single simulation run regardless of the task  (CM, PM or inspection).&lt;br /&gt;
::#A crew can perform either a finite number of simultaneous tasks or an infinite number.  &lt;br /&gt;
::#If the finite limit of tasks is reached, the crew will not respond to any additional request until the number of tasks the crew is performing is less than its finite limit.&lt;br /&gt;
::#If a crew is not available to respond, the component will &amp;quot;wait&amp;quot; until a crew becomes available.&lt;br /&gt;
::#BlockSim maintains the queue of rejected calls and will dispatch the crew to the next repair on a &amp;quot;first come, first served&amp;quot; basis.&lt;br /&gt;
::#Multiple crews can be assigned to a single block (see overview in the next section).&lt;br /&gt;
::#If no crew has been assigned for a block, it is assumed that no crew restrictions exist and a default crew is used.  The default crew can perform an infinite number of simultaneous tasks and has no delays or costs.&lt;br /&gt;
&lt;br /&gt;
====Looking at Multiple Crews====&lt;br /&gt;
Multiple crews may be available to perform maintenance for a particular component.  When multiple crews have been assigned to a block in BlockSim, the crews are assigned to perform maintenance based on their order in the crew list, as shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:r23.png||thumb|center|500px|A single component with two corrective maintenance crews assigned to it.|link=]]&lt;br /&gt;
&lt;br /&gt;
When more than one crew is assigned to a block and the first crew is unavailable, the next crew in the list is called upon, and so forth.  As an example, consider the prior case but with the following modifications (i.e., Crews &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; are assigned to all blocks):&lt;br /&gt;
&lt;br /&gt;
[[Image:r8.png|center|400px|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Unit&lt;br /&gt;
! Failure&lt;br /&gt;
! Repair&lt;br /&gt;
! Crew&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;100\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;10\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;A,B\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;120\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
| &amp;lt;math&amp;gt;20\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;A,B\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;140\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;20\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;A,B\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;160\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;10\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;A,B\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; : Delay = 20, Single Task&lt;br /&gt;
|-&lt;br /&gt;
| Crew &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; : Delay = 30, Single Task&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The system would behave as shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:r13.png|center|550px|link=]]&lt;br /&gt;
 &lt;br /&gt;
In this case, Crew &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; was used for the &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; repair since Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was busy.  On all others, Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was used.  It is very important to note that once a crew has been assigned to a task it will complete the task.  For example, if we were to change the delay time for Crew &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; to 100, the system behavior would be as shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:r14.png|center|550px|System up/down plot with the delay time for Crew B changed to 100.|link=]]&lt;br /&gt;
&lt;br /&gt;
In other words, even though Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; would have finished the repair on &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; more quickly if it had been available when originally called, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; was assigned the task because &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was not available at the instant that the crew was needed.&lt;br /&gt;
&lt;br /&gt;
===Additional Rules on Crews===&lt;br /&gt;
&lt;br /&gt;
::1.  If all assigned crews are engaged, the next crew that will be chosen is the crew that can get there first.  &lt;br /&gt;
:::a)	This accounts for the time it would take a particular crew to complete its current task (or all tasks in its queue) and its logistic time.&lt;br /&gt;
::2.  If a crew is available, it gets used regardless of what its logistic delay time is.  &lt;br /&gt;
:::a)	In other words, if a crew with a shorter logistic time is busy, but almost done, and another crew with a much longer logistic time is currently free, the free one will get assigned to the task.&lt;br /&gt;
::3.  For each simulation, each crew&#039;s logistic time is computed (taken randomly from its distribution or its fixed time) at the beginning of the simulation and remains constant across that one simulation for all actions (CM, PM and inspection).&lt;br /&gt;
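Rules 1 and 2 above can be sketched as a crew-selection function (an illustrative sketch, not BlockSim&#039;s implementation; the `pick_crew` helper and its dictionary fields are hypothetical names):&lt;br /&gt;

```python
# Crew-selection sketch for a block with an ordered crew list:
# a free crew is used in list order regardless of its logistic delay;
# if all crews are busy, the crew that can arrive first is chosen
# (time to finish its current work plus its logistic delay).
def pick_crew(crews, now):
    """crews: ordered list of dicts with 'name', 'free_at' and 'delay'."""
    for crew in crews:
        if crew["free_at"] <= now:       # rule 2: any free crew is used
            return crew
    # rule 1: all busy -> earliest possible arrival wins
    return min(crews, key=lambda c: max(c["free_at"], now) + c["delay"])

crew_a = {"name": "A", "free_at": 190, "delay": 20}   # busy until 190
crew_b = {"name": "B", "free_at": 0,   "delay": 30}   # free, longer delay
print(pick_crew([crew_a, crew_b], 170)["name"])       # B
```

This mirrors the earlier multi-crew example: at 170, Crew A is busy, so the free Crew B is dispatched despite its longer logistic delay.&lt;br /&gt;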
&lt;br /&gt;
===Using Spare Part Pools===&lt;br /&gt;
&lt;br /&gt;
BlockSim also allows you to specify spare part pools (or depots). Spare part pools allow you to model and manage spare part inventory and study the effects associated with limited inventories.  Each component can have a spare part pool associated with it. If a spare part pool has not been defined for a block, BlockSim&#039;s analysis assumes a default pool of infinite spare parts. To speed up the simulation, no details on pool actions are kept during the simulation if the default pool is used.&lt;br /&gt;
&lt;br /&gt;
Pools allow you to define multiple aspects of the spare part process, including stock levels, logistic delays and restock options. Every time a part is repaired under a CM or scheduled action (PM, OC or inspection), a spare part is requested from the pool. If a part is available in the pool, it is then used for the repair.  Spare part pools perform their actions based on the simulation clock time.  &lt;br /&gt;
  &lt;br /&gt;
====Spare Properties====&lt;br /&gt;
&lt;br /&gt;
A spare part pool is identified by a name.  The general properties of the pool are its stock level (must be greater than zero), cost properties and logistic delay time.  If a part is available (in stock), the pool will dispense that part to the requesting block after the specified logistic time has elapsed.  One needs to think of a pool as an independent entity.  It accepts requests for parts from blocks and dispenses them to the requesting blocks after a given logistic time.  Requests for spares are handled on a first come, first served basis.  In other words, if two blocks request a part and only one part is in stock, the first block that made the request will receive the part.  Blocks request parts from the pool immediately upon the initiation of a CM or scheduled event (PM, OC and Inspection).&lt;br /&gt;
&lt;br /&gt;
====Restocking the Pool====&lt;br /&gt;
&lt;br /&gt;
If the pool has a finite number of spares, restock actions may be incorporated. The figure below shows the restock properties. Specifically, a pool can restock itself either through a scheduled restock action or based on specified conditions.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS8.24.png|center|500px|link=]]&lt;br /&gt;
&lt;br /&gt;
A scheduled restock action adds a set number of parts to the pool at predefined part arrival times. For the settings in the figure above, one spare part would be added to the pool every 100 hours, based on the system (simulation) time. In other words, for a simulation of 1,000 hours, a spare part would arrive at 100 hours, 200 hours, etc.  The part is available to the pool immediately after the restock action and without any logistic delays.  &lt;br /&gt;
&lt;br /&gt;
In an on-condition restock, a restock action is initiated when the stock level reaches (or is below) a specified value.  In the figure above, five parts are ordered when the stock level reaches 0.  Note that unlike the scheduled restock, parts added through on-condition restock become available after a specified logistic delay time.  In other words, in a scheduled restock the parts are pre-ordered and arrive when needed, whereas in an on-condition restock the parts are ordered when the condition occurs and thus arrive after a specified time.  For on-condition restocks, the condition is triggered if and only if the stock level drops to or below the specified stock level, regardless of how the spares arrived to the pool or were distributed by the pool.  In addition, the restock trigger value must be less than the initial stock.&lt;br /&gt;
&lt;br /&gt;
Lastly, a maximum capacity can be assigned to the pool.  This maximum capacity must be equal to or greater than the initial stock.  Once the limit is reached, no more restock actions are performed and no additional items are added to the pool.  For example, if the pool has a maximum capacity of ten and a current stock level of eight, and a restock action is set to add five items to the pool, then only two will be accepted.&lt;br /&gt;
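The restock trigger and capacity cap can be sketched as follows (a minimal illustration; the `SparePool` class and its fields are hypothetical names, not BlockSim&#039;s API):&lt;br /&gt;

```python
# Minimal spare pool sketch: on-condition restock trigger and the
# maximum capacity cap on arriving parts.
class SparePool:
    def __init__(self, stock, max_capacity, restock_level):
        assert max_capacity >= stock        # capacity must cover initial stock
        assert restock_level < stock        # trigger must be below initial stock
        self.stock = stock
        self.max_capacity = max_capacity
        self.restock_level = restock_level
        self.pending_orders = 0             # on-condition orders placed

    def dispense(self):
        if self.stock > 0:
            self.stock -= 1
        if self.stock <= self.restock_level:    # condition reached -> order
            self.pending_orders += 1

    def restock(self, qty):
        # the maximum capacity caps how many arriving items are accepted
        accepted = min(qty, self.max_capacity - self.stock)
        self.stock += accepted
        return accepted

pool = SparePool(stock=8, max_capacity=10, restock_level=0)
print(pool.restock(5))    # only 2 of the 5 arriving items are accepted
```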
&lt;br /&gt;
====Obtaining Emergency Spares====&lt;br /&gt;
&lt;br /&gt;
Emergency restock actions can also be defined.  The figure below illustrates BlockSim&#039;s Emergency Spare Provisions options.  An emergency action is triggered only when a block requests a spare and the part is not currently in stock.  This is the only trigger condition.  It does not account for whether a part has been ordered or if one is scheduled to arrive.  Emergency spares are ordered when the condition is triggered and arrive after a time equal to the required time to obtain emergency spare(s).&lt;br /&gt;
&lt;br /&gt;
[[Image:BS8.25.png|center|500px|link=]]&lt;br /&gt;
&lt;br /&gt;
===Summary of Rules for Spare Part Pools===&lt;br /&gt;
&lt;br /&gt;
The following rules summarize some of the logic when dealing with spare part pools.  &lt;br /&gt;
&lt;br /&gt;
====Basic Logic Rules====&lt;br /&gt;
&lt;br /&gt;
::1.  &#039;&#039;&#039;Queue Based&#039;&#039;&#039;: Requests for spare parts from blocks are queued and executed on a &amp;quot;first come, first served&amp;quot; basis.&lt;br /&gt;
::2.  &#039;&#039;&#039;Emergency&#039;&#039;&#039;: Emergency restock actions are performed only when a part is not available.&lt;br /&gt;
::3.  &#039;&#039;&#039;Scheduled Restocks&#039;&#039;&#039;: Scheduled restocks are added instantaneously to the pool at the scheduled time.&lt;br /&gt;
::4.  &#039;&#039;&#039;On-Condition Restock&#039;&#039;&#039;: On-condition restock happens when the specified condition is reached (e.g., when the stock drops to two or if a request is received for a part and the stock is below the restock level).&lt;br /&gt;
:::a)	For example, if a pool has three items in stock and it dispenses one, an on-condition restock is initiated the instant that the request is received (without regard to the logistic delay time).  The restocked items will be available after the required time for stock arrival has elapsed.&lt;br /&gt;
:::b)	The way that this is defined allows for the possibility of multiple restocks.  Specifically, every time a part needs to be dispensed and the stock is lower than the specified quantity, parts are ordered.  In the case of a long logistic delay time, it is possible to have multiple re-orders in the queue.&lt;br /&gt;
::5.  &#039;&#039;&#039;Parts Become Available after Spare Acquisition Logistic Delay&#039;&#039;&#039;:  If there is a spare acquisition logistic time delay,  the requesting block will get the part after that delay.  &lt;br /&gt;
:::a)	For example, if a block with a repair duration of 10 fails at 100 and requests a part from a pool with a logistic delay time of 10, that block will not be up until 120.&lt;br /&gt;
::6.  &#039;&#039;&#039;Compound Delays&#039;&#039;&#039;: If a part is not available and an emergency part (or another part) can be obtained, then the total wait time for the part is the sum of both the logistic time and the required time to obtain a spare.&lt;br /&gt;
::7.  &#039;&#039;&#039;First Available Part is Dispensed to the First Block in the Queue&#039;&#039;&#039;: The pool will dispense a requested part if it has one in stock or when it becomes available, regardless of what action (i.e., as needed restock or emergency restock) that request may have initiated.  &lt;br /&gt;
:::a)	For example, if Block A requests a part from a pool and that triggers an emergency restock action, but a part arrives before the emergency restock through another action (e.g., scheduled restock), then the pool will dispense the newly arrived part to Block A (if Block A is next in the queue to receive a part).&lt;br /&gt;
::8.  &#039;&#039;&#039;Blocks that Trigger an Action Get Charged with the Action&#039;&#039;&#039;: A block that triggers an emergency restock is charged for the additional cost to obtain the emergency part, even if it does not use an emergency part (i.e., even if another part becomes available first).&lt;br /&gt;
::9.	&#039;&#039;&#039;Triggered Action Cannot be Canceled.&#039;&#039;&#039;  If a block triggers a restock action but then receives a part from another source, the action that the block triggered is not canceled.&lt;br /&gt;
:::a)	For example, if Block A initiates an emergency restock action but was then able to use a part that became available through other actions, the emergency request is not canceled and an emergency spare part will be added to the pool&#039;s stock level.  &lt;br /&gt;
:::b)	Another way to explain this is by looking at the part acquisition logistic times as transit times.  Because an ordered part is en-route to you after you order it, you will receive it regardless of whether the conditions have changed and you no longer need it.&lt;br /&gt;
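Rules 1, 7 and 8 above can be sketched together (an illustrative sketch; the `request` and `part_arrives` helpers and the emergency cost value are hypothetical):&lt;br /&gt;

```python
from collections import deque

# Sketch of rules 1, 7 and 8: requests are queued first come, first served;
# the first part to arrive is dispensed to the head of the queue; the block
# that triggered an emergency order is charged for it even if it ends up
# using a part that arrived through another action.
queue = deque()
EMERGENCY_COST = 50        # hypothetical per-order emergency charge
charges = {}

def request(block, stock_empty):
    queue.append(block)                     # rule 1: FIFO queue
    if stock_empty:                         # rule 8: trigger -> charge
        charges[block] = charges.get(block, 0) + EMERGENCY_COST

def part_arrives():
    return queue.popleft() if queue else None   # rule 7: head of queue

request("A", stock_empty=True)     # A triggers an emergency order
request("B", stock_empty=False)
print(part_arrives())   # A -- even if this part came from a scheduled restock
print(charges["A"])     # 50 -- A is still charged for the emergency order
```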
&lt;br /&gt;
===Simultaneous Dispatch of Crews and Parts Logic===&lt;br /&gt;
&lt;br /&gt;
Some special rules apply when a block experiences logistic delays both in acquiring parts from a pool and in waiting for crews.  BlockSim dispatches requests for crews and spare parts simultaneously.  The repair action does not start until both the crew and the part arrive, as shown next.&lt;br /&gt;
&lt;br /&gt;
[[Image:r18.png|center|400px|link=]]&lt;br /&gt;
 &lt;br /&gt;
If a crew arrives and it has to wait for a part, then this time (and cost) is added to the crew usage time.&lt;br /&gt;
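This simultaneous-dispatch behavior can be sketched as follows (a simplified illustration with hypothetical delay values; the `repair_completion` helper is not a BlockSim function):&lt;br /&gt;

```python
# Simultaneous dispatch sketch: the crew and the spare part are requested
# at the moment of failure; the repair starts only when both have arrived,
# and the crew's time spent waiting for the part counts as crew usage.
def repair_completion(fail_t, crew_delay, part_delay, repair_d):
    crew_arrives = fail_t + crew_delay
    part_arrives = fail_t + part_delay
    start = max(crew_arrives, part_arrives)      # need both crew and part
    crew_usage = crew_delay + (start - crew_arrives) + repair_d
    return start + repair_d, crew_usage

# Hypothetical values: crew delay 10, part logistic delay 25, repair 10.
done, usage = repair_completion(fail_t=100, crew_delay=10, part_delay=25,
                                repair_d=10)
print(done, usage)   # 135 35  (crew waits 15 for the part; that time is billed)
```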
&lt;br /&gt;
===Example Using Both Crews and Pools===&lt;br /&gt;
&lt;br /&gt;
Consider the following example, using both crews and pools.&lt;br /&gt;
&lt;br /&gt;
[[Image:r19.png|center|300px|link=]]&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
[[Image:r20.png|center|400px|link=]]&lt;br /&gt;
&lt;br /&gt;
And the crews are:&lt;br /&gt;
&lt;br /&gt;
[[Image:r21.png|center|400px|link=]]&lt;br /&gt;
&lt;br /&gt;
While the spare pool is: &lt;br /&gt;
&lt;br /&gt;
[[Image:r22.png|center|500px|link=]]&lt;br /&gt;
&lt;br /&gt;
The behavior of this system from 0 to 300 is shown graphically in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.26.png|center|600px|link=]]&lt;br /&gt;
&lt;br /&gt;
The discrete system events during that time are as follows:&lt;br /&gt;
&lt;br /&gt;
::1.	Component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;  fails at 100 and Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is engaged.  &lt;br /&gt;
&lt;br /&gt;
:::a)	At 110, Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; arrives and completes the repair by 120.  &lt;br /&gt;
:::b)	This repair uses the only spare part in inventory and triggers an on-condition restock.  A part is ordered and is scheduled to arrive at 160.&lt;br /&gt;
:::c)	A scheduled restock part is also set to arrive at 150.&lt;br /&gt;
:::d)	Pool [on-hand = 0, pending: 150, 160].&lt;br /&gt;
::2.	Component &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; fails at 121.  Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is available and it is engaged.  &lt;br /&gt;
:::a)	Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; arrives by 131 but no part is available.  &lt;br /&gt;
:::b)	The failure finds the pool with no parts, triggering the on-condition restock.  A part was ordered and is scheduled to arrive at 181.&lt;br /&gt;
:::c)	Pool [on-hand = 0, pending: 150, 160, 181].&lt;br /&gt;
:::d)	At 150, the first part arrives and is used by Component &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:::e)	Repair on Component &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is completed 20 time units later, at 170.&lt;br /&gt;
:::f)	Pool [on-hand=0, pending: 160, 181].&lt;br /&gt;
::3.	Component &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; fails at 122.  Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is already engaged by Component &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, thus Crew &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is engaged.  &lt;br /&gt;
:::a)	Crew &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; arrives at 137 but no part is available.&lt;br /&gt;
:::b)	The failure finds the pool with no parts, triggering the on-condition restock.  A part is ordered and is scheduled to arrive at 182.&lt;br /&gt;
:::c)	Pool [on-hand = 0, pending: 160, 181,182].&lt;br /&gt;
:::d)	At 160, the part arrives and Component &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is repaired by 180.  &lt;br /&gt;
:::e)	Pool [on-hand = 0, pending: 181,182].&lt;br /&gt;
::4.	Component &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; fails at 123.  No crews are available until 170 when Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; becomes available.&lt;br /&gt;
:::a)	Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; arrives by 180 and has to wait for a part.&lt;br /&gt;
:::b)	The failure found the pool with no parts, triggering the on-condition restock.  A part is ordered and is scheduled to arrive at 183.&lt;br /&gt;
:::c)	Pool [on-hand = 0, pending: 181,182, 183].&lt;br /&gt;
:::d)	At 181, a part is obtained.&lt;br /&gt;
:::e)	By 201, the repair is completed.&lt;br /&gt;
:::f)	Pool [on-hand = 0, pending: 182, 183]&lt;br /&gt;
::5.	Component &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; fails at 171 with no crew available.  &lt;br /&gt;
:::a)	Crew &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; becomes available at 180 and arrives by 195.  &lt;br /&gt;
:::b)	The failure finds the pool with no parts, triggering the on-condition restock.  A part is ordered and is scheduled to arrive at 231.&lt;br /&gt;
:::c)	The next part becomes available at 182 and the repair is completed by 205.&lt;br /&gt;
:::d)	Pool [on-hand = 0, pending: 183, 231]&lt;br /&gt;
::6.	End time is at 300.  The last scheduled part arrives at the pool at 300.&lt;br /&gt;
&lt;br /&gt;
=Using Maintenance Tasks=&lt;br /&gt;
One of the most important benefits of simulation is the ability to define how and when actions are performed.  In our case, the actions of interest are part repairs/replacements.  This is accomplished in BlockSim through the use of maintenance tasks.  Specifically, four different types of tasks can be defined for maintenance actions: corrective maintenance, preventive maintenance, on condition maintenance and inspection.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Corrective Maintenance Tasks===&lt;br /&gt;
A corrective maintenance task defines when a corrective maintenance (CM) action is performed. The figure below shows a corrective maintenance task assigned to a block in BlockSim. Corrective actions will be performed either immediately upon failure of the item or upon finding that the item has failed (for hidden failures that are not detected until an inspection).  BlockSim allows the selection of either category.  &lt;br /&gt;
*&#039;&#039;&#039;Upon item failure&#039;&#039;&#039;: The CM action is initiated immediately upon failure.  If the user doesn&#039;t specify the choice for a CM, then this is the default option.  All prior examples were based on the instruction to perform a CM upon failure.  &lt;br /&gt;
*&#039;&#039;&#039;When found failed during an Inspection&#039;&#039;&#039;: The CM action will only be initiated after an inspection is done on the failed component. How and when the inspections are performed is defined by the block&#039;s inspection properties. This has the effect of defining a dependency between the corrective maintenance task and the inspection task.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
[[Image:r23.png|center|500px|link=]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;noprint&amp;quot;&amp;gt;&lt;br /&gt;
{{Examples Box|BlockSim Examples|&amp;lt;p&amp;gt;More application examples are available! See also:&amp;lt;/p&amp;gt; {{Examples Link|BlockSim_Example:_CM_Triggered_by_Subsystem_Down|CM Triggered by Subsystem Down}}}}&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Scheduled Tasks===&lt;br /&gt;
Scheduled tasks can be performed on a known schedule, which can be based on any of the following:&lt;br /&gt;
* A time interval, either fixed or dynamic, based on the item&#039;s age (item clock) or on calendar time (system clock). See [[#Item and System Ages|Item and System Ages]].&lt;br /&gt;
* The occurrence of certain events, including:&lt;br /&gt;
**The system goes down. &lt;br /&gt;
**Certain events happen in a maintenance group. The events and groups are user-specified, and the item that the task is assigned to does not need to be part of the selected maintenance group(s).&lt;br /&gt;
&lt;br /&gt;
The types of scheduled tasks include:&lt;br /&gt;
*Inspection tasks&lt;br /&gt;
*Preventive maintenance tasks&lt;br /&gt;
*On condition tasks&lt;br /&gt;
&lt;br /&gt;
====Item and System Ages====&lt;br /&gt;
It is important to keep in mind that the system and each component of the system maintain separate clocks within the simulation. When setting intervals to perform a scheduled task, the intervals can be based on either type of clock. Specifically:&lt;br /&gt;
*Item age refers to the accumulated age of the block, which gets adjusted each time the block is repaired (i.e., restored). If the block is repaired at least once during the simulation, this will be different from the elapsed simulation time. For example, if the restoration factor is 1 (i.e., “as good as new”) and the assigned interval is 100 days based on item age, then the task will be scheduled to be performed for the first time at 100 days of elapsed simulation time. However, if the block fails at 85 days and it takes 5 days to complete the repair, then the block will be fully restored at 90 days and its accumulated age will be reset to 0 at that point. Therefore, if another failure does not occur in the meantime, the task will be performed for the first time 100 days later at 190 days of elapsed simulation time.&lt;br /&gt;
&lt;br /&gt;
[[Image:Updown_item_age.png|center|450px|link=]]&lt;br /&gt;
&lt;br /&gt;
*Calendar time refers to the elapsed simulation time. If the assigned interval is 100 days based on calendar time, then the task will be performed for the first time at 100 days of elapsed simulation time, for the second time at 200 days of elapsed simulation time and so on, regardless of whether the block fails and gets repaired correctively between those times.&lt;br /&gt;
&lt;br /&gt;
[[Image:Updown_system_age.png|center|450px|link=]]&lt;br /&gt;
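The two clocks can be contrasted with a small sketch (assuming a restoration factor of 1 and the 100-day example above; `first_task_time` is a hypothetical helper):&lt;br /&gt;

```python
def first_task_time(interval, restored_at=None):
    """When the task interval is based on calendar time, the first task falls
    at `interval` of elapsed simulation time.  When it is based on item age
    (restoration factor 1 assumed, so a repair resets age to zero), the
    interval restarts at the time the item was last restored."""
    if restored_at is None:
        return interval            # calendar clock: elapsed simulation time
    return restored_at + interval  # item clock: measured from restoration

# Interval of 100 days; a failure at day 85 takes 5 days to repair,
# restoring the item (age 0) at day 90.
print(first_task_time(100))                  # calendar-based: 100
print(first_task_time(100, restored_at=90))  # item-age based: 190
```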
&lt;br /&gt;
====Inspection Tasks====&lt;br /&gt;
Like all scheduled tasks, inspections can be performed based on a time interval or upon certain events.  Inspections can be specified either to bring the item or system down or to leave it running.&lt;br /&gt;
&lt;br /&gt;
====Preventive Maintenance Tasks====&lt;br /&gt;
The figure below shows the options available in a preventive maintenance (PM) task within BlockSim.  PMs can be performed based on a time interval or upon certain events.  Because PM tasks always bring the item down, you can also specify whether the preventive maintenance should still be performed if doing so would bring the system down.&lt;br /&gt;
&lt;br /&gt;
[[Image:r25.png|center|556px|link=]]&lt;br /&gt;
&lt;br /&gt;
====On Condition Tasks====&lt;br /&gt;
On condition maintenance relies on the capability to detect failures before they happen so that preventive maintenance can be initiated. If, during an inspection, maintenance personnel can find evidence that the equipment is approaching the end of its life, then it may be possible to delay the failure, prevent it from happening or replace the equipment at the earliest convenience rather than allowing the failure to occur and possibly cause severe consequences. In BlockSim, on condition tasks consist of an inspection task that triggers a preventive task when an impending failure is detected during inspection. &lt;br /&gt;
=====Failure Detection=====&lt;br /&gt;
Inspection tasks can be used to check for indications of an approaching failure.  BlockSim models when an approaching failure becomes detectable upon inspection using two properties: the failure detection threshold and the P-F interval.  The failure detection threshold allows the user to enter a number between 0 and 1 indicating the portion of an item&#039;s life that must elapse before an approaching failure can be detected.  For instance, if the failure detection threshold is set to 0.8, then the failure of a component can be detected only during the last 20% of its life.  If an inspection occurs during this time, the approaching failure is detected and the inspection triggers a preventive maintenance task to take the necessary precautions to delay the failure by either repairing or replacing the component.&lt;br /&gt;
&lt;br /&gt;
The P-F interval allows the user to enter the amount of time before the failure of a component during which the approaching failure can be detected by an inspection.  The P-F interval represents the warning period that spans from P (when a potential failure can be detected) to F (when the failure occurs).  If the P-F interval is set to 200 hours, then the approaching failure of the component can be detected only within the 200 hours before the failure occurs.  Thus, if a component has a fixed life of 1,000 hours and the P-F interval is set to 200 hours, then an inspection that occurs at or beyond 800 hours detects the approaching failure that is to occur at 1,000 hours and triggers a preventive maintenance task to take action against this failure.&lt;br /&gt;
&lt;br /&gt;
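Both criteria reduce to a check on the item&#039;s age. The helper below is hypothetical (it is not part of BlockSim&#039;s interface); it only illustrates the two rules described above for an item with a known life.&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical helper (not BlockSim's API) illustrating the two detection
# criteria: a failure detection threshold and a P-F interval.
def detects(age, life, threshold=None, pf_interval=None):
    """Return True if an approaching failure at `life` is detectable at `age`."""
    if threshold is not None and age >= threshold * life:
        return True
    if pf_interval is not None and age >= life - pf_interval:
        return True
    return False

# Threshold 0.8: detectable only in the last 20% of a 1,000-hour life.
assert not detects(790, 1000, threshold=0.8)
assert detects(800, 1000, threshold=0.8)

# P-F interval of 200 hours: detectable at or beyond age 800 for the same life.
assert not detects(750, 1000, pf_interval=200)
assert detects(850, 1000, pf_interval=200)
```
&lt;br /&gt;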
=====Rules for On Condition Tasks=====&lt;br /&gt;
&lt;br /&gt;
*An inspection that finds a block at or beyond the failure detection threshold or within the range of the P-F interval will trigger the associated preventive task as long as preventive maintenance can be performed on that block.&lt;br /&gt;
&lt;br /&gt;
*If a non-downing inspection triggers a preventive maintenance action because the failure detection threshold or P-F interval range was reached, no other maintenance task will be performed between the inspection and the triggered preventive task; tasks that would otherwise have happened at that time due to system age, system down or group maintenance will be ignored.&lt;br /&gt;
&lt;br /&gt;
*A preventive task that would have been triggered by a non-downing inspection will not happen if the block fails during the inspection, as corrective maintenance will take place instead.&lt;br /&gt;
&lt;br /&gt;
*If a failure will occur within the failure detection threshold or P-F interval set for the inspection, but the preventive task is only supposed to be performed when the system is down, the simulation waits until the requirements of the preventive task are met to perform the preventive maintenance.&lt;br /&gt;
&lt;br /&gt;
*If the on condition inspection triggers the preventive maintenance part of the task, the simulation assumes that the maintenance crew will forego any routine servicing associated with the inspection part of the task. In other words, the restoration will come from the preventive maintenance, so any restoration factor defined for the inspection will be ignored in these circumstances.&lt;br /&gt;
&lt;br /&gt;
=====Example Using P-F Interval=====&lt;br /&gt;
&lt;br /&gt;
To illustrate the use of the P-F interval in BlockSim, consider a component &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; that fails every 700 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;.  The corrective maintenance on this equipment takes 100 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; to complete, while the preventive maintenance takes 50 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; to complete.  Both the corrective and preventive maintenance actions have a type II restoration factor of 1.  Inspection tasks of 10 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; duration are performed on the component every 300 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;.  There is no restoration of the component during the inspections.  The P-F interval for this component is 100 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The component behavior from 0 to 2000 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; is shown in the figure below and described next.&lt;br /&gt;
&lt;br /&gt;
::#At 300 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; the first scheduled inspection of 10 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; duration occurs.  At this time the age of the component is 300 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;.  This inspection does not lie in the P-F interval of 100 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; (which begins at the age of 600 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; and ends at the age of 700 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;).  Thus, no approaching failure is detected during this inspection.&lt;br /&gt;
::#At 600 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; the second scheduled inspection of 10 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; duration occurs.  At this time the age of the component is 590 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; (no age is accumulated during the first inspection from 300 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; to 310 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; as the component does not operate during this inspection).  Again this inspection does not lie in the P-F interval.  Thus, no approaching failure is detected during this inspection.&lt;br /&gt;
::#At 720 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; the component fails after having accumulated an age of 700 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;.  A corrective maintenance task of 100 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; duration occurs to restore the component to as-good-as-new condition.&lt;br /&gt;
::#At 900 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; the third scheduled inspection occurs.  At this time the age of the component is 80 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;.  This inspection does not lie in the P-F interval (from age 600 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; to 700 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;).  Thus, no approaching failure is detected during this inspection.&lt;br /&gt;
::#At 1200 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; the fourth scheduled inspection occurs.  At this time the age of the component is 370 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;.  Again, this inspection does not lie in the P-F interval and no approaching failure is detected.&lt;br /&gt;
::#At 1500 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; the fifth scheduled inspection occurs.  At this time the age of the component is 660 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;, which lies in the P-F interval.  As a result, an approaching failure is detected and the inspection triggers a preventive maintenance task.  A preventive maintenance task of 50 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; duration occurs at 1510 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; to restore the component to as-good-as-new condition.&lt;br /&gt;
::#At 1800 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; the sixth scheduled inspection occurs.  At this time the age of the component is 240 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;.  This inspection does not lie in the P-F interval (from age 600 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; to 700 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;) and no approaching failure is detected.&lt;br /&gt;
&lt;br /&gt;
[[Image:BS8.32.png|center|600px|link=]]&lt;br /&gt;
&lt;br /&gt;
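This timeline can be reproduced with a small deterministic sketch (hypothetical Python, not BlockSim code; it assumes the component does not age during inspections or repairs and that inspections falling inside a repair are skipped).&lt;br /&gt;
&lt;br /&gt;
```python
# Deterministic sketch of the example above (illustrative only, not BlockSim code).
LIFE, PF = 700, 100            # fixed life and P-F interval, in tu
INSP_INT, INSP_DUR = 300, 10   # inspection interval (calendar time) and duration
CM_DUR, PM_DUR = 100, 50       # both actions fully restore the component (RF = 1)

t, age, next_insp, events = 0, 0, INSP_INT, []
while True:
    t_fail = t + (LIFE - age)                 # failure time if nothing intervenes
    if min(t_fail, next_insp) > 2000:
        break
    if t_fail <= next_insp:                   # failure occurs before the inspection
        t, age = t_fail, 0
        events.append((t_fail, "failure, CM begins"))
        t += CM_DUR                           # corrective maintenance
        while next_insp <= t:                 # inspections during downtime are skipped
            next_insp += INSP_INT
    else:                                     # inspection occurs first
        age += next_insp - t
        t = next_insp
        if age >= LIFE - PF:                  # age lies in the P-F window [600, 700]
            events.append((t, "inspection, PM triggered"))
            t += INSP_DUR + PM_DUR            # PM starts when the inspection ends
            age = 0
        else:
            events.append((t, "inspection, no detection"))
            t += INSP_DUR                     # no aging during the inspection itself
        next_insp += INSP_INT

# events: inspections at 300 and 600 (no detection), failure at 720,
# inspections at 900 and 1200 (no detection), PM triggered at 1500, inspection at 1800
```
&lt;br /&gt;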
====Rules for PMs and Inspections====&lt;br /&gt;
&lt;br /&gt;
All the options available in the Maintenance task window were designed to maximize the modeling flexibility within BlockSim.  However, this flexibility introduces issues that you need to be aware of and requires you to select options carefully in order to ensure that the selections do not contradict one another.  One obvious case would be to define a PM action on a component in series (which will always bring the system down) and then assign a PM policy to the block that has the &amp;quot;Do not perform maintenance if the action brings the system down&amp;quot; option set.  With these settings, no PMs will ever be performed on the component during the BlockSim simulation.  The following sections summarize some issues and special cases to consider when defining maintenance properties in BlockSim.&lt;br /&gt;
&lt;br /&gt;
::#Inspections do not consume spare parts.  However, an inspection can have a renewal effect on the component if the restoration factor is set to a number other than the default of 0.&lt;br /&gt;
::#On the inspection tab, if Inspection brings system down is selected, this also implies that the inspection brings the item down.&lt;br /&gt;
::#If a PM or an inspection is scheduled based on the item&#039;s age, then it will occur exactly when the item reaches that age.  However, it is important to note that failed items do not age.  Thus, if an item fails before it reaches that age, the action will not be performed.  This means that if the item fails before the scheduled inspection (based on item age) and the CM is set to be performed upon inspection, the CM will never take place.  The reason that this option is allowed in BlockSim is to provide the flexibility of specifying renewing inspections.&lt;br /&gt;
::#Downtime due to a failure discovered during a non-downing inspection is included when computing results &amp;quot;w/o PM, OC &amp;amp; Inspections.&amp;quot;&lt;br /&gt;
::#If a PM upon item age is scheduled and is not performed because it brings the system down (based on the option in the PM task) the PM will not happen unless the item reaches that age again (after restoration by CM, inspection or another type of PM).&lt;br /&gt;
::#If the CM task is upon inspection and a failed component is scheduled for PM prior to the inspection, the PM action will restore the component and the CM will not take place.&lt;br /&gt;
::#In the case of simultaneous events, only one event is executed (the exception is a maintenance phase, in which all simultaneous events are executed in order). The following precedence order is used: 1) tasks based on intervals or upon the start of a maintenance phase; 2) tasks based on events in a maintenance group, where the triggering event applies to a block; 3) tasks based on system down; 4) tasks based on events in a maintenance group, where the triggering event applies to a subdiagram. Within these categories, order is determined according to the priorities specified in the URD (i.e., the higher the task is on the list, the higher the priority).&lt;br /&gt;
::#The PM option of Do not perform if it brings the system down is only considered at the time that the PM needs to be initiated.  If the system is down at that time, due to another item, then the PM will be performed regardless of any future consequences to the system up state.  In other words, when the other item is fixed, it is possible that the system will remain down due to this PM action.  In this case, the remaining PM time is added to the system PM downtime.  &lt;br /&gt;
::#Downing events cannot overlap. If a component is down due to a PM and another PM is suggested based on another trigger, the second call is ignored.&lt;br /&gt;
::#A non-downing inspection with a restoration factor restores the block based on the age of the block at the beginning of the inspection (i.e., duration is not restored). &lt;br /&gt;
::#Non-downing events can overlap with downing events.  If a non-downing inspection and a downing event happen concurrently, the non-downing event will be managed in parallel with the downing event.&lt;br /&gt;
::#If a failure or PM occurs during a non-downing inspection and the CM or PM has a restoration factor and the inspection action has a restoration factor, then both restoration factors are used (compounded).&lt;br /&gt;
::#A PM or inspection on system down is triggered only if the system was up at the time that the event brought the system down.&lt;br /&gt;
::#A non-downing inspection with restoration factor of 0 does not affect the block.&lt;br /&gt;
&lt;br /&gt;
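The precedence rule for simultaneous events (item 7 above) can be sketched as a sort key; the category codes and task dictionaries below are assumptions made for illustration, not BlockSim identifiers.&lt;br /&gt;
&lt;br /&gt;
```python
# Rule 7's precedence order as a sort key (illustrative; names are assumed).
# Lower tuple sorts first; `urd_priority` is the task's position in the URD
# list (0 = highest priority, i.e., higher on the list).
PRECEDENCE = {
    "interval_or_phase_start": 0,   # intervals or upon start of a maintenance phase
    "group_event_block": 1,         # maintenance-group trigger applying to a block
    "system_down": 2,               # tasks based on system down
    "group_event_subdiagram": 3,    # maintenance-group trigger applying to a subdiagram
}

def execution_order(tasks):
    return sorted(tasks, key=lambda t: (PRECEDENCE[t["category"]], t["urd_priority"]))

tasks = [
    {"name": "PM-sysdown", "category": "system_down", "urd_priority": 0},
    {"name": "Inspection", "category": "interval_or_phase_start", "urd_priority": 1},
    {"name": "PM-interval", "category": "interval_or_phase_start", "urd_priority": 0},
]
assert [t["name"] for t in execution_order(tasks)] == ["PM-interval", "Inspection", "PM-sysdown"]
```
&lt;br /&gt;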
===Example===&lt;br /&gt;
&lt;br /&gt;
To illustrate the use of maintenance policies in BlockSim we will use the same example from [[Repairable_Systems_Analysis_Through_Simulation#Example_Using_Both_Crews_and_Pools|Example Using Both Crews and Pools]] with the following modifications (The figures below also show these settings): &lt;br /&gt;
&lt;br /&gt;
Blocks A and D: &lt;br /&gt;
#Belong to the same group (Group 1).&lt;br /&gt;
#Corrective maintenance actions are upon inspection (not upon failure) and the inspections are performed every 30 hours, based on system time. Inspections have a duration of 1 hour. Furthermore, unlimited free crews are available to perform the inspections.&lt;br /&gt;
#Whenever either item gets a CM, the other one gets a PM.&lt;br /&gt;
#The PM has a fixed duration of 10 hours.&lt;br /&gt;
#The same crews are used for both corrective and preventive maintenance actions.&lt;br /&gt;
&lt;br /&gt;
[[Image:r29.png|center|650px| CM and Inspection settings for blocks A and D | link= ]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:r29b.png|center|650px| CM and Inspection settings for blocks A and D | link= ]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:r30.png|center|650px| PM settings for blocks A and D | link= ]]&lt;br /&gt;
&lt;br /&gt;
====System Overview====&lt;br /&gt;
&lt;br /&gt;
The item and system behavior from 0 to 300 hours is shown in the figure below and described next. &lt;br /&gt;
&lt;br /&gt;
[[Image:BS8.35.png|center|600px|link=]]&lt;br /&gt;
&lt;br /&gt;
::1.  At 100, block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; goes down and brings the system down.  &lt;br /&gt;
:::a)	No maintenance action is performed since an upon inspection policy was used.&lt;br /&gt;
:::b)	The next scheduled inspection is at 120, thus Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is called to perform the maintenance at 121 (the end of the inspection).&lt;br /&gt;
::2.  Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; arrives and initiates the repair on &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at 131.&lt;br /&gt;
:::a)	The only part in the pool is used and an on-condition restock is triggered.&lt;br /&gt;
:::b)	Pool [on-hand = 0, pending: 150 &amp;lt;math&amp;gt;_{s}\,\!&amp;lt;/math&amp;gt;, 181].&lt;br /&gt;
:::c)	Block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is repaired by 141.&lt;br /&gt;
::3.	At the same time (121), a PM is initiated for block &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; because the PM task called for &amp;quot;PM upon the start of corrective maintenance on another group item.&amp;quot;&lt;br /&gt;
:::a)	Crew &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is called for block &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; and arrives at 136.&lt;br /&gt;
:::b)	No part is available until 150.  An on-condition restock is triggered for 181.&lt;br /&gt;
:::c)	Pool [on-hand = 0, pending: 150 &amp;lt;math&amp;gt;_{s}\,\!&amp;lt;/math&amp;gt;, 181, 181].&lt;br /&gt;
:::d)	At 150, a part becomes available and the PM is completed by 160.&lt;br /&gt;
:::e)	Pool [on-hand = 0, pending: 181, 181].&lt;br /&gt;
::4.	At 161, block &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; fails (corrective maintenance upon failure).&lt;br /&gt;
:::a)	Block &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; gets Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, which arrives at 171.&lt;br /&gt;
:::b)	No part is available until 181.  An on-condition restock is triggered for 221.&lt;br /&gt;
:::c)	Pool [on-hand = 0, pending: 181, 181, 221].&lt;br /&gt;
:::d)	A part arrives at 181.&lt;br /&gt;
:::e)	The repair is completed by 201.&lt;br /&gt;
:::f)	Pool [on-hand = 0, pending: 181, 221].&lt;br /&gt;
::5.	At 162, block &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;  fails.&lt;br /&gt;
:::a)	Block &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; gets Crew &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, which arrives at 177.&lt;br /&gt;
:::b)	No part is available until 181.  An on-condition restock is triggered for 222.&lt;br /&gt;
:::c)	Pool [on-hand = 0, pending: 181, 221, 222].&lt;br /&gt;
:::d)	A part arrives at 181.&lt;br /&gt;
:::e)	The repair is completed by 201.&lt;br /&gt;
:::f)	Pool [on-hand = 0, pending: 221, 222].  &lt;br /&gt;
::6.	At 163, block &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; fails and brings the system down.&lt;br /&gt;
:::a)	Block &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; calls Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;.  Both are busy.&lt;br /&gt;
:::b)	Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; will be the first available, so block &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; calls Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; again and waits.&lt;br /&gt;
:::c)	No part is available until 221.  An on-condition restock is triggered for 223.&lt;br /&gt;
:::d)	Pool [on-hand = 0, pending: 221, 222, 223].&lt;br /&gt;
:::e)	Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; arrives at 211.&lt;br /&gt;
:::f)	Repair begins at 221.&lt;br /&gt;
:::g)	Repair is completed by 241.&lt;br /&gt;
:::h)	Pool [on-hand = 0, pending: 222, 223].  &lt;br /&gt;
::7.	At 298, block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; goes down and brings the system down.&lt;br /&gt;
&lt;br /&gt;
====System Uptimes/Downtimes====&lt;br /&gt;
::1.  Uptime: This is 200 hours.  &lt;br /&gt;
:::a)	This can be obtained by observing the following system up durations: 0 to 100, 160 to 163 and 201 to 298.&lt;br /&gt;
::2.	CM Downtime: This is 58 hours.&lt;br /&gt;
:::a)	Observe that even though the system failed at 100, the CM action (on block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; ) was initiated at 121 and lasted until 141, thus only 20 hours of this downtime are attributed to the CM action.&lt;br /&gt;
:::b)	The next CM action started at 163 when block &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; failed and lasted until 201 when blocks &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; were restored, thus adding another 38 hours of CM downtime.&lt;br /&gt;
::3.	Inspection Downtime: This is 1 hour. &lt;br /&gt;
:::a)	The only time the system was under inspection was from 120 to 121, during the inspection of block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
::4.	PM Downtime: This is 19 hours.  &lt;br /&gt;
:::a)	Note that the entire PM action duration on block &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; was from 121 to 160.&lt;br /&gt;
:::b)	Until 141, and from the system perspective, the CM on block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was the cause for the downing.  Once block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was restored (at 141), then the reason for the system being down became the PM on block &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:::c)	Thus, the PM on block &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; was only responsible for the downtime after block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was restored, or from 141 to 160.&lt;br /&gt;
::5.	OC Downtime: This is 0. There is no on condition task in this example. &lt;br /&gt;
::6.	Total Downtime:  This is 100 hours. &lt;br /&gt;
:::a)	This includes all of the above downtimes plus the 20 hours (100 to 120) and the 2 hours (298 to 300) that the system was down due to the undiscovered failure of block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[Image:R32.png|center|600px|link=]]&lt;br /&gt;
&lt;br /&gt;
====System Metrics====&lt;br /&gt;
::1.	Mean Availability (All Events): &lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{300-100}{300}=0.6667\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
::2.	Mean Availability (w/o PM &amp;amp; Inspection):&lt;br /&gt;
:::a)	This is due to the CM downtime of 58, the undiscovered downtime of 22 and the inspection downtime of 1, or: &lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{300-(58+22+1)}{300}=0.73\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
:::b)	It should be noted that the inspection downtime was included even though the definition was &amp;quot;w/o PM &amp;amp; Inspection.&amp;quot;  The reason for this is that the inspection did not cause the downtime in this case.  Only downtimes caused by the PM or inspections are excluded.  &lt;br /&gt;
::3. Point Availability and Reliability at 300 are zero because the system was down at 300.&lt;br /&gt;
::4.	Expected Number of Failures is 3.  &lt;br /&gt;
:::a)	The system failed at 100, 163 and 298.&lt;br /&gt;
::5.	The standard deviation of the number of failures is 0.&lt;br /&gt;
::6.	The MTTFF is 100 because the example is deterministic.&lt;br /&gt;
&lt;br /&gt;
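The downtime bookkeeping behind these metrics can be checked with a few lines of arithmetic (illustrative only).&lt;br /&gt;
&lt;br /&gt;
```python
# Verifying the downtime accounting and availability figures (illustrative).
SIM_END = 300
cm, inspection, pm = 58, 1, 19
undiscovered = 20 + 2        # 100-120 and 298-300: failed but not yet detected
total_down = cm + inspection + pm + undiscovered
assert total_down == 100

mean_availability_all = (SIM_END - total_down) / SIM_END
assert round(mean_availability_all, 4) == 0.6667

# "w/o PM & Inspection" excludes only downtime *caused* by PM or inspections;
# the 1-hour inspection did not cause the downing here, so it stays included.
down_wo_pm_insp = cm + undiscovered + inspection   # 81 hours
assert round((SIM_END - down_wo_pm_insp) / SIM_END, 2) == 0.73
```
&lt;br /&gt;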
====The System Downing Events====&lt;br /&gt;
::1.	Number of Failures is 3.&lt;br /&gt;
:::a)	The first is the failure of block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, the second is the failure of block &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; and the third is the failure of block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
::2.	Number of CMs is 2.  &lt;br /&gt;
:::a)	The first is the CM on block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and the second is the CM on block &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
::3.	Number of Inspections is 1.&lt;br /&gt;
::4.	Number of PMs is 1.&lt;br /&gt;
::5.	Total Events are 6.  These are events that the downtime can be attributed to.  Specifically, the following events were observed:&lt;br /&gt;
:::a)	The failure of block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at 100.  &lt;br /&gt;
:::b)	Inspection on block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at 120.&lt;br /&gt;
:::c)	The CM action on block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:::d)	The PM action on block &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; (after &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; was fixed).&lt;br /&gt;
:::e)	The failure of block &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; at 163.&lt;br /&gt;
:::f)	The failure of block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at 298.&lt;br /&gt;
&lt;br /&gt;
====Block Details====&lt;br /&gt;
The details for blocks &amp;lt;math&amp;gt;A,B,C,D\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; are shown below.&lt;br /&gt;
&lt;br /&gt;
[[Image:r33.png|center|600px| Block details for this example.|link=]]&lt;br /&gt;
&lt;br /&gt;
We will discuss some of these results.  First note that there are four downing events on block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; : initial failure, inspection and CM, plus the last failure at 298.  All others have just one.  Also, block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; had a total downtime of &amp;lt;math&amp;gt;41+2\,\!&amp;lt;/math&amp;gt;, giving it a mean availability of 0.8567.  The first time-to-failure for block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; occurred at 100 while the second occurred after &amp;lt;math&amp;gt;298-141=157\,\!&amp;lt;/math&amp;gt; hours of operation, yielding an average time between failures (MTBF) of &amp;lt;math&amp;gt;257/2=128.5\,\!&amp;lt;/math&amp;gt;. (Note that this is the same as uptime/failures.)  Block &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; never failed, so its MTBF cannot be determined.  Furthermore,  MTBDE for each item is determined by dividing the block&#039;s uptime by the number of events.  The RS FCI and RS DECI metrics are obtained by looking at the SD Failures and SD Events of the item and the number of system failures and events.  Specifically, the only items that caused system failure are blocks &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; ; &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at 100 and 298 and &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; at 163.  It is important to note that even though one could argue that block &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; alone did not cause the failure ( &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; were also failed), the downing was attributed to &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; because the system reached a failed state only when block &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; failed.  &lt;br /&gt;
&lt;br /&gt;
On the number of inspections, which were scheduled every 30 hours, nine occurred for block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; [30, 60, 90, 120, 150, 180, 210, 240, 270] and eight for block &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt;.  Block &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; did not get inspected at 150 because block &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; was undergoing a PM action at that time.&lt;br /&gt;
&lt;br /&gt;
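Block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;&#039;s availability and MTBF figures can be reproduced from the event log (illustrative arithmetic only).&lt;br /&gt;
&lt;br /&gt;
```python
# Block A's mean availability and MTBF from the event log (illustrative).
SIM_END = 300
downtime_A = 41 + 2                 # maintenance downtime plus the final 2 hours down
uptime_A = SIM_END - downtime_A     # 257 hours
assert round(uptime_A / SIM_END, 4) == 0.8567

failures_A = 2                          # failures at 100 and at 298
assert uptime_A / failures_A == 128.5   # MTBF = uptime / number of failures
```
&lt;br /&gt;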
====Crew Details====&lt;br /&gt;
&lt;br /&gt;
The figure below shows the crew results.&lt;br /&gt;
&lt;br /&gt;
[[Image:r34.png|center|400px| Crew details for this example.|link=]]&lt;br /&gt;
&lt;br /&gt;
Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; received a total of six calls and accepted three.  Specifically,&lt;br /&gt;
&lt;br /&gt;
::#At 121, the crew was called by block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and the call was accepted.&lt;br /&gt;
::#At 121, block &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; also called for its PM action and was rejected.  Block &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; then called crew &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, which accepted the call.&lt;br /&gt;
::#At 161, block &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; called crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.  Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; accepted.&lt;br /&gt;
::#At 162, block &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; called crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.  Crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; rejected and block &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; called crew &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, which accepted the call.&lt;br /&gt;
::#At 163, block &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; called crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and then crew &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and both rejected.  Block &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; then waited until a crew became available at 201 and called that crew again.  This was crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, which accepted.&lt;br /&gt;
&lt;br /&gt;
The total wait time is the time that blocks had to wait for the maintenance crew.  Block &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; is the only component that waited, waiting 38 hours for crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Also, the costs for crew &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; were 1 per unit time and 10 per incident, thus the total costs were 100 + 30.  The costs for Crew &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; were 2 per unit time and 20 per incident, thus the total costs were 156 + 40.&lt;br /&gt;
&lt;br /&gt;
====Pool Details====&lt;br /&gt;
The figure below shows the spare part pool results.&lt;br /&gt;
&lt;br /&gt;
[[Image:r35.png|center|300px| Pool details for this example.|link=]]&lt;br /&gt;
&lt;br /&gt;
The pool started with a stock level of 1 and ended up with 2.  Specifically,&lt;br /&gt;
&lt;br /&gt;
::#At 121, the pool dispensed a part to block &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and ordered another to arrive at 181.&lt;br /&gt;
::#At 121, it dispensed a part to block &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; and ordered another to arrive at 181.&lt;br /&gt;
::#At 150, a scheduled part arrived to restock the pool.&lt;br /&gt;
::#At 161 the pool dispensed a part to block &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; and ordered another to arrive at 221.&lt;br /&gt;
::#At 181, it dispensed a part to block &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and ordered another to arrive at 222.&lt;br /&gt;
::#At 221, it dispensed a part to block &amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; and ordered another to arrive at 223.&lt;br /&gt;
::#The 222 and 223 arrivals remained in stock until the end of the simulation.&lt;br /&gt;
&lt;br /&gt;
Overall, five parts were dispensed. Blocks had to wait a total of 126 hours to receive parts (B: 181-161=20, C: 181-162=19, D: 150-121=29  and  F: 221-163=58).&lt;br /&gt;
&lt;br /&gt;
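The wait-time accounting can be checked directly (illustrative only).&lt;br /&gt;
&lt;br /&gt;
```python
# Total time blocks waited on the spare part pool (illustrative).
waits = {
    "B": 181 - 161,   # part needed at 161, available at 181
    "C": 181 - 162,
    "D": 150 - 121,
    "F": 221 - 163,
}
assert waits == {"B": 20, "C": 19, "D": 29, "F": 58}
assert sum(waits.values()) == 126
```
&lt;br /&gt;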
=Subdiagrams and Multi Blocks in Simulation=&lt;br /&gt;
&lt;br /&gt;
Any subdiagrams and multi blocks that may be present in the BlockSim RBD are expanded and/or merged into a single diagram before the system is simulated.  As an example, consider the system shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:r38.png|center|350px| A system made up of three subsystems, A, B, and C.|link=]]&lt;br /&gt;
&lt;br /&gt;
BlockSim will internally merge the system into a single diagram before the simulation, as shown in the figure below.  This means that all the failure and repair properties of the items in the subdiagrams are also considered.&lt;br /&gt;
&lt;br /&gt;
[[Image:r39.png|center|350px| The simulation engine view of the system and subdiagrams|link=]]&lt;br /&gt;
 &lt;br /&gt;
In the case of multi blocks, the blocks are also fully expanded before simulation.  This means that, unlike with the analytical solution, the execution speed and memory requirements for a multi block representing ten blocks in series are identical to those for ten individual blocks in series.&lt;br /&gt;
&lt;br /&gt;
=Containers in Simulation=&lt;br /&gt;
===Standby Containers===&lt;br /&gt;
When you simulate a diagram that contains a standby container, the container acts as the switch mechanism (as shown below) in addition to defining the standby relationships and the number of active units that are required. The container&#039;s failure and repair properties are really those of the switch itself. The switch can fail according to a distribution, either while waiting to switch or during the switch action. Repair properties restore the switch regardless of how the switch failed. Failure of the switch itself does not bring the container down because the switch is not really needed unless called upon to switch. The container will go down if the units within the container fail or if the switch is failed when a switch action is needed. The restoration time for this is based on the repair distributions of the contained units and the switch. Furthermore, the container is down during a switch process that has a delay.  &lt;br /&gt;
&lt;br /&gt;
[[Image:8.43.png|center|500px| The standby container acts as the switch, thus the failure distribution of the container is the failure distribution of the switch. The container can also fail when called upon to switch.|link=]]&lt;br /&gt;
&lt;br /&gt;
[[Image:8_43_1_new.png|center|150px|link=]]&lt;br /&gt;
&lt;br /&gt;
To better illustrate this, consider the following deterministic case.&lt;br /&gt;
&lt;br /&gt;
::#Units &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; are contained in a standby container.&lt;br /&gt;
::#The standby container is the only item in the diagram, thus failure of the container is the same as failure of the system.  &lt;br /&gt;
::#&amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is the active unit and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is the standby unit.  &lt;br /&gt;
::#Unit &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; fails every 100 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; (active) and takes 10 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; to repair.  &lt;br /&gt;
::#&amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; fails every 3 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; (active) and also takes 10 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; to repair.  &lt;br /&gt;
::#The units cannot fail while in quiescent (standby) mode.  &lt;br /&gt;
::#Furthermore, assume that the container (acting as the switch) fails every 30 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; while waiting to switch and takes 4 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; to repair. If not failed, the container switches with 100% probability.  &lt;br /&gt;
::#The switch action takes 7 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; to complete.&lt;br /&gt;
::#After repair, unit &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is always reactivated.  &lt;br /&gt;
::#The container does not operate through system failure and thus the components do not either.  &lt;br /&gt;
&lt;br /&gt;
Keep in mind that we are tracking two events on the container: the container being down and the container switch being down.&lt;br /&gt;
&lt;br /&gt;
The system event log is shown in the figure below and is as follows:&lt;br /&gt;
&lt;br /&gt;
[[Image:BS8.44.png|center|600px| The system behavior using a standby container.|link=]]&lt;br /&gt;
&lt;br /&gt;
::#At 30, the switch fails and gets repaired by 34.  The container switch is failed and being repaired; however, the container is up during this time.&lt;br /&gt;
::#At 64, the switch fails and gets repaired by 68.  The container is up during this time.&lt;br /&gt;
::#At 98, the switch fails.  It will be repaired by 102.&lt;br /&gt;
::#At 100, unit &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; fails.  Unit &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; attempts to activate the switch to go to &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; ; however, the switch is failed.&lt;br /&gt;
::#At 102, the switch is operational.&lt;br /&gt;
::#From 102 to 109, the switch is in the process of switching from unit &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; to unit &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;.  The container and system are down from 100 to 109.&lt;br /&gt;
::#By 110, unit &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is fixed and the system is switched back to &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;.  The return switch action brings the container down for 7 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;, from 110 to 117.  During this time, note that unit &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; has only functioned for 1 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;, 109 to 110.&lt;br /&gt;
::#At 146, the switch fails and gets repaired by 150.  The container is up during this time.&lt;br /&gt;
::#At 180, the switch fails and gets repaired by 184.  The container is up during this time.&lt;br /&gt;
::#At 214, the switch fails and gets repaired by 218.  &lt;br /&gt;
::#At 217, unit &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; fails.  The switch is failed at this time.&lt;br /&gt;
::#At 218, the switch is operational and the system is switched to unit &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; within 7 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;.  The container is down from 218 to 225.&lt;br /&gt;
::#At 225, unit &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; takes over.  After 2 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; of operation at 227, unit &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; fails.  It will be restored by 237.  &lt;br /&gt;
::#At 227, unit &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is repaired and the switchback action to unit &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is initiated.  By 234, the system is up.&lt;br /&gt;
::#At 262, the switch fails and gets repaired by 266.  The container is up during this time.&lt;br /&gt;
::#At 296, the switch fails and gets repaired by 300.  The container is up during this time.&lt;br /&gt;
&lt;br /&gt;
The system results are shown in the figure below and discussed next.&lt;br /&gt;
[[Image:BS8.45.png|center|600px| System overview results.|link=]]&lt;br /&gt;
&lt;br /&gt;
::1.	System CM Downtime is 24.  &lt;br /&gt;
:::a)	CM downtime includes all downtime due to failures as well as the delay in switching from a failed active unit to a standby unit.  It does not include the switchback time from the standby to the restored active unit.  Thus, the times from 100 to 109, 217 to 225 and 227 to 234 are included.  The time to switchback, 110 to 117, is not included.&lt;br /&gt;
::2.	System Total Downtime is 31.  &lt;br /&gt;
:::a)	It includes the CM downtime and the switchback downtime.&lt;br /&gt;
::3.	Number of System Failures is 3.  &lt;br /&gt;
:::a)	It includes the failures at 100, 217 and 227.  &lt;br /&gt;
:::b)	This is the same as the number of CM downing events.  &lt;br /&gt;
::4.	The Total Downing Events are 4.  &lt;br /&gt;
:::a)	This includes the switchback downing event at 110.&lt;br /&gt;
::5.	The Mean Availability (w/o PM and Inspection) does not include the downtime due to the switchback event.&lt;br /&gt;
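&lt;br /&gt;
As a cross-check, these metrics can be reproduced by tallying the downing intervals narrated in the event log above. The following is a minimal illustrative sketch (not BlockSim&#039;s engine); the interval list is transcribed from the log and the 300 tu end time is taken from the deterministic example.&lt;br /&gt;
&lt;br /&gt;
```python
# Downing intervals transcribed from the deterministic event log above.
# "cm" intervals are failures plus the delay of switching TO the standby
# unit; "switchback" is the return switch to the restored active unit.
intervals = [
    (100, 109, "cm"),          # unit A failed; switch repair plus 7 tu switch delay
    (110, 117, "switchback"),  # return switch to restored unit A
    (217, 225, "cm"),          # unit A failed; switch repair plus switch delay
    (227, 234, "cm"),          # unit B failed; switchback to restored unit A
]

end_time = 300.0  # the deterministic example runs from 0 to 300 tu

cm_downtime = sum(e - s for s, e, c in intervals if c == "cm")
total_downtime = sum(e - s for s, e, c in intervals)
system_failures = sum(1 for s, e, c in intervals if c == "cm")
total_downing_events = len(intervals)

# Mean availability (w/o PM and inspection) excludes the switchback downtime
mean_availability = (end_time - cm_downtime) / end_time

print(cm_downtime, total_downtime, system_failures, total_downing_events)
# 24 31 3 4
```
Classifying the switchback interval separately is what distinguishes the CM downtime (24) from the total downtime (31).&lt;br /&gt;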
&lt;br /&gt;
====Additional Rules and Assumptions for Standby Containers====&lt;br /&gt;
&lt;br /&gt;
::1)	A container will only attempt to switch if there is an available non-failed item to switch to.  If there is no such item, the switch action is deferred until an item becomes available. The deferred action is canceled if the failed active unit is restored before an item becomes available.  &lt;br /&gt;
:::a)	As an example, consider the case of unit &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; failing active while unit &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; failed in a quiescent mode.  If unit &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; gets restored before unit &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, then the switch will be initiated.  If unit &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is restored before unit &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, the switch action will not occur.&lt;br /&gt;
::2)	In cases where not all active units are required, a switch will only occur if the failed combination causes the container to fail.  &lt;br /&gt;
:::a)	For example, if &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; are in a container for which one unit is required to be operating and &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; are active with &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; on standby, then the failure of either &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; will not cause a switching action. The container will switch to &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; only if both &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; are failed.&lt;br /&gt;
::3)	If the container switch is failed and a switching action is required, the switching action will occur after the switch has been restored if it is still required (i.e., if the active unit is still failed).&lt;br /&gt;
::4)	If a switch fails during the delay time of the switching action based on the reliability distribution (quiescent failure mode), the action is still carried out unless a failure based on the switch probability/restarts occurs when attempting to switch.  &lt;br /&gt;
::5)	During switching events, the change from the operating to quiescent distribution (and vice versa) occurs at the end of the delay time.&lt;br /&gt;
::6)	The option of whether components operate while the system is down is now defined at the component level. (This differs from BlockSim 7, in which the contained items inherited this option from the container.) Two rules apply:&lt;br /&gt;
:::a)	If a path inside the container is down, blocks inside the container that are in that path do not continue to operate.&lt;br /&gt;
:::b)	Blocks that are up do not continue to operate while the container is down.&lt;br /&gt;
::7)	A switch can have a repair distribution and maintenance properties without having a reliability distribution.  &lt;br /&gt;
:::a)	This is because maintenance actions are performed regardless of whether the switch failed while waiting to switch (reliability distribution) or during the actual switching process (fixed probability).&lt;br /&gt;
::8)	A switch fails during switching when the restarts are exhausted.&lt;br /&gt;
::9)	A restart is executed every time the switch fails to switch (based on its fixed probability of switching).&lt;br /&gt;
::10)	If a delay is specified, restarts happen after the delay.&lt;br /&gt;
::11)	If a container brings the system down, the container is responsible for the system going down (not the blocks inside the container).&lt;br /&gt;
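&lt;br /&gt;
Rules 8 through 10 can be sketched as a small routine. This is a hypothetical illustration, with assumed function and parameter names, of how a fixed switch probability, a restart count and a delay interact; it is not BlockSim&#039;s actual implementation.&lt;br /&gt;
&lt;br /&gt;
```python
import random

def attempt_switch(p_switch, restarts, delay, rng=random.Random(0)):
    """Hypothetical sketch of rules 8 through 10: the first attempt happens
    after the delay (rule 10); each failed attempt, per the fixed switch
    probability, consumes one restart (rule 9); the switch fails during
    switching once the restarts are exhausted (rule 8).
    Returns (succeeded, elapsed_time)."""
    elapsed = delay                   # rule 10: restarts happen after the delay
    attempts_allowed = restarts + 1   # the initial attempt plus the restarts
    for _ in range(attempts_allowed):
        if p_switch > rng.random():   # succeeds with the fixed probability
            return True, elapsed
        # rule 9: a restart is executed on each failed attempt
    return False, elapsed             # rule 8: restarts exhausted

# A switch with 100% probability succeeds after the 7 tu delay:
print(attempt_switch(1.0, restarts=2, delay=7.0))  # (True, 7.0)
# A switch with 0% probability exhausts its restarts and fails:
print(attempt_switch(0.0, restarts=2, delay=7.0))  # (False, 7.0)
```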
&lt;br /&gt;
===Load Sharing Containers===&lt;br /&gt;
&lt;br /&gt;
When you simulate a diagram that contains a load sharing container, the container defines the load that is shared. A load sharing container has no failure or repair distributions. The container itself is considered failed if all the blocks inside the container have failed (or if fewer than &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; blocks are operating in a &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; -out-of- &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; configuration).&lt;br /&gt;
&lt;br /&gt;
To illustrate this, consider the following container with items &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; in a load sharing redundancy.&lt;br /&gt;
&lt;br /&gt;
Assume that &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; fails every 100 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; every 120 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; if both items are operating and they fail in half that time if either is operating alone (i.e., the items age twice as fast when operating alone).  They both get repaired in 5 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[Image:8.46.png|center|600px| Behavior of a simple load sharing system.|link=]]&lt;br /&gt;
&lt;br /&gt;
The system event log is shown in the figure above and is as follows:&lt;br /&gt;
&lt;br /&gt;
::1.	At 100, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; fails.  It takes 5 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; to restore &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
::2.	From 100 to 105, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is operating alone and is experiencing a higher load.&lt;br /&gt;
::3.	At 115, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; fails.  &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; would normally be expected to fail at 120; however:  &lt;br /&gt;
:::a)	From 0 to 100, it accumulated the equivalent of 100 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; of damage.&lt;br /&gt;
:::b)	From 100 to 105, it accumulated 10 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; of damage, which is twice the damage since it was operating alone.  Put another way, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; aged by 10 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; over a period of 5 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:::c)	At 105, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is restored but &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; has only 10 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; of life remaining at this point.&lt;br /&gt;
:::d)	 &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; fails at 115.&lt;br /&gt;
::4.	At 120, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; is repaired.&lt;br /&gt;
::5.	At 200, &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; fails again.  &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; would normally be expected to fail at 205; however, the failure of &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; at 115 to 120 added additional damage to &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.  In other words, the age of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at 115 was 10; by 120 it was 20.  Thus it reached an age of 100 at time 200, 95 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; after its restoration at 105.  &lt;br /&gt;
::6.	 &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is restored by 205.&lt;br /&gt;
::7.	At 235, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; fails.  &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; would normally be expected to fail at 240; however, the failure of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; at 200 caused the reduction.&lt;br /&gt;
:::a)	At 200, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; had an age of 80.&lt;br /&gt;
:::b)	By 205, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; had an age of 90.&lt;br /&gt;
:::c)	 &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; fails 30 &amp;lt;math&amp;gt;tu\,\!&amp;lt;/math&amp;gt; later at 235.&lt;br /&gt;
::8.	The system itself never failed.&lt;br /&gt;
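&lt;br /&gt;
The damage accumulation logic narrated above can be sketched as a small deterministic simulation. The function below is an illustrative assumption (not BlockSim&#039;s engine): each unit accumulates age at rate 1 when both units are up and at rate 2 when it operates alone, and a repair returns a unit to an age of zero.&lt;br /&gt;
&lt;br /&gt;
```python
def simulate_load_share(horizon=240.0):
    """Deterministic sketch of the two-unit load sharing behavior above.
    Assumptions: lives of 100 tu (A) and 120 tu (B) at shared load, aging
    at twice the rate when operating alone, 5 tu repairs, no failures
    while down, and repair restores a unit to as good as new.
    Returns the list of (time, unit) failure events."""
    life = {"A": 100.0, "B": 120.0}
    age = {"A": 0.0, "B": 0.0}
    up = {"A": True, "B": True}
    repair_done = {}
    failures = []
    t = 0.0
    while horizon > t:
        n_up = sum(up.values())
        rate = 1.0 if n_up == 2 else 2.0  # items age twice as fast alone
        events = {}
        for u in "AB":
            if up[u]:
                events[u] = t + (life[u] - age[u]) / rate  # projected failure
            else:
                events[u] = repair_done[u]                 # repair completion
        t_next = min(min(events.values()), horizon)
        for u in "AB":
            if up[u]:
                age[u] += rate * (t_next - t)  # accumulate damage while up
        t = t_next
        for u in "AB":
            if events[u] == t and horizon > t:
                if up[u]:
                    up[u] = False
                    repair_done[u] = t + 5.0
                    failures.append((t, u))
                else:
                    up[u] = True
                    age[u] = 0.0  # repaired to as good as new
    return failures

print(simulate_load_share())
# [(100.0, 'A'), (115.0, 'B'), (200.0, 'A'), (235.0, 'B')]
```
Running the sketch reproduces the narrated failure times of 100 and 200 for &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and 115 and 235 for &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, and the system itself never fails because the two units are never down simultaneously.&lt;br /&gt;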
&lt;br /&gt;
====Additional Rules and Assumptions for Load Sharing Containers====&lt;br /&gt;
&lt;br /&gt;
::1.	The option of whether components operate while the system is down is now defined at the component level. (This differs from BlockSim 7, in which the contained items inherited this option from the container.) Two rules apply:&lt;br /&gt;
:::a)	If a path inside the container is down, blocks inside the container that are in that path do not continue to operate.&lt;br /&gt;
:::b)	Blocks that are up do not continue to operate while the container is down.&lt;br /&gt;
::2.	If a container brings the system down, the block that brought the container down is responsible for the system going down.  (This is the opposite of standby containers.)&lt;br /&gt;
&lt;br /&gt;
=State Change Triggers=&lt;br /&gt;
{{:State Change Triggers}}&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
&lt;br /&gt;
Even though the examples and explanations presented here are deterministic, the sequence of events and logic used to view the system is the same as the one that would be used during simulation.  The difference is that the process would be repeated multiple times during simulation and the results presented would be the average results over the multiple runs.&lt;br /&gt;
&lt;br /&gt;
Additionally, multiple metrics and results are presented and defined in this chapter.  Many of these results can also be used to obtain additional metrics not explicitly given in BlockSim&#039;s Simulation Results Explorer.  As an example, to compute mean availability with inspections but without PMs, the explicit downtimes given for each event could be used.  Furthermore, all of the results given are for operating times starting at zero to a specified end time (although the components themselves could have been defined with a non-zero starting age).  Results for a starting time other than zero could be obtained by running two simulations and looking at the difference in the detailed results where applicable.   As an example, the difference in uptimes and downtimes can be used to determine availabilities for a specific time window.&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=User:Sharon_Honecker/BasicStatBackgroundv11&amp;diff=64911</id>
		<title>User:Sharon Honecker/BasicStatBackgroundv11</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=User:Sharon_Honecker/BasicStatBackgroundv11&amp;diff=64911"/>
		<updated>2017-02-06T21:53:09Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Mean Remaining Life */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Conditional Reliability Function===&lt;br /&gt;
Conditional reliability is the probability of successfully completing another mission following the successful completion of a previous mission. The time of the previous mission and the time for the mission to be undertaken must be taken into account for conditional reliability calculations. The conditional reliability function is given by:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::&amp;lt;math&amp;gt;R(T,t)=\frac{R(T+t)}{R(T)}\ \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mean Life (MTTF)===&lt;br /&gt;
&lt;br /&gt;
The mean life function, which provides a measure of the average time of operation to failure of a new component, is given by: &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::&amp;lt;math&amp;gt;\overline{T}=m=\int_{0}^{\infty} t\cdot f(t)dt=\int_{0}^{\infty} R(t)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the expected or average time-to-failure and is denoted as the MTTF (Mean Time To Failure).  &lt;br /&gt;
&lt;br /&gt;
The MTTF, even though an index of reliability performance, does not give any information on the failure distribution of the component in question when dealing with most lifetime distributions. Because vastly different distributions can have identical means, it is unwise to use the MTTF as the sole measure of the reliability of a component.&lt;br /&gt;
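&lt;br /&gt;
The two functions above lend themselves to a quick numerical illustration. The sketch below (an illustrative assumption, using Weibull reliability functions) integrates R(t) numerically to obtain the MTTF, shows two quite different distributions sharing the same mean, and shows the memoryless special case of the conditional reliability function.&lt;br /&gt;
&lt;br /&gt;
```python
import math

def weibull_R(t, beta, eta):
    """Weibull reliability function R(t)."""
    return math.exp(-((t / eta) ** beta))

def conditional_R(T, t, beta, eta):
    """Conditional reliability R(T, t) = R(T + t) / R(T)."""
    return weibull_R(T + t, beta, eta) / weibull_R(T, beta, eta)

def mttf(R, upper=1.0e4, steps=200000):
    """MTTF as the numerical integral of R(t) (trapezoidal rule)."""
    h = upper / steps
    total = 0.5 * (R(0.0) + R(upper))
    for i in range(1, steps):
        total += R(i * h)
    return total * h

# Two very different Weibull distributions with the same mean, since
# MTTF = eta * Gamma(1 + 1/beta):
m1 = mttf(lambda t: weibull_R(t, 1.0, 100.0))  # exponential special case
m2 = mttf(lambda t: weibull_R(t, 3.0, 100.0 / math.gamma(1 + 1 / 3.0)))
print(round(m1, 1), round(m2, 1))  # both close to 100

# For beta = 1 (exponential), surviving T does not change the reliability
# of the next mission (the memoryless property):
print(round(conditional_R(50.0, 50.0, 1.0, 100.0), 6)
      == round(weibull_R(50.0, 1.0, 100.0), 6))  # True
```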
&lt;br /&gt;
===Mean Remaining Life===&lt;br /&gt;
&lt;br /&gt;
The mean remaining life function, which provides a measure of the average time of operation to failure of a component following the successful completion of a previous mission, is given by:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
::&amp;lt;math&amp;gt;L(T)=\int_{0}^{\infty} R(T,t)dt= \frac{\int_{0}^{\infty} R(T+t)dt}{R(T)}\ \,\!&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Appendix_B:_Parameter_Estimation&amp;diff=64909</id>
		<title>Appendix B: Parameter Estimation</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Appendix_B:_Parameter_Estimation&amp;diff=64909"/>
		<updated>2017-02-06T16:53:15Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Example of Graphical Method for Accelerated Life Data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK SUB|Appendix B|Parameter Estimation}}&lt;br /&gt;
 &lt;br /&gt;
This appendix presents two methods for estimating the parameters of accelerated life test data analysis models (ALTA models). The graphical method, which is based on probability plotting or least squares (Rank Regression on X or Rank Regression on Y), has some limitations. Therefore, the Maximum Likelihood Estimation (MLE) method is used for all parameter estimation in ALTA. &lt;br /&gt;
&lt;br /&gt;
=Graphical Method=&lt;br /&gt;
The graphical method for estimating the parameters of accelerated life data involves generating two types of plots. First, the life data at each individual stress level are plotted on a probability paper appropriate to the assumed life distribution (i.e., Weibull, exponential, or lognormal). This can be done using either [[Parameter_Estimation#Probability_Plotting|Probability Plotting]] or [[Parameter_Estimation#Least_Squares_Parameter_Estimation|Least Squares (Rank Regression)]]. &lt;br /&gt;
&lt;br /&gt;
The parameters of the distribution at each stress level are then estimated from the plot. Once these parameters have been estimated at each stress level, the second plot is created on a paper that linearizes the assumed life-stress relationship (e.g., Arrhenius, inverse power law, etc.). To do this, a life characteristic must be chosen to be plotted. The life characteristic can be any percentile, such as BX% life, the scale parameter, mean life, etc. The plotting paper used is a special type of paper that linearizes the life-stress relationship. For example, a log-log paper linearizes the inverse power law relationship, and a log-reciprocal paper linearizes the Arrhenius relationship. The parameters of the model are then estimated by solving for the slope and the intercept of the line. &lt;br /&gt;
&lt;br /&gt;
[[Image:ALTAB.1.png|center|600px]]&lt;br /&gt;
[[Image:ALTAB.1.1.png|center|600px]]&lt;br /&gt;
&lt;br /&gt;
==Example of Graphical Method for Accelerated Life Data==&lt;br /&gt;
Consider the following times-to-failure data at three different stress levels.&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|Stress||393 psi||	408 psi||	423 psi&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;10&amp;quot; style=&amp;quot;text-align:center&amp;quot;|Time Failed (hrs)||   3450||     3300||   2645&lt;br /&gt;
|-&lt;br /&gt;
|4340|| 3720||  3100&lt;br /&gt;
|-&lt;br /&gt;
|4760 ||4180 ||3400&lt;br /&gt;
|-&lt;br /&gt;
|5320||4560||3800 &lt;br /&gt;
|-&lt;br /&gt;
| 5740||4920 || 4100&lt;br /&gt;
|-&lt;br /&gt;
|6160||5280|| 4400&lt;br /&gt;
|-&lt;br /&gt;
|6580||5640||4700&lt;br /&gt;
|-&lt;br /&gt;
|7140 ||  6233 || 5100&lt;br /&gt;
|-&lt;br /&gt;
|8101||6840||5700&lt;br /&gt;
|-&lt;br /&gt;
|8960 || 7380 ||6400&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Estimate the parameters for a Weibull assumed life distribution and for the inverse power law life-stress relationship.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First the parameters of the Weibull distribution need to be determined. The data are individually analyzed (for each stress level) using the probability plotting method, or software such as ReliaSoft&#039;s Weibull++, with the following results:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\widehat{\beta }}_{1}}=3.8,\text{ }{{\widehat{\eta }}_{1}}=6692 \\ &lt;br /&gt;
 &amp;amp; {{\widehat{\beta }}_{2}}=4.2,\text{ }{{\widehat{\eta }}_{2}}=5716 \\ &lt;br /&gt;
 &amp;amp; {{\widehat{\beta }}_{3}}=4.0,\text{ }{{\widehat{\eta }}_{3}}=4774  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\widehat{\beta }}_{1}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{\widehat{\eta }}_{1}}\,\!&amp;lt;/math&amp;gt; are the parameters of the 393 psi data.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\widehat{\beta }}_{2}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{\widehat{\eta }}_{2}}\,\!&amp;lt;/math&amp;gt; are the parameters of the 408 psi data.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\widehat{\beta }}_{3}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{\widehat{\eta }}_{3}}\,\!&amp;lt;/math&amp;gt; are the parameters of the 423 psi data.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTAProbabilityplot.png|center|600px|]]&lt;br /&gt;
&lt;br /&gt;
Since the shape parameter, &amp;lt;math&amp;gt;\beta ,\,\!&amp;lt;/math&amp;gt; is not common for the three stress levels, the average value is estimated. &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\widehat{\beta }}_{common}}=4\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Averaging the betas is one of many simple approaches available. One can also use a weighted average, since the uncertainty on beta is greater for smaller sample sizes. In most practical applications the value of &amp;lt;math&amp;gt;\widehat{\beta }\,\!&amp;lt;/math&amp;gt; will vary (even though it is assumed constant) due to sampling error, etc. The variability in the value of &amp;lt;math&amp;gt;\widehat{\beta }\,\!&amp;lt;/math&amp;gt; is a source of error when performing analysis by averaging the betas. MLE analysis, which uses a common &amp;lt;math&amp;gt;\widehat{\beta }\,\!&amp;lt;/math&amp;gt;, is not susceptible to this error. MLE analysis is the method of parameter estimation used in ALTA and it is explained in the next section.&lt;br /&gt;
&lt;br /&gt;
Redraw each line with &amp;lt;math&amp;gt;\widehat{\beta }=4\,\!&amp;lt;/math&amp;gt; and estimate the new etas, as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{\widehat{\eta }}_{1}}= &amp;amp; 6650 \\ &lt;br /&gt;
 &amp;amp; {{\widehat{\eta }}_{2}}= &amp;amp; 5745 \\ &lt;br /&gt;
 &amp;amp; {{\widehat{\eta }}_{3}}= &amp;amp; 4774  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTAProbabilityplot2.png|center|600px|]]&lt;br /&gt;
&lt;br /&gt;
The IPL relationship is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V)=\frac{1}{K{{V}^{n}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; represents a quantifiable life measure (eta  in the Weibull case), &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; represents the stress level, &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; is one of the parameters, and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is another model parameter. The relationship is linearized by taking the logarithm of both sides which yields: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\ln (L)=-\ln K-n\ln V &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;L=\eta \,\!&amp;lt;/math&amp;gt;, (&amp;lt;math&amp;gt;-\ln K)\,\!&amp;lt;/math&amp;gt; is the intercept, and (&amp;lt;math&amp;gt;-n)\,\!&amp;lt;/math&amp;gt; is the slope of the line.&lt;br /&gt;
&lt;br /&gt;
The values of eta obtained previously are now plotted on a log-linear scale yielding the following plot:&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTAlifevsstress.png|center|600px|]]&lt;br /&gt;
&lt;br /&gt;
The slope of the line is the &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; parameter, which is obtained from the plot:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; Slope=\ \frac{\ln ({{T}_{2}})-\ln ({{T}_{1}})}{\ln ({{V}_{2}})-\ln ({{V}_{1}})} =\ \frac{\ln (10,000)-\ln (6,000)}{\ln (360)-\ln (403)} =\ -4.5272  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{n}=4.5272\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Solving the inverse power law equation with respect to &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; yields: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\widehat{K}=\frac{1}{L{{V}^{n}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Substituting V=403, the corresponding L from the plot (L=6,000), and the previously estimated &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; yields: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \widehat{K}=\ \frac{1}{6000\cdot{{403}^{4.5272}}} =\ 2.67\cdot {{10}^{-16}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
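&lt;br /&gt;
Instead of reading the slope and intercept off the plot, the same two parameters can be estimated by a least squares fit of the linearized relationship to the (ln V, ln eta) pairs. The following is a minimal sketch using the three common-beta eta estimates obtained above; the results differ slightly from the plot-based values.&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Common-beta eta estimates from above, paired with their stress levels (psi)
stress = [393.0, 408.0, 423.0]
eta = [6650.0, 5745.0, 4774.0]

x = [math.log(v) for v in stress]  # ln(V)
y = [math.log(L) for L in eta]     # ln(L), with eta as the life measure L

N = len(x)
xbar = sum(x) / N
ybar = sum(y) / N

# ln(L) = -ln(K) - n ln(V): the slope is -n and the intercept is -ln(K)
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar

n_hat = -slope
K_hat = math.exp(-intercept)
print(round(n_hat, 2))  # roughly 4.5, close to the plot-based 4.5272
print(K_hat)            # on the order of 1e-16
```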
&lt;br /&gt;
==Comments on the Graphical Method==&lt;br /&gt;
Although the graphical method is simple, it is quite laborious. Furthermore, many issues surrounding its use require careful consideration. Some of these issues are presented next:&lt;br /&gt;
&lt;br /&gt;
*What happens when no failures are observed at one or more stress levels? In this case, plotting methods cannot be employed. Discarding the data would be a mistake, since every piece of life data information is important.  &lt;br /&gt;
&lt;br /&gt;
*In the step at which the life-stress relationship is linearized and plotted to obtain its parameters, you must be able to linearize the function, which is not always possible.&lt;br /&gt;
&lt;br /&gt;
*In real accelerated tests the data sets are small. Separating them and individually plotting them, and then subsequently replotting the results, increases the underlying error.&lt;br /&gt;
&lt;br /&gt;
*During initial parameter estimation, the parameter that is assumed constant will more than likely vary. What value do you use?&lt;br /&gt;
&lt;br /&gt;
*Confidence intervals on all of the results cannot be ascertained using graphical methods.&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood parameter estimation method described next overcomes these shortfalls, and is the method utilized in ALTA.&lt;br /&gt;
&lt;br /&gt;
=Maximum Likelihood Estimation (MLE) Method=&lt;br /&gt;
The idea behind maximum likelihood parameter estimation is to determine the parameters that maximize the probability (likelihood) of the sample data. From a statistical point of view, the method of maximum likelihood is considered to be more robust (with some exceptions) and yields estimators with good statistical properties. In other words, MLE methods are versatile and apply to most models and to different types of data. In addition, they provide efficient methods for quantifying uncertainty through confidence bounds. For a detailed discussion of this analysis method for a single life distribution, see [[Parameter_Estimation#Maximum_Likelihood_Parameter_Estimation|Maximum Likelihood Estimation]]. &lt;br /&gt;
&lt;br /&gt;
The maximum likelihood solution for accelerated life test data is formulated in the same way as described in [[Parameter_Estimation#Maximum_Likelihood_Parameter_Estimation|Maximum Likelihood Estimation]] for a single life distribution. However, in this case, the stress level of each individual observation is included in the likelihood function. Consider a continuous random variable &amp;lt;math&amp;gt;x(v),\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; is the stress. The  &#039;&#039;pdf&#039;&#039;  of the random variable now becomes a function of both &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; :&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
f(x,v;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}\,\!&amp;lt;/math&amp;gt; are &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; unknown constant parameters which need to be estimated. Conduct an experiment and obtain &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; independent observations, &amp;lt;math&amp;gt;{{x}_{1}},{{x}_{2}},...,{{x}_{N}}\,\!&amp;lt;/math&amp;gt; each at a corresponding stress, &amp;lt;math&amp;gt;{{v}_{1}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{v}_{2}},...,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{v}_{N}}\,\!&amp;lt;/math&amp;gt;. Then the likelihood function for complete data is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(({{x}_{1}},\text{ }{{v}_{1}}),({{x}_{2}},\text{ }{{v}_{2}}),...,({{x}_{N}},\text{ }{{v}_{N}})|{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})=\underset{i=1}{\overset{N}{\mathop \prod }}\,f({{x}_{i}},{{v}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
i=1,2,...,N &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logarithmic likelihood function is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda =\ln L=\underset{i=1}{\overset{N}{\mathop \sum }}\,\ln f({{x}_{i}},{{v}_{i}};{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood estimators (MLE) of &amp;lt;math&amp;gt;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}},\,\!&amp;lt;/math&amp;gt; are obtained by maximizing &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\Lambda .\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;math&amp;gt;{{\theta }_{1}},{{\theta }_{2}},...,{{\theta }_{k}}\,\!&amp;lt;/math&amp;gt; are the parameters of the combined model which includes the parameters of the life distribution and the parameters of the life-stress relationship. Note that in the above equations, &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the total number of observations. This means that the sample size is no longer broken into the number of observations at each stress level. In the graphical method example, the sample size at the stress level of 20V was 4, and 15 at 36V. Using the above equations, however, the test&#039;s sample size is 19.&lt;br /&gt;
&lt;br /&gt;
Once the parameters are estimated, they can be substituted back into the life distribution and the life-stress relationship.&lt;br /&gt;
&lt;br /&gt;
==Example of MLE for Accelerated Life Data==&lt;br /&gt;
The following example illustrates the use of the MLE method on accelerated life test data. Consider the inverse power law relationship, given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V)=\frac{1}{K{{V}^{n}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; represents a quantifiable life measure, &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; represents the stress level, and &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are model parameters to be estimated.&lt;br /&gt;
&lt;br /&gt;
Assume that the life at each stress follows a Weibull distribution, with a  &#039;&#039;pdf&#039;&#039;  given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t)=\frac{\beta }{\eta }{{\left( \frac{t}{\eta } \right)}^{\beta -1}}{{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the time-to-failure, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, is a function of stress, &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A common life measure needs to be determined so that it can be easily included in the Weibull &#039;&#039;pdf&#039;&#039;. In this case, setting &amp;lt;math&amp;gt;\eta =L(V)\,\!&amp;lt;/math&amp;gt; (which is the life at the 63.2% unreliability level) and substituting in the Weibull &#039;&#039;pdf&#039;&#039; yields the following IPL-Weibull &#039;&#039;pdf&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,V)=\beta K{{V}^{n}}{{\left( K{{V}^{n}}t \right)}^{\beta -1}}{{e}^{-{{\left( K{{V}^{n}}t \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The log-likelihood function for the complete data is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\Lambda =\ln L=\sum\limits_{i=1}^{N}{\ln \left( \beta K{{V}_{i}}^{n}{{\left( K{{V}_{i}}^{n}{{t}_{i}} \right)}^{\beta -1}}{{e}^{-{{\left( K{{V}_{i}}^{n}{{t}_{i}} \right)}^{\beta }}}} \right)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is now the common shape parameter to solve for, along with &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
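The maximization of this log-likelihood is usually carried out numerically. As a minimal illustrative sketch (not from the source: the failure times, voltages and starting values below are hypothetical, and SciPy's Nelder-Mead optimizer is assumed to be available), the IPL-Weibull parameters could be estimated as follows:

```python
import numpy as np
from scipy.optimize import minimize

def ipl_weibull_neg_loglik(params, t, V):
    """Negative log-likelihood of the IPL-Weibull model for complete data."""
    beta, K, n = params
    if beta <= 0 or K <= 0:
        return np.inf                      # keep the optimizer in the valid region
    z = K * V**n * t                       # the K*V^n*t term of the pdf
    # ln f = ln(beta) + ln(K*V^n) + (beta - 1)*ln(z) - z^beta
    return -np.sum(np.log(beta) + np.log(K * V**n)
                   + (beta - 1.0) * np.log(z) - z**beta)

# Hypothetical complete (no suspensions) failure times at two voltage levels
t = np.array([400.0, 520.0, 610.0, 700.0, 150.0, 190.0, 230.0, 290.0])
V = np.array([36.0, 36.0, 36.0, 36.0, 56.0, 56.0, 56.0, 56.0])

res = minimize(ipl_weibull_neg_loglik, x0=[1.5, 1e-4, 1.0],
               args=(t, V), method="Nelder-Mead")
beta_hat, K_hat, n_hat = res.x
```

With life decreasing at the higher voltage, the estimated exponent comes out positive, as the inverse power law expects.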
&lt;br /&gt;
=Conclusions=&lt;br /&gt;
&lt;br /&gt;
In this appendix, two methods for estimating the parameters of accelerated life testing models were presented. First, the graphical method was illustrated, using probability plotting to obtain the parameters of the life distribution. The parameters of the life-stress relationship were then estimated graphically by linearizing the model. However, not all life-stress relationships can be linearized. In addition, estimating the parameters of each individual distribution leads to an accumulation of uncertainties, depending on the number of failures and suspensions observed at each stress level. Furthermore, the slopes (shape parameters) of the individual distributions are rarely equal (common). Using the graphical method, one must estimate a common shape parameter (usually the average) and repeat the analysis. Doing so introduces further uncertainties into the estimates, and these uncertainties cannot be quantified. The second method, maximum likelihood estimation, treats the life distribution and the life-stress relationship as a single combined model whose parameters can be estimated from the complete likelihood function. In this way, a common shape parameter is estimated for the model, eliminating the uncertainties of averaging the individual shape parameters. All uncertainties are accounted for in the form of confidence bounds (presented in detail in [[Appendix_D:_Confidence_Bounds|Appendix D]]), which are quantifiable because they are obtained from the overall model.&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Accelerated_Life_Test_Plans&amp;diff=64908</id>
		<title>Accelerated Life Test Plans</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Accelerated_Life_Test_Plans&amp;diff=64908"/>
		<updated>2017-02-06T16:47:21Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Test Plans for Two Stress Types */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Template:ALTABOOK_SUB|Additional Tools|Accelerated Life Test Plans}}&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
Poor accelerated test plans waste time, effort and money and may not even yield the desired information. Before starting an accelerated test (which is sometimes an expensive and difficult endeavor), it is advisable to have a plan that helps in accurately estimating reliability at operating conditions while minimizing test time and costs. A test plan should be used to decide on the appropriate stress levels to use (for each stress type) and the number of test units to allocate to the different stress levels (for each combination of the different stress types&#039; levels). This section presents some common test plans for one-stress and two-stress accelerated tests.&lt;br /&gt;
&lt;br /&gt;
==General Assumptions==&lt;br /&gt;
&lt;br /&gt;
Most accelerated life testing plans use the following model and testing assumptions that correspond to many practical quantitative accelerated life testing problems.&lt;br /&gt;
&lt;br /&gt;
1. The log-time-to-failure for each unit follows a location-scale distribution such that:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{}{\overset{}{\mathop{\Pr }}}\,(Y\le y)=\Phi \left( \frac{y-\mu }{\sigma } \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt; are the location and scale parameters respectively and &amp;lt;math&amp;gt;\Phi \,\!&amp;lt;/math&amp;gt; ( &amp;lt;math&amp;gt;\cdot \,\!&amp;lt;/math&amp;gt; ) is the standard form of the location-scale distribution.&lt;br /&gt;
&lt;br /&gt;
2. Failure times for all test units, at all stress levels, are statistically independent.&lt;br /&gt;
&lt;br /&gt;
3. The location parameter &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; is a linear function of stress. Specifically, it is assumed that:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mu =\mu ({{z}_{1}})={{\gamma }_{0}}+{{\gamma }_{1}}x&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. The scale parameter, &amp;lt;math&amp;gt;\sigma ,\,\!&amp;lt;/math&amp;gt; does not depend on the stress levels. All units are tested until a pre-specified test time.&lt;br /&gt;
&lt;br /&gt;
5. Two of the most common models used in quantitative accelerated life testing are the linear Weibull and lognormal models. The Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y\sim SEV\left[ \mu (z)={{\gamma }_{0}}+{{\gamma }_{1}}x,\sigma  \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:where &amp;lt;math&amp;gt;SEV\,\!&amp;lt;/math&amp;gt; denotes the smallest extreme value distribution. The lognormal model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y\sim Normal\left[ \mu (z)={{\gamma }_{0}}+{{\gamma }_{1}}z,\sigma  \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:That is, log life &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt; is assumed to have either an &amp;lt;math&amp;gt;SEV\,\!&amp;lt;/math&amp;gt; or a normal distribution with location parameter &amp;lt;math&amp;gt;\mu (z)\,\!&amp;lt;/math&amp;gt;, expressed as a linear function of &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; and constant scale parameter &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Planning Criteria and Problem Formulation==&lt;br /&gt;
Without loss of generality, a stress can be standardized as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\xi =\frac{x-{{x}_{D}}}{{{x}_{H}}-{{x}_{D}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{x}_{D}}\,\!&amp;lt;/math&amp;gt; is the use stress or design stress at which product life is of primary interest.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{x}_{H}}\,\!&amp;lt;/math&amp;gt; is the highest test stress level.&lt;br /&gt;
&lt;br /&gt;
The values of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{x}_{D}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{H}}\,\!&amp;lt;/math&amp;gt; refer to the actual values of stress or to the transformed values in case a transformation (e.g., the reciprocal transformation to obtain the Arrhenius relationship or the log transformation to obtain the power relationship) is used.&lt;br /&gt;
&lt;br /&gt;
Typically, there will be a limit on the highest level of stress for testing because the distribution and life-stress relationship assumptions hold only for a limited range of the stress. The highest test level of stress, &amp;lt;math&amp;gt;{{x}_{H}},\,\!&amp;lt;/math&amp;gt; can be determined based on engineering knowledge, preliminary tests or experience with similar products. Higher stresses will help end the test faster, but might violate your distribution and life-stress relationship assumptions.&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\xi =0\,\!&amp;lt;/math&amp;gt; at the design stress and &amp;lt;math&amp;gt;\xi =1\,\!&amp;lt;/math&amp;gt; at the highest test stress.&lt;br /&gt;
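For illustration, the standardization above is easy to compute directly. The following sketch (hypothetical temperature values; an Arrhenius-type reciprocal transformation of absolute temperature is assumed, so the transformed values are what get standardized) shows how the standardized stress maps the design stress to 0 and the highest test stress to 1:

```python
def standardized_stress(x, x_D, x_H):
    """xi = (x - x_D)/(x_H - x_D): 0 at the design stress, 1 at the highest."""
    return (x - x_D) / (x_H - x_D)

# Arrhenius example: the working "stress" is the reciprocal of absolute
# temperature, so transformed values are standardized (hypothetical temps).
T_use, T_high, T_test = 300.0, 380.0, 350.0    # kelvin
x_D, x_H = 1.0 / T_use, 1.0 / T_high
xi = standardized_stress(1.0 / T_test, x_D, x_H)   # falls between 0 and 1
```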
&lt;br /&gt;
A common purpose of an accelerated life test experiment is to estimate a particular percentile (unreliability value of &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;), &amp;lt;math&amp;gt;{{T}_{p}}\,\!&amp;lt;/math&amp;gt;, in the lower tail of the failure distribution at use stress. Thus a natural criterion is to minimize &amp;lt;math&amp;gt;Var({{\hat{T}}_{p}})\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;{{Y}_{p}}=\ln ({{T}_{p}})\,\!&amp;lt;/math&amp;gt;. &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; measures the precision of the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; quantile estimator; smaller values mean less variation in the value of &amp;lt;math&amp;gt;{{\hat{Y}}_{p}}\,\!&amp;lt;/math&amp;gt; in repeated sampling. Hence a good test plan should yield a relatively small, if not the minimum, &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; value. For the minimization problem, the decision variables are &amp;lt;math&amp;gt;{{\xi }_{i}}\,\!&amp;lt;/math&amp;gt; (the standardized stress levels used in the test) and &amp;lt;math&amp;gt;{{\pi }_{i}}\,\!&amp;lt;/math&amp;gt; (the proportions of the total test units allocated to those levels). The optimization problem can be formulated as follows.&lt;br /&gt;
&lt;br /&gt;
Minimize: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})=f({{\xi }_{i}},{{\pi }_{i}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Subject to:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0\le {{\pi }_{i}}\le 1,\text{ }i=1,2,...n\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{n}{\mathop{\sum }}}\,{{\pi }_{i}}=1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finding an optimum accelerated test plan therefore requires numerical algorithms that minimize &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Planning tests may involve a compromise between efficiency and extrapolation. More failures yield better estimation efficiency, which favors higher stress levels, but higher stresses require more extrapolation to the use condition. Choosing the best plan therefore requires weighing the trade-offs between efficiency and extrapolation. Test plans with more stress levels are more robust than plans with fewer stress levels because they rely less on the validity of the life-stress relationship assumption. However, test plans with fewer stress levels can be more convenient.&lt;br /&gt;
&lt;br /&gt;
==Test Plans for a Single Stress Type==&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: ALTA_Test_Plan_Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
This section discusses some of the most popular test plans used when only one stress factor is applied in the test. In order to design a test, the following information needs to be determined beforehand:&lt;br /&gt;
&lt;br /&gt;
1. The design stress, &amp;lt;math&amp;gt;{{x}_{D}},\,\!&amp;lt;/math&amp;gt; and the highest test stress, &amp;lt;math&amp;gt;{{x}_{H}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
2. The test duration (or censoring time), &amp;lt;math&amp;gt;\Upsilon \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
3. The probability of failure at &amp;lt;math&amp;gt;{{x}_{D}}\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;(\xi =0)\,\!&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Upsilon \,\!&amp;lt;/math&amp;gt;, denoted as &amp;lt;math&amp;gt;{{P}_{D}},\,\!&amp;lt;/math&amp;gt; and at &amp;lt;math&amp;gt;{{x}_{H}}\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;(\xi =1)\,\!&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Upsilon \,\!&amp;lt;/math&amp;gt;, denoted as &amp;lt;math&amp;gt;{{P}_{H}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Two Level Statistically Optimum Plan===&lt;br /&gt;
The Two Level Statistically Optimum Plan is the most important plan, as almost all other plans are derived from it. For this plan, the highest stress, &amp;lt;math&amp;gt;{{x}_{H}}\,\!&amp;lt;/math&amp;gt;, and the design stress, &amp;lt;math&amp;gt;{{x}_{D}}\,\!&amp;lt;/math&amp;gt;, are pre-determined. The test is conducted at two levels. The high test level is fixed at &amp;lt;math&amp;gt;{{x}_{H}}\,\!&amp;lt;/math&amp;gt;. The low test stress, &amp;lt;math&amp;gt;{{x}_{L}}\,\!&amp;lt;/math&amp;gt;, together with the proportion of the test units allocated to the low level, &amp;lt;math&amp;gt;{{\pi }_{L}}\,\!&amp;lt;/math&amp;gt;, are calculated such that &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; is minimized. Meeker and Hahn [[Appendix_E:_References|[36]]] present more details about this test plan.&lt;br /&gt;
&lt;br /&gt;
===Three Level Best Standard Plan===&lt;br /&gt;
In this plan, three stress levels are used. Let us use &amp;lt;math&amp;gt;{{\xi }_{L}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{\xi }_{M}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\xi }_{H}}\,\!&amp;lt;/math&amp;gt; to denote the three standardized stress levels from lowest to highest with:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\xi }_{M}}=\frac{{{\xi }_{L}}+{{\xi }_{H}}}{2}=\frac{{{\xi }_{L}}+1}{2}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An equal number of units is tested at each level, &amp;lt;math&amp;gt;{{\pi }_{L}}={{\pi }_{M}}={{\pi }_{H}}=1/3\,\!&amp;lt;/math&amp;gt;. Therefore, the test plan is &amp;lt;math&amp;gt;({{\xi }_{L}},{{\xi }_{M}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\xi }_{H}},{{\pi }_{L}},{{\pi }_{M}},{{\pi }_{H}})=({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1,1/3,1/3,1/3)\,\!&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;{{\xi }_{L}}\,\!&amp;lt;/math&amp;gt; being the only decision variable. &amp;lt;math&amp;gt;{{\xi }_{L}}\,\!&amp;lt;/math&amp;gt; is determined such that &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; is minimized. Escobar and Meeker [[Appendix_E:_References|[37]]] present more details about this test plan.&lt;br /&gt;
&lt;br /&gt;
===Three Level Best Compromise Plan===&lt;br /&gt;
In this plan, three stress levels are used &amp;lt;math&amp;gt;({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1).\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{\pi }_{M}}\,\!&amp;lt;/math&amp;gt;, which is a value between 0 and 1, is pre-determined. &amp;lt;math&amp;gt;{{\pi }_{M}}=0.1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\pi }_{M}}=0.2\,\!&amp;lt;/math&amp;gt; are commonly used; values less than or equal to 0.2 can give good results. The test plan is&lt;br /&gt;
&amp;lt;math&amp;gt;({{\xi }_{L}},{{\xi }_{M}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\xi }_{H}},{{\pi }_{L}},{{\pi }_{M}},{{\pi }_{H}})\,\!&amp;lt;/math&amp;gt; = &amp;lt;math&amp;gt;({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1,{{\pi }_{L}},{{\pi }_{M}},1-{{\pi }_{L}}-{{\pi }_{M}})\,\!&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;{{\xi }_{L}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\pi }_{L}}\,\!&amp;lt;/math&amp;gt; being the decision variables determined such that &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; is minimized. Meeker [[Appendix_E:_References|[38]]] presents more details about this test plan.&lt;br /&gt;
&lt;br /&gt;
===Three Level Best Equal Expected Number Failing Plan===&lt;br /&gt;
In this plan, three stress levels are used &amp;lt;math&amp;gt;({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1)\,\!&amp;lt;/math&amp;gt; and there is a constraint that an equal number of failures at each stress level is expected. The constraint can be written as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{\pi }_{L}}{{P}_{L}}={{\pi }_{M}}{{P}_{M}}={{\pi }_{H}}{{P}_{H}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{P}_{L}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{P}_{M}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{P}_{H}}\,\!&amp;lt;/math&amp;gt;  are the failure probability at the low, middle and high test level, respectively. &amp;lt;math&amp;gt;{{P}_{L}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{P}_{M}}\,\!&amp;lt;/math&amp;gt; can be expressed in terms of &amp;lt;math&amp;gt;{{\xi }_{L}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\xi }_{M}}\,\!&amp;lt;/math&amp;gt;. Therefore, all variables can be expressed in terms of &amp;lt;math&amp;gt;{{\xi }_{L}},\,\!&amp;lt;/math&amp;gt; which is chosen such that &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; is minimized. Meeker [[Appendix_E:_References|[38]]] presents more details about this test plan.&lt;br /&gt;
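Since the constraint forces each product of allocation and failure probability to be equal across the three levels, the allocations must be proportional to the reciprocals of the failure probabilities. A small sketch of this arithmetic (the failure probabilities below are hypothetical):

```python
def equal_expected_failing_allocation(P_L, P_M, P_H):
    """Allocations pi_i proportional to 1/P_i, so that
    pi_L*P_L = pi_M*P_M = pi_H*P_H and the pi_i sum to 1."""
    inv = [1.0 / P_L, 1.0 / P_M, 1.0 / P_H]
    total = sum(inv)
    return [w / total for w in inv]

# Hypothetical failure probabilities at the low, middle and high stress levels
pi_L, pi_M, pi_H = equal_expected_failing_allocation(0.10, 0.35, 0.90)
```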
&lt;br /&gt;
===Three Level 4:2:1 Allocation Plan===&lt;br /&gt;
In this plan, three stress levels are used &amp;lt;math&amp;gt;({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1).\,\!&amp;lt;/math&amp;gt; The allocation of units at each level is pre-specified as &amp;lt;math&amp;gt;{{\pi }_{L}} : {{\pi }_{M}} : {{\pi }_{H}}=4 : 2 : 1\,\!&amp;lt;/math&amp;gt;. Therefore &amp;lt;math&amp;gt;{{\pi }_{L}}=4/7,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{\pi }_{M}}=2/7\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\pi }_{H}}=1/7\,\!&amp;lt;/math&amp;gt;. &amp;lt;math&amp;gt;{{\xi }_{L}}\,\!&amp;lt;/math&amp;gt; is the only decision variable, chosen such that &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; is minimized. The optimum &amp;lt;math&amp;gt;{{\xi }_{L}}\,\!&amp;lt;/math&amp;gt; can also be multiplied by a user-defined constant &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; to move the low stress level closer to the use stress than in the optimized plan, in order to improve extrapolation at the use stress. Meeker and Hahn [[Appendix_E:_References|[40]]] present more details about this test plan.&lt;br /&gt;
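The allocation arithmetic for this plan is straightforward. The following sketch (hypothetical sample size, low-level value and helper name, not from the source) computes the three standardized stress levels and the 4:2:1 unit counts:

```python
def allocation_4_2_1(n_total, xi_L, k=1.0):
    """Standardized stress levels and 4:2:1 unit allocation (illustrative helper)."""
    xi_low = k * xi_L                           # optional constant k shifts the low level
    levels = (xi_low, (xi_low + 1.0) / 2.0, 1.0)
    units = tuple(round(n_total * w / 7) for w in (4, 2, 1))
    return levels, units

# Hypothetical test: 70 units, optimum low level at xi_L = 0.3
levels, units = allocation_4_2_1(70, 0.3)
```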
&lt;br /&gt;
===Example of a Single Stress Test Plan===&lt;br /&gt;
{{:ALTA_Test_Plan_Example}}&lt;br /&gt;
&lt;br /&gt;
==Test Plans for Two Stress Types==&lt;br /&gt;
This section presents a discussion of some of the most popular test plans used when two stress factors are applied in the test and interactions are assumed not to exist between the factors. The location parameter &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; can be expressed as a function of the stresses &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt; or as a function of their normalized stress levels as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mu ={{\gamma }_{0}}+{{\gamma }_{1}}{{\xi }_{1}}+{{\gamma }_{2}}{{\xi }_{2}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In order to design a test, the following information needs to be determined beforehand:&lt;br /&gt;
&lt;br /&gt;
1. The stress limits (the design stress, &amp;lt;math&amp;gt;{{x}_{D}},\,\!&amp;lt;/math&amp;gt; and the highest test stress, &amp;lt;math&amp;gt;{{x}_{H}}\,\!&amp;lt;/math&amp;gt;) of each stress type.&lt;br /&gt;
&lt;br /&gt;
2. The test time (or censoring time), &amp;lt;math&amp;gt;\Upsilon \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
3. The probability of failure at &amp;lt;math&amp;gt;\Upsilon \,\!&amp;lt;/math&amp;gt; at three stress combinations. Usually &amp;lt;math&amp;gt;{{P}_{DD}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{P}_{HD}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{P}_{DH}}\,\!&amp;lt;/math&amp;gt; are used (&amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; indicates probability and the subscript &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; represents the design stress, while &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; represents the highest stress level in the test).&lt;br /&gt;
&lt;br /&gt;
For two-stress test planning, two methods are available: the Three Level Optimum Plan and the&lt;br /&gt;
Five Level Best Compromise Plan.&lt;br /&gt;
&lt;br /&gt;
===Three Level Optimum Plan===&lt;br /&gt;
The Three Level Optimum Plan is obtained by first finding a one-stress degenerate Two Level Statistically Optimum Plan and splitting this degenerate plan into an appropriate two-stress plan. In a degenerate test plan, the test is conducted at any two (or more) stress level combinations on a line with slope &amp;lt;math&amp;gt;s\,\!&amp;lt;/math&amp;gt; that passes through the design point &amp;lt;math&amp;gt;{{\xi }_{D}}=\left( {{\xi }_{1D}},{{\xi }_{2D}} \right)\,\!&amp;lt;/math&amp;gt;. Therefore, in the case of a degenerate design, we have:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\mu ={{\gamma }_{0}}+\left( {{\gamma }_{1}}+{{\gamma }_{2}}s \right){{\xi }_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Degenerate plans help reduce the two-stress problem to a one-stress problem. Although these degenerate plans do not allow the estimation of all the model parameters and would be an unlikely choice in practice, they are used as a starting point for developing more reasonable optimum and compromise test plans. After finding the one-stress degenerate Two Level Statistically Optimum Plan using the methodology explained in 13.4.3.1, the plan is split into an appropriate Three Level Optimum Plan.&lt;br /&gt;
&lt;br /&gt;
The next figure illustrates the concept of the Three Level Optimum Plan for a two-stress test. &amp;lt;math&amp;gt;{{\xi }_{D}}\,\!&amp;lt;/math&amp;gt; is the (0,0) point. &amp;lt;math&amp;gt;{{C}_{O}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{C}_{1}}\,\!&amp;lt;/math&amp;gt; form the one-stress degenerate Two Level Statistically Optimum Plan. &amp;lt;math&amp;gt;{{C}_{1}}\,\!&amp;lt;/math&amp;gt;, which corresponds to ( &amp;lt;math&amp;gt;{{\xi }_{1}}=1,{{\xi }_{2}}=1\,\!&amp;lt;/math&amp;gt; ), is always used for this type of test and is the high stress level of the degenerate plan. &amp;lt;math&amp;gt;{{C}_{O}}\,\!&amp;lt;/math&amp;gt; corresponds to the low stress level of the degenerate plan. A line, &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, is drawn passing through &amp;lt;math&amp;gt;{{C}_{O}}\,\!&amp;lt;/math&amp;gt; such that all points along the line have the same probability of failure at the end of the test as the &amp;lt;math&amp;gt;{{C}_{O}}\,\!&amp;lt;/math&amp;gt; plan. &amp;lt;math&amp;gt;{{C}_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{C}_{3}}\,\!&amp;lt;/math&amp;gt; are then determined by obtaining the intersections of &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; with the boundaries of the square.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA13.9.png|center|250px|Three Level Optimum Plan for two stresses.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;{{C}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{C}_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{C}_{3}}\,\!&amp;lt;/math&amp;gt; represent the Three Level Optimum Plan. Readers are encouraged to review Escobar and Meeker [[Appendix_E:_References|[37]]] for more details about this test plan.&lt;br /&gt;
&lt;br /&gt;
===Five Level Best Compromise Plan===&lt;br /&gt;
The Five Level Best Compromise Plan is obtained by first finding a degenerate one-stress Three Level Best Compromise Plan, using the methodology explained in the [[Additional Tools#Three Level Best Compromise Plan|Three Level Best Compromise Plan]] section (with &amp;lt;math&amp;gt;{{\pi }_{M}}=0.2\,\!&amp;lt;/math&amp;gt;), and splitting this degenerate plan into an appropriate two-stress plan.&lt;br /&gt;
&lt;br /&gt;
In the next figure, &amp;lt;math&amp;gt;{{\xi }_{D}}\,\!&amp;lt;/math&amp;gt; is the (0,0) point. &amp;lt;math&amp;gt;{{C}_{O1}},{{C}_{O2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{C}_{1}}\,\!&amp;lt;/math&amp;gt; are the degenerate one-stress Three Level Best Compromise Plan. Points along the &amp;lt;math&amp;gt;{{L}_{1}}\,\!&amp;lt;/math&amp;gt; line have the same probability of failure at the end of the &amp;lt;math&amp;gt;{{C}_{O1}}\,\!&amp;lt;/math&amp;gt; test plan, while points on &amp;lt;math&amp;gt;{{L}_{2}}\,\!&amp;lt;/math&amp;gt; have the same probability of failure at the end of the &amp;lt;math&amp;gt;{{C}_{O2}}\,\!&amp;lt;/math&amp;gt; test plan. &amp;lt;math&amp;gt;{{C}_{2}},{{C}_{3}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{C}_{4}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{C}_{5}}\,\!&amp;lt;/math&amp;gt; are then determined by obtaining the intersections of &amp;lt;math&amp;gt;{{L}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{L}_{2}}\,\!&amp;lt;/math&amp;gt; with the boundaries of the square.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA13.92.png|center|300px|Five level optimal test plan for two stresses.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;{{C}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{C}_{2}},{{C}_{3}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{C}_{4}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{C}_{5}}\,\!&amp;lt;/math&amp;gt; represent the Five Level Best Compromise Plan. Readers are encouraged to review Escobar and Meeker [[Appendix_E:_References|[37]]] for more details about this test plan.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;noprint&amp;quot;&amp;gt;&lt;br /&gt;
{{Examples_Box|ALTA_Examples|&amp;lt;p&amp;gt;More application examples are available! See also:&amp;lt;/p&amp;gt; &lt;br /&gt;
{{Examples Both|http://www.reliasoft.com/alta/examples/rc7/index.htm|Accelerated Life Test Plans|http://www.reliasoft.tv/alta/appexamples/alta_app_ex_7.html|Watch the video...}}&amp;lt;nowiki/&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Accelerated_Life_Test_Plans&amp;diff=64907</id>
		<title>Accelerated Life Test Plans</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Accelerated_Life_Test_Plans&amp;diff=64907"/>
		<updated>2017-02-06T16:45:48Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Planning Criteria and Problem Formulation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Template:ALTABOOK_SUB|Additional Tools|Accelerated Life Test Plans}}&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
Poor accelerated test plans waste time, effort and money and may not even yield the desired information. Before starting an accelerated test (which is sometimes an expensive and difficult endeavor), it is advisable to have a plan that helps in accurately estimating reliability at operating conditions while minimizing test time and costs. A test plan should be used to decide on the appropriate stress levels to use (for each stress type) and the number of test units to allocate to the different stress levels (for each combination of the different stress types&#039; levels). This section presents some common test plans for one-stress and two-stress accelerated tests.&lt;br /&gt;
&lt;br /&gt;
==General Assumptions==&lt;br /&gt;
&lt;br /&gt;
Most accelerated life testing plans use the following model and testing assumptions that correspond to many practical quantitative accelerated life testing problems.&lt;br /&gt;
&lt;br /&gt;
1. The log-time-to-failure for each unit follows a location-scale distribution such that:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{}{\overset{}{\mathop{\Pr }}}\,(Y\le y)=\Phi \left( \frac{y-\mu }{\sigma } \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:where &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt; are the location and scale parameters respectively and &amp;lt;math&amp;gt;\Phi \,\!&amp;lt;/math&amp;gt; ( &amp;lt;math&amp;gt;\cdot \,\!&amp;lt;/math&amp;gt; ) is the standard form of the location-scale distribution.&lt;br /&gt;
&lt;br /&gt;
2. Failure times for all test units, at all stress levels, are statistically independent.&lt;br /&gt;
&lt;br /&gt;
3. The location parameter &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; is a linear function of stress. Specifically, it is assumed that:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mu =\mu ({{z}_{1}})={{\gamma }_{0}}+{{\gamma }_{1}}x&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. The scale parameter, &amp;lt;math&amp;gt;\sigma ,\,\!&amp;lt;/math&amp;gt; does not depend on the stress levels. All units are tested until a pre-specified test time.&lt;br /&gt;
&lt;br /&gt;
5. Two of the most common models used in quantitative accelerated life testing are the linear Weibull and lognormal models. The Weibull model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y\sim SEV\left[ \mu (z)={{\gamma }_{0}}+{{\gamma }_{1}}x,\sigma  \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:where &amp;lt;math&amp;gt;SEV\,\!&amp;lt;/math&amp;gt; denotes the smallest extreme value distribution. The lognormal model is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Y\sim Normal\left[ \mu (z)={{\gamma }_{0}}+{{\gamma }_{1}}z,\sigma  \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:That is, log life &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt; is assumed to have either an &amp;lt;math&amp;gt;SEV\,\!&amp;lt;/math&amp;gt; or a normal distribution with location parameter &amp;lt;math&amp;gt;\mu (z)\,\!&amp;lt;/math&amp;gt;, expressed as a linear function of &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; and constant scale parameter &amp;lt;math&amp;gt;\sigma \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Planning Criteria and Problem Formulation==&lt;br /&gt;
Without loss of generality, a stress can be standardized as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\xi =\frac{x-{{x}_{D}}}{{{x}_{H}}-{{x}_{D}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{x}_{D}}\,\!&amp;lt;/math&amp;gt; is the use stress or design stress at which product life is of primary interest.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{x}_{H}}\,\!&amp;lt;/math&amp;gt; is the highest test stress level.&lt;br /&gt;
&lt;br /&gt;
The values of &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{x}_{D}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{H}}\,\!&amp;lt;/math&amp;gt; refer to the actual values of stress or to the transformed values in case a transformation (e.g., the reciprocal transformation to obtain the Arrhenius relationship or the log transformation to obtain the power relationship) is used.&lt;br /&gt;
&lt;br /&gt;
Typically, there will be a limit on the highest level of stress for testing because the distribution and life-stress relationship assumptions hold only for a limited range of the stress. The highest test level of stress, &amp;lt;math&amp;gt;{{x}_{H}},\,\!&amp;lt;/math&amp;gt; can be determined based on engineering knowledge, preliminary tests or experience with similar products. Higher stresses will help end the test faster, but might violate your distribution and life-stress relationship assumptions.&lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;math&amp;gt;\xi =0\,\!&amp;lt;/math&amp;gt; at the design stress and &amp;lt;math&amp;gt;\xi =1\,\!&amp;lt;/math&amp;gt; at the highest test stress.&lt;br /&gt;
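&lt;br /&gt;
As a quick numerical illustration, the standardization can be computed directly on the (possibly transformed) stress scale. The sketch below is hypothetical: the temperature values, the choice of the Arrhenius reciprocal transformation and the function names are illustration assumptions, not part of the test plans discussed here.&lt;br /&gt;
&lt;br /&gt;
```python
# Standardize a stress level, xi = (x - x_D) / (x_H - x_D), optionally on a
# transformed scale (reciprocal for Arrhenius, log for the power relationship).
# All numeric values below are hypothetical illustration values.

def standardize(x, x_d, x_h, transform=lambda v: v):
    t = transform
    return (t(x) - t(x_d)) / (t(x_h) - t(x_d))

# Arrhenius case: work with reciprocal absolute temperature (1/K).
recip = lambda temp_k: 1.0 / temp_k

x_use, x_high = 323.0, 423.0  # 50 C use stress, 150 C highest test stress
print(standardize(x_use, x_use, x_high, recip))   # design stress maps to 0
print(standardize(x_high, x_use, x_high, recip))  # highest stress maps to 1
print(round(standardize(373.0, x_use, x_high, recip), 3))
```
&lt;br /&gt;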
&lt;br /&gt;
A common purpose of an accelerated life test experiment is to estimate a particular percentile (unreliability value of &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;), &amp;lt;math&amp;gt;{{T}_{p}}\,\!&amp;lt;/math&amp;gt;, in the lower tail of the failure distribution at use stress. Thus a natural criterion is to minimize &amp;lt;math&amp;gt;Var({{\hat{T}}_{p}})\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;{{Y}_{p}}=\ln ({{T}_{p}})\,\!&amp;lt;/math&amp;gt;. &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; measures the precision of the &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; quantile estimator; smaller values mean less variation in the value of &amp;lt;math&amp;gt;{{\hat{Y}}_{p}}\,\!&amp;lt;/math&amp;gt; in repeated samplings. Hence a good test plan should yield a relatively small, if not the minimum, &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; value. For the minimization problem, the decision variables are &amp;lt;math&amp;gt;{{\xi }_{i}}\,\!&amp;lt;/math&amp;gt; (the standardized stress level used in the test) and &amp;lt;math&amp;gt;{{\pi }_{i}}\,\!&amp;lt;/math&amp;gt; (the percentage of the total test units allocated at that level). The optimization problem can be formulated as follows.&lt;br /&gt;
&lt;br /&gt;
Minimize: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})=f({{\xi }_{i}},{{\pi }_{i}})\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Subject to:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;0\le {{\pi }_{i}}\le 1,\text{ }i=1,2,...n\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\underset{i=1}{\overset{n}{\mathop{\sum }}}\,{{\pi }_{i}}=1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An optimum accelerated test plan requires algorithms to minimize &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Planning tests may involve a compromise between efficiency and extrapolation. More failures correspond to better estimation efficiency, which requires higher stress levels but also entails more extrapolation to the use condition. Choosing the best plan therefore requires weighing this trade-off. Test plans with more stress levels are more robust than plans with fewer stress levels because they rely less on the validity of the life-stress relationship assumption. However, test plans with fewer stress levels can be more convenient.&lt;br /&gt;
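&lt;br /&gt;
To make the minimization formulation concrete, the sketch below searches a grid of the decision variables for a two-level plan. The function &#039;&#039;var_yp&#039;&#039; is only a placeholder for the true large-sample expression of &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt;, which depends on the assumed model, the censoring time and the planning failure probabilities; everything in the example is an assumption made for illustration.&lt;br /&gt;
&lt;br /&gt;
```python
# Brute-force sketch of the planning optimization: choose the standardized
# low stress xi_L and its allocation pi_L (with pi_H = 1 - pi_L) to minimize
# a variance criterion.  var_yp() is a PLACEHOLDER toy function, NOT the
# actual asymptotic variance of the p-quantile estimator.

def var_yp(xi_l, pi_l):
    # Toy trade-off between allocation balance and stress placement.
    return 1.0 / (pi_l * xi_l) + 1.0 / (1.0 - pi_l)

grid = [i / 100 for i in range(1, 100)]  # 0.01 .. 0.99, excludes 0 and 1
best = min(((xi, pi) for xi in grid for pi in grid), key=lambda v: var_yp(*v))
print(best)
```
&lt;br /&gt;
In ALTA and in the referenced papers, the same search structure is applied to the actual expression for &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; rather than to a toy function.&lt;br /&gt;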
&lt;br /&gt;
==Test Plans for a Single Stress Type==&amp;lt;!-- THIS SECTION HEADER IS LINKED TO: ALTA_Test_Plan_Example. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
This section presents a discussion of some of the most popular test plans used when only one stress factor is applied in the test. In order to design a test, the following information needs to be determined beforehand:&lt;br /&gt;
&lt;br /&gt;
1. The design stress, &amp;lt;math&amp;gt;{{x}_{D}},\,\!&amp;lt;/math&amp;gt; and the highest test stress, &amp;lt;math&amp;gt;{{x}_{H}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
2. The test duration (or censoring time), &amp;lt;math&amp;gt;\Upsilon \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
3. The probability of failure at &amp;lt;math&amp;gt;{{x}_{D}}\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;(\xi =0)\,\!&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Upsilon \,\!&amp;lt;/math&amp;gt;, denoted as &amp;lt;math&amp;gt;{{P}_{D}},\,\!&amp;lt;/math&amp;gt; and at &amp;lt;math&amp;gt;{{x}_{H}}\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;(\xi =1)\,\!&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Upsilon \,\!&amp;lt;/math&amp;gt;, denoted as &amp;lt;math&amp;gt;{{P}_{H}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Two Level Statistically Optimum Plan===&lt;br /&gt;
The Two Level Statistically Optimum Plan is the most important plan, as almost all other plans are derived from it. For this plan, the highest stress, &amp;lt;math&amp;gt;{{x}_{H}}\,\!&amp;lt;/math&amp;gt;, and the design stress, &amp;lt;math&amp;gt;{{x}_{D}}\,\!&amp;lt;/math&amp;gt;, are pre-determined. The test is conducted at two levels. The high test level is fixed at &amp;lt;math&amp;gt;{{x}_{H}}\,\!&amp;lt;/math&amp;gt;. The low test stress, &amp;lt;math&amp;gt;{{x}_{L}}\,\!&amp;lt;/math&amp;gt;, together with the proportion of the test units allocated to the low level, &amp;lt;math&amp;gt;{{\pi }_{L}}\,\!&amp;lt;/math&amp;gt;, are calculated such that &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; is minimized. Meeker and Hahn [[Appendix_E:_References|[36]]] present more details about this test plan.&lt;br /&gt;
&lt;br /&gt;
===Three Level Best Standard Plan===&lt;br /&gt;
In this plan, three stress levels are used. Let us use &amp;lt;math&amp;gt;{{\xi }_{L}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{\xi }_{M}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\xi }_{H}}\,\!&amp;lt;/math&amp;gt; to denote the three standardized stress levels from lowest to highest with:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\xi }_{M}}=\frac{{{\xi }_{L}}+{{\xi }_{H}}}{2}=\frac{{{\xi }_{L}}+1}{2}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An equal number of units is tested at each level, &amp;lt;math&amp;gt;{{\pi }_{L}}={{\pi }_{M}}={{\pi }_{H}}=1/3\,\!&amp;lt;/math&amp;gt;. Therefore, the test plan is &amp;lt;math&amp;gt;({{\xi }_{L}},{{\xi }_{M}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\xi }_{H}},{{\pi }_{L}},{{\pi }_{M}},{{\pi }_{H}})=({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1,1/3,1/3,1/3)\,\!&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;{{\xi }_{L}}\,\!&amp;lt;/math&amp;gt; being the only decision variable. &amp;lt;math&amp;gt;{{\xi }_{L}}\,\!&amp;lt;/math&amp;gt; is determined such that &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; is minimized. Escobar and Meeker [[Appendix_E:_References|[37]]] present more details about this test plan.&lt;br /&gt;
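&lt;br /&gt;
As a minimal sketch, the plan tuple can be constructed from a candidate low stress level. The value of &amp;lt;math&amp;gt;{{\xi }_{L}}\,\!&amp;lt;/math&amp;gt; below is assumed for illustration, not optimized.&lt;br /&gt;
&lt;br /&gt;
```python
# Build the Three Level Best Standard Plan tuple
# (xi_L, xi_M, xi_H, pi_L, pi_M, pi_H) from a candidate low stress xi_L.
# In practice xi_L is chosen by minimizing Var(Y_p-hat); 0.4 is assumed here.

def best_standard_plan(xi_l):
    xi_m = (xi_l + 1.0) / 2.0  # middle level midway between xi_L and xi_H = 1
    return (xi_l, xi_m, 1.0, 1 / 3, 1 / 3, 1 / 3)

print(best_standard_plan(0.4))
```
&lt;br /&gt;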
&lt;br /&gt;
===Three Level Best Compromise Plan===&lt;br /&gt;
In this plan, three stress levels are used &amp;lt;math&amp;gt;({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1).\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{\pi }_{M}}\,\!&amp;lt;/math&amp;gt;, which is a value between 0 and 1, is pre-determined. &amp;lt;math&amp;gt;{{\pi }_{M}}=0.1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\pi }_{M}}=0.2\,\!&amp;lt;/math&amp;gt; are commonly used; values less than or equal to 0.2 can give good results. The test plan is&lt;br /&gt;
&amp;lt;math&amp;gt;({{\xi }_{L}},{{\xi }_{M}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\xi }_{H}},{{\pi }_{L}},{{\pi }_{M}},{{\pi }_{H}})\,\!&amp;lt;/math&amp;gt; = &amp;lt;math&amp;gt;({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1,{{\pi }_{L}},{{\pi }_{M}},1-{{\pi }_{L}}-{{\pi }_{M}})\,\!&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;{{\xi }_{L}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\pi }_{L}}\,\!&amp;lt;/math&amp;gt; being the decision variables determined such that &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; is minimized. Meeker [[Appendix_E:_References|[38]]] presents more details about this test plan.&lt;br /&gt;
&lt;br /&gt;
===Three Level Best Equal Expected Number Failing Plan===&lt;br /&gt;
In this plan, three stress levels are used &amp;lt;math&amp;gt;({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1)\,\!&amp;lt;/math&amp;gt; and there is a constraint that an equal number of failures at each stress level is expected. The constraint can be written as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{\pi }_{L}}{{P}_{L}}={{\pi }_{M}}{{P}_{M}}={{\pi }_{H}}{{P}_{H}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{P}_{L}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{P}_{M}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{P}_{H}}\,\!&amp;lt;/math&amp;gt;  are the failure probability at the low, middle and high test level, respectively. &amp;lt;math&amp;gt;{{P}_{L}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{P}_{M}}\,\!&amp;lt;/math&amp;gt; can be expressed in terms of &amp;lt;math&amp;gt;{{\xi }_{L}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\xi }_{M}}\,\!&amp;lt;/math&amp;gt;. Therefore, all variables can be expressed in terms of &amp;lt;math&amp;gt;{{\xi }_{L}},\,\!&amp;lt;/math&amp;gt; which is chosen such that &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; is minimized. Meeker [[Appendix_E:_References|[38]]] presents more details about this test plan.&lt;br /&gt;
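&lt;br /&gt;
Solving the equal-expected-failures constraint together with the requirement that the allocations sum to one gives allocations proportional to the reciprocals of the failure probabilities. The sketch below uses hypothetical planning probabilities to illustrate this.&lt;br /&gt;
&lt;br /&gt;
```python
# Allocations that equalize the expected number of failures per stress level:
# pi_i * P_i constant, with pi_L + pi_M + pi_H = 1, implies pi_i ~ 1 / P_i.
# The failure probabilities P_L, P_M, P_H below are hypothetical values.

def equal_expected_failures(p_l, p_m, p_h):
    weights = [1.0 / p_l, 1.0 / p_m, 1.0 / p_h]
    total = sum(weights)
    return [w / total for w in weights]

pi_l, pi_m, pi_h = equal_expected_failures(0.05, 0.20, 0.80)
print(round(pi_l, 4), round(pi_m, 4), round(pi_h, 4))
```
&lt;br /&gt;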
&lt;br /&gt;
===Three Level 4:2:1 Allocation Plan===&lt;br /&gt;
In this plan, three stress levels are used &amp;lt;math&amp;gt;({{\xi }_{L}},\tfrac{{{\xi }_{L}}+1}{2}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1).\,\!&amp;lt;/math&amp;gt; The allocation of units at each level is pre-specified as &amp;lt;math&amp;gt;{{\pi }_{L}} : {{\pi }_{M}} : {{\pi }_{H}}=4 : 2 : 1\,\!&amp;lt;/math&amp;gt;. Therefore &amp;lt;math&amp;gt;{{\pi }_{L}}=4/7,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{\pi }_{M}}=2/7\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\pi }_{H}}=1/7\,\!&amp;lt;/math&amp;gt;. &amp;lt;math&amp;gt;{{\xi }_{L}}\,\!&amp;lt;/math&amp;gt; is the only decision variable and is chosen such that &amp;lt;math&amp;gt;Var({{\hat{Y}}_{p}})\,\!&amp;lt;/math&amp;gt; is minimized. The optimum &amp;lt;math&amp;gt;{{\xi }_{L}}\,\!&amp;lt;/math&amp;gt; can also be multiplied by a user-defined constant &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; to bring the low stress level closer to the use stress than in the optimized plan, in order to allow better extrapolation to the use stress. Meeker and Hahn [[Appendix_E:_References|[40]]] present more details about this test plan.&lt;br /&gt;
&lt;br /&gt;
===Example of a Single Stress Test Plan===&lt;br /&gt;
{{:ALTA_Test_Plan_Example}}&lt;br /&gt;
&lt;br /&gt;
==Test Plans for Two Stress Types==&lt;br /&gt;
This section presents a discussion of some of the most popular test plans used when two stress factors are applied in the test and interactions are assumed not to exist between the factors. The location parameter &amp;lt;math&amp;gt;\mu \,\!&amp;lt;/math&amp;gt; can be expressed as a function of the stresses &amp;lt;math&amp;gt;{{x}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{x}_{2}}\,\!&amp;lt;/math&amp;gt; or as a function of their normalized stress levels as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mu ={{\gamma }_{0}}+{{\gamma }_{1}}{{\xi }_{1}}+{{\gamma }_{2}}{{\xi }_{2}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In order to design a test, the following information needs to be determined beforehand:&lt;br /&gt;
&lt;br /&gt;
1. The stress limits (the design stress, &amp;lt;math&amp;gt;{{x}_{D}},\,\!&amp;lt;/math&amp;gt; and the highest test stress, &amp;lt;math&amp;gt;{{x}_{H}}\,\!&amp;lt;/math&amp;gt; ) of each stress type.&lt;br /&gt;
&lt;br /&gt;
2. The test time (or censoring time), &amp;lt;math&amp;gt;\Upsilon \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
3. The probability of failure at &amp;lt;math&amp;gt;\Upsilon \,\!&amp;lt;/math&amp;gt; at three stress combinations. Usually &amp;lt;math&amp;gt;{{P}_{DD}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{P}_{HD}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{P}_{DH}}\,\!&amp;lt;/math&amp;gt; are used ( &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; indicates probability and the subscript &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; represents the design stress, while &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; represents the highest stress level in the test).&lt;br /&gt;
&lt;br /&gt;
For two-stress test planning, two methods are available: the Three Level Optimum Plan and the&lt;br /&gt;
Five Level Best Compromise Plan.&lt;br /&gt;
&lt;br /&gt;
===Three Level Optimum Plan===&lt;br /&gt;
The Three Level Optimum Plan is obtained by first finding a one-stress degenerate Two Level Statistically Optimum Plan and splitting this degenerate plan into an appropriate two-stress plan. In a degenerate test plan, the test is conducted at any two (or more) stress level combinations on a line with slope &amp;lt;math&amp;gt;s\,\!&amp;lt;/math&amp;gt; that passes through the design &amp;lt;math&amp;gt;{{\xi }_{D}}=\left( {{\xi }_{1D}},{{\xi }_{2D}} \right)\,\!&amp;lt;/math&amp;gt;. Therefore, in the case of a degenerate design, we have:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\mu ={{\gamma }_{0}}+\left( {{\gamma }_{1}}+{{\gamma }_{2}}s \right){{\xi }_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Degenerate plans help reduce the two-stress problem to a one-stress problem. Although these degenerate plans do not allow the estimation of all the model parameters and would be an unlikely choice in practice, they are used as a starting point for developing more reasonable optimum and compromise test plans. After finding the one-stress degenerate Two Level Statistically Optimum Plan using the methodology explained in the Two Level Statistically Optimum Plan section above, the plan is split into an appropriate Three Level Optimum Plan.&lt;br /&gt;
&lt;br /&gt;
The next figure illustrates the concept of the Three Level Optimum Plan for a two-stress test. &amp;lt;math&amp;gt;{{\xi }_{D}}\,\!&amp;lt;/math&amp;gt; is the (0,0) point. &amp;lt;math&amp;gt;{{C}_{O}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{C}_{1}}\,\!&amp;lt;/math&amp;gt; are the one-stress degenerate Two Level Statistically Optimum Plan. &amp;lt;math&amp;gt;{{C}_{1}}\,\!&amp;lt;/math&amp;gt;, which corresponds to ( &amp;lt;math&amp;gt;{{\xi }_{1}}=1,{{\xi }_{2}}=1\,\!&amp;lt;/math&amp;gt; ), is always used for this type of test and is the high stress level of the degenerate plan. &amp;lt;math&amp;gt;{{C}_{O}}\,\!&amp;lt;/math&amp;gt; corresponds to the low stress level of the degenerate plan. A line, &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt;, is drawn passing through &amp;lt;math&amp;gt;{{C}_{O}}\,\!&amp;lt;/math&amp;gt; such that all the points along the line have the same probability of failure by the end of the test as the low stress level of the degenerate plan. &amp;lt;math&amp;gt;{{C}_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{C}_{3}}\,\!&amp;lt;/math&amp;gt; are then determined by obtaining the intersections of &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; with the boundaries of the square.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA13.9.png|center|250px|Three Level Optimum Plan for two stresses.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;{{C}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{C}_{2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{C}_{3}}\,\!&amp;lt;/math&amp;gt; represent the Three Level Optimum Plan. Readers are encouraged to review Escobar and Meeker [[Appendix_E:_References|[37]]] for more details about this test plan.&lt;br /&gt;
&lt;br /&gt;
===Five Level Best Compromise Plan===&lt;br /&gt;
The Five Level Best Compromise Plan is obtained by first finding a degenerate one-stress Three Level Best Compromise Plan, using the methodology explained in the [[Additional Tools#Three Level Best Compromise Plan|Three Level Best Compromise Plan]] (with &amp;lt;math&amp;gt;{{\pi }_{M}}=0.2\,\!&amp;lt;/math&amp;gt;), and splitting this degenerate plan into an appropriate two-stress plan.&lt;br /&gt;
&lt;br /&gt;
In the next figure, &amp;lt;math&amp;gt;{{\xi }_{D}}\,\!&amp;lt;/math&amp;gt; is the (0,0) point. &amp;lt;math&amp;gt;{{C}_{O1}},{{C}_{O2}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{C}_{1}}\,\!&amp;lt;/math&amp;gt; are the degenerate one-stress Three Level Best Compromise Plan. Points along the &amp;lt;math&amp;gt;{{L}_{1}}\,\!&amp;lt;/math&amp;gt; line have the same probability of failure at the end of the &amp;lt;math&amp;gt;{{C}_{O1}}\,\!&amp;lt;/math&amp;gt; test plan, while points on &amp;lt;math&amp;gt;{{L}_{2}}\,\!&amp;lt;/math&amp;gt; have the same probability of failure at the end of the &amp;lt;math&amp;gt;{{C}_{O2}}\,\!&amp;lt;/math&amp;gt; test plan. &amp;lt;math&amp;gt;{{C}_{2}},{{C}_{3}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{C}_{4}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{C}_{5}}\,\!&amp;lt;/math&amp;gt; are then determined by obtaining the intersections of &amp;lt;math&amp;gt;{{L}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{L}_{2}}\,\!&amp;lt;/math&amp;gt; with the boundaries of the square.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA13.92.png|center|300px|Five level optimal test plan for two stresses.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;{{C}_{1}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{C}_{2}},{{C}_{3}}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{C}_{4}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{C}_{5}}\,\!&amp;lt;/math&amp;gt; represent the Five Level Best Compromise Plan. Readers are encouraged to review Escobar and Meeker [[Appendix_E:_References|[37]]] for more details about this test plan.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;noprint&amp;quot;&amp;gt;&lt;br /&gt;
{{Examples_Box|ALTA_Examples|&amp;lt;p&amp;gt;More application examples are available! See also:&amp;lt;/p&amp;gt; &lt;br /&gt;
{{Examples Both|http://www.reliasoft.com/alta/examples/rc7/index.htm|Accelerated Life Test Plans|http://www.reliasoft.tv/alta/appexamples/alta_app_ex_7.html|Watch the video...}}&amp;lt;nowiki/&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Time-Varying_Stress_Models&amp;diff=64903</id>
		<title>Time-Varying Stress Models</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Time-Varying_Stress_Models&amp;diff=64903"/>
		<updated>2017-02-01T23:58:47Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Mathematical Formulation for a Step-Stress Model */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK|10}}&lt;br /&gt;
Traditionally, accelerated tests that use a time-varying stress application have been used to induce failures quickly. This is highly desirable given the pressure on industry today to shorten new product introduction time. The most basic type of time-varying stress test is a step-stress test. In step-stress accelerated testing, the test units are subjected to successively higher stress levels in predetermined stages, and thus follow a time-varying stress profile. The units usually start at a lower stress level and, at a predetermined time or failure number, the stress is increased and the test continues. The test is terminated when all units have failed, when a certain number of failures are observed or when a certain time has elapsed. Step-stress testing can substantially shorten the reliability test&#039;s duration. In addition to step-stress testing, there are many other types of time-varying stress profiles that can be used in accelerated life testing. However, it should be noted that there is more uncertainty in the results from such time-varying stress tests than from traditional constant stress tests of the same length and sample size.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When dealing with data from accelerated tests with time-varying stresses, the life-stress relationship must take into account the cumulative effect of the applied stresses. Such a model is commonly referred to as a &#039;&#039;cumulative damage&#039;&#039; or &#039;&#039;cumulative exposure&#039;&#039; model. Nelson [[Appendix_E:_References|[28]]] defines and presents the derivation and assumptions of such a model. ALTA includes the cumulative damage model for the analysis of time-varying stress data. This section presents an introduction to the model formulation and its application.&lt;br /&gt;
&lt;br /&gt;
=Model Formulation=&lt;br /&gt;
To formulate the cumulative exposure/damage model, consider a simple step-stress experiment where an electronic component was subjected to a voltage stress, starting at 2V (use stress level) and increased to 7V in stepwise increments, as shown in the next figure. The following steps, in hours, were used to apply stress to the products under test: 0 to 250, 2V; 250 to 350, 3V; 350 to 370, 4V; 370 to 380, 5V; 380 to 390, 6V; and 390 to 400, 7V.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA12.1.gif|center|550px|Step profile for a simple voltage stress test.]]&lt;br /&gt;
&lt;br /&gt;
In this example, 11 units were available for the test. All units were tested using this same stress profile. Units that failed were removed from the test and their total times on test were recorded. The following times-to-failure were observed in the test, in hours: 280, 310, 330, 352, 360, 366, 371, 374, 378, 381 and 385. The first failure in this test occurred at 280 hours when the stress was 3V. During the test, this unit experienced a period of time at 2V before failing at 3V. If the stress were 2V, one would expect the unit to fail at a time later than 280 hours, while if the unit were always at 3V, one would expect that failure time to be sooner than 280 hrs. The problem faced by the analyst in this case is to determine some equivalency between the stresses. In other words, what is the equivalent of 280 hours (with 250 hours spent at 2V and 30 hours spent at 3V) at a constant 2V stress or at a constant 3V stress?&lt;br /&gt;
&lt;br /&gt;
==Mathematical Formulation for a Step-Stress Model==&lt;br /&gt;
To mathematically formulate the model, consider the step-stress test shown in the next figure, with stresses S1, S2 and S3. Furthermore, assume that the underlying life distribution is the Weibull distribution, and also assume an inverse power law relationship between the Weibull scale parameter and the applied stress.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA12.2.png|center|300px|Step-stress profile and the corresponding life distributions.]]&lt;br /&gt;
&lt;br /&gt;
From the inverse power law relationship, the scale parameter, &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt;, of the Weibull distribution can be expressed as an inverse power function of the stress, &amp;lt;math&amp;gt;V \,\!&amp;lt;/math&amp;gt; or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\eta (V)=\frac{1}{K{{V}^{n}}} \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
The fraction of the units failing by time &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; under a constant stress &amp;lt;math&amp;gt;V = {{S}_{1}}\,\!&amp;lt;/math&amp;gt; is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
F(t;V)=1-R(t;V)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t;V)={{e}^{-{{\left[ \tfrac{t}{\eta (V)} \right]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;cdf&#039;&#039; for each constant stress level is: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{F}_{1}}(t;{{S}_{1}})= &amp;amp; 1-{{e}^{-{{(KS_{1}^{n}t)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{F}_{2}}(t;{{S}_{2}})= &amp;amp; 1-{{e}^{-{{(KS_{2}^{n}t)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{F}_{3}}(t;{{S}_{3}})= &amp;amp; 1-{{e}^{-{{(KS_{3}^{n}t)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above equations would suffice if the units did not experience different stresses during the test, as they did in this case. To analyze the data from this step-stress test, a cumulative exposure model is needed. Such a model will relate the life distribution, in this case the Weibull distribution, of the units at one stress level to the distribution at the next stress level. In formulating this model, it is assumed that the remaining life of the test units depends only on the cumulative exposure the units have seen and that the units do not remember how such exposure was accumulated. Moreover, since the units are held at a constant stress at each step, the surviving units will fail according to the distribution at the current step, but with a starting age corresponding to the total accumulated time up to the beginning of the current step. This model can be formulated as follows:&lt;br /&gt;
&lt;br /&gt;
*Units failing during the first step have not experienced any other stresses and will fail according to the &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &#039;&#039;cdf&#039;&#039;. Units that made it to the second step will fail according to the &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &#039;&#039;cdf&#039;&#039;, but will have accumulated some equivalent age, &amp;lt;math&amp;gt;{{\varepsilon }_{1}},\,\!&amp;lt;/math&amp;gt; at this stress level (given the fact that they have spent &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; hours at &amp;lt;math&amp;gt;{{S}_{1}})\,\!&amp;lt;/math&amp;gt; or: &lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}(t;{{S}_{2}})=1-{{e}^{-{{[KS_{2}^{n}((t-{{t}_{1}})+{{\varepsilon }_{1}})]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
In other words, the probability that the units will fail at a time, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, while at &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; and between &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{t}_{2}}\,\!&amp;lt;/math&amp;gt; is equivalent to the probability that the units would fail after accumulating &amp;lt;math&amp;gt;(t-{{t}_{1}})\,\!&amp;lt;/math&amp;gt; plus some equivalent time, &amp;lt;math&amp;gt;{{\varepsilon }_{1}},\,\!&amp;lt;/math&amp;gt; to account for the exposure the units have seen at &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
*The equivalent time, &amp;lt;math&amp;gt;{{\varepsilon }_{1}},\,\!&amp;lt;/math&amp;gt; will be the time by which the probability of failure at &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; is equal to the probability of failure at &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; after an exposure of &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; or: &lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
	  {{F}_{1}}({{t}_{1}};{{S}_{1}})=\ &amp;amp; {{F}_{2}}({{\varepsilon }_{1}};{{S}_{2}}) \\ &lt;br /&gt;
	 1-{{e}^{-{{(KS_{1}^{n}{{t}_{1}})}^{\beta }}}}=\ &amp;amp; 1-{{e}^{-{{(KS_{2}^{n}{{\varepsilon }_{1}})}^{\beta }}}} \\ &lt;br /&gt;
	 S_{1}^{n}{{t}_{1}}=\ &amp;amp; S_{2}^{n}{{\varepsilon }_{1}} \\ &lt;br /&gt;
	 {{\varepsilon }_{1}}=\ &amp;amp; {{t}_{1}}{{\left( \frac{{{S}_{1}}}{{{S}_{2}}} \right)}^{n}}  &lt;br /&gt;
	\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
*One would repeat this for step 3 taking into account the accumulated exposure during steps 1 and 2, or in more general terms and for the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; step: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{i}}(t;{{S}_{i}})=1-{{e}^{-{{[KS_{i}^{n}((t-{{t}_{i-1}})+{{\varepsilon }_{i-1}})]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\varepsilon }_{i-1}}=({{t}_{i-1}}-{{t}_{i-2}}+{{\varepsilon }_{i-2}}){{\left( \frac{{{S}_{i-1}}}{{{S}_{i}}} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
*Once the &#039;&#039;cdf&#039;&#039; for each step has been obtained, the &#039;&#039;pdf&#039;&#039; can then be determined using: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{f}_{i}}(t,{{S}_{i}})=\frac{d}{dt}\left[ {{F}_{i}}(t,{{S}_{i}}) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
Once the model has been formulated, model parameters (i.e., &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;) can be computed utilizing maximum likelihood estimation methods.&lt;br /&gt;
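&lt;br /&gt;
The recursion above is straightforward to implement. The sketch below codes the equivalent-age bookkeeping and the step-stress &#039;&#039;cdf&#039;&#039; for the Weibull/inverse power law case, using the 2V-to-7V profile from the earlier example; the parameter values &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; are hypothetical, since in practice they come from maximum likelihood estimation.&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Equivalent age entering step i: eps_i = (d_{i-1} + eps_{i-1}) * (S_{i-1}/S_i)^n,
# where d is the duration spent at a step.  steps = [(duration, stress), ...].
def equivalent_age(steps, n):
    eps = [0.0]
    for (d, s), (_, s_next) in zip(steps, steps[1:]):
        eps.append((d + eps[-1]) * (s / s_next) ** n)
    return eps

# Step-stress cdf under the cumulative exposure model:
# F_i(t) = 1 - exp(-[K * S_i^n * ((t - t_{i-1}) + eps_{i-1})]^beta).
def step_stress_cdf(t, steps, K, n, beta):
    eps = equivalent_age(steps, n)
    start = 0.0
    for i, (d, s) in enumerate(steps):
        if t <= start + d or i == len(steps) - 1:
            return 1.0 - math.exp(-((K * s ** n * ((t - start) + eps[i])) ** beta))
        start += d

# 2V-to-7V profile from the example: (duration in hours, voltage).
profile = [(250, 2.0), (100, 3.0), (20, 4.0), (10, 5.0), (10, 6.0), (10, 7.0)]
K, n, beta = 1e-4, 2.0, 1.5  # hypothetical model parameters
print(round(step_stress_cdf(280.0, profile, K, n, beta), 4))
```
&lt;br /&gt;
Note that when &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; exceeds the last step boundary, the last step&#039;s distribution continues to apply.&lt;br /&gt;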
&lt;br /&gt;
The previous example can be extended to any time-varying stress. ALTA allows you to define any stress profile. For example, the stress can be a ramp stress, a monotonically increasing stress, sinusoidal, etc. The following sections present a generalized formulation of the cumulative damage model, where the stress can be any function of time.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;noprint&amp;quot;&amp;gt;&lt;br /&gt;
{{Example:CD-GLL_Weibull}}&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage Power Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where stress can be any function of time and the life-stress relationship is based on the power relationship. Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship,  the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{\left( \frac{a}{x(t)} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In ALTA, the above relationship is actually presented in a format consistent with the general log-linear (GLL) relationship for the power law relationship:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}\ln \left( x(t) \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, instead of displaying &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; as the calculated parameters, the following reparameterization is used:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 {{\alpha }_{0}}=\ &amp;amp; \ln ({{a}^{n}}) \\ &lt;br /&gt;
 {{\alpha }_{1}}=\ &amp;amp; -n  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
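As a quick sanity check on this reparameterization, the following sketch (using hypothetical values for &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, not fitted values) confirms that the power form and the GLL form give the same life value, and that the original parameters can be recovered from the reparameterized ones:&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Hypothetical illustrative values for the power-relationship parameters.
a, n = 11.7, 4.0

# Reparameterization used by the GLL form, as stated above:
alpha0 = math.log(a ** n)   # alpha_0 = ln(a^n) = n * ln(a)
alpha1 = -n                 # alpha_1 = -n

# Recover (a, n) from (alpha0, alpha1) to confirm the mapping is invertible.
n_back = -alpha1
a_back = math.exp(alpha0 / n_back)

def L(x):
    """Power life-stress relationship L(x) = (a/x)^n."""
    return (a / x) ** n

def L_gll(x):
    """Equivalent GLL form L(x) = exp(alpha0 + alpha1 * ln(x))."""
    return math.exp(alpha0 + alpha1 * math.log(x))
```
&lt;br /&gt;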
==Cumulative Damage Power - Exponential==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship, the mean life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m(t,\,x)}=s(t,\,x)={{\left( \frac{x(t)}{a} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\,x(t))={{e}^{-I(t,\,x)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int{}_{}^{}}}\,}}\,{{\left( \frac{x(u)}{a} \right)}^{n}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\,x)=s(t,\,x){{e}^{-I(t,\,x)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
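To make this formulation concrete, the following sketch evaluates the damage integral, the reliability and the ''pdf'' numerically for a hypothetical two-level step-stress profile (the values of &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are illustrative, not fitted):&lt;br /&gt;
&lt;br /&gt;
```python
import math

a, n = 10.0, 3.0  # hypothetical power-relationship parameters

def x(t):
    """Hypothetical two-level step-stress profile."""
    return 2.0 if t <= 100.0 else 3.0

def I(t, steps=20000):
    """Damage integral I(t) = integral from 0 to t of (x(u)/a)^n du,
    approximated with the midpoint rule."""
    h = t / steps
    return sum((x((k + 0.5) * h) / a) ** n * h for k in range(steps))

def R(t):
    """Reliability under the cumulative damage exponential model."""
    return math.exp(-I(t))

def f(t):
    """pdf: f(t) = s(t) * exp(-I(t)), with s(t) = (x(t)/a)^n."""
    return (x(t) / a) ** n * math.exp(-I(t))
```
&lt;br /&gt;
While the stress is constant, the integral reduces to &amp;lt;math&amp;gt;I(t)=t{{(x/a)}^{n}}\,\!&amp;lt;/math&amp;gt;, which provides an easy check on the numerical quadrature.&lt;br /&gt;
&lt;br /&gt;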
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest (e.g., mean life, failure rate, etc.) can be obtained utilizing the statistical properties definitions presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},\,{{x}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},\,{{x}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },\,x_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },\,x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },\,x_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },\,x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },\,x_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
 &lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
 &lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
 &lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Power - Weibull==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship, the characteristic life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta (t,x)}=s(t,x)={{\left( \frac{x(t)}{a} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,x(t))={{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,{{\left( \frac{x(u)}{a} \right)}^{n}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\beta s(t,x){{\left( I(t,x) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest (e.g., mean life, failure rate, etc.) can be obtained utilizing the statistical properties definitions presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{x}_{i}}){{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta -1}}]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta }} -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt;  is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cumulative Damage-Power-Weibull Example&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Using the simple step-stress data given [[Time-Varying Stress Models#Model Formulation|here]], one would define &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt;  as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 x(t)=\ &amp;amp; 2,\text{    }0&amp;lt;t\le 250 \\ &lt;br /&gt;
 =\ &amp;amp; 3,\text{    }250&amp;lt;t\le 350 \\ &lt;br /&gt;
 =\ &amp;amp; 4,\text{    }350&amp;lt;t\le 370 \\ &lt;br /&gt;
 =\ &amp;amp; 5,\text{    }370&amp;lt;t\le 380 \\ &lt;br /&gt;
 =\ &amp;amp; 6,\text{    }380&amp;lt;t\le 390 \\ &lt;br /&gt;
 =\ &amp;amp; 7,\text{    }390&amp;lt;t\le +\infty   &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
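In code, this step-stress profile can be expressed as a piecewise-constant function, for example:&lt;br /&gt;
&lt;br /&gt;
```python
import bisect

# Breakpoints and stress levels from the step-stress profile defined above.
_bounds = [250.0, 350.0, 370.0, 380.0, 390.0]
_levels = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]

def x(t):
    """Piecewise-constant stress x(t) for t > 0, as defined above.
    bisect_left places t = 250 in the first segment, matching 0 < t <= 250."""
    return _levels[bisect.bisect_left(_bounds, t)]
```
&lt;br /&gt;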
Assuming the power law relationship as the underlying life-stress relationship and the Weibull distribution as the underlying life distribution, one can then formulate the log-likelihood function for the above data set as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L) = \Lambda =\overset{F}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,\ln \left\{ \beta {{\left[ \frac{x(t)}{a} \right]}^{n}}{{\left[ \int_{0}^{{{t}_{i}}}{{\left[ \frac{\left[ x(u) \right]}{a} \right]}^{n}}du \right]}^{\beta -1}} \right\} -\overset{F}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,\left\{ {{\left[ \int_{0}^{{{t}_{i}}}{{\left[ \frac{\left[ x(u) \right]}{a} \right]}^{n}}du \right]}^{\beta }} \right\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; is the number of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are the IPL parameters.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; is the stress profile function.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure.&lt;br /&gt;
&lt;br /&gt;
The parameter estimates for &amp;lt;math&amp;gt;\hat{\beta }\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat{n}\,\!&amp;lt;/math&amp;gt; can be obtained by simultaneously solving &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \beta }=0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial a}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt;. Using ALTA, the parameter estimates for this data set are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \widehat{\beta }=\ &amp;amp; 2.67829 \\ &lt;br /&gt;
  \widehat{a}=\ &amp;amp; 11.72208 \\ &lt;br /&gt;
  \widehat{n}=\ &amp;amp; 3.998466  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the parameters are obtained, one can now determine the reliability for these units at any time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; and stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R\left( t,x\left( t \right) \right)={{e}^{-{{\left[ \int_{0}^{t}{{\left[ \tfrac{x(u)}{a} \right]}^{n}}du \right]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or at a fixed stress level &amp;lt;math&amp;gt;x(t)=2 \text{ V}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;t=300 \text{ hours}\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R\left( t=300,x(t)=2 \right)={{e}^{-{{\left[ \int_{0}^{t}{{\left[ \tfrac{x(u)}{a} \right]}^{n}}du \right]}^{\beta }}}}=97.5%\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
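This result can be reproduced directly from the parameter estimates reported above: at a constant stress, the damage integral reduces to &amp;lt;math&amp;gt;I(t)=t{{(x/a)}^{n}}\,\!&amp;lt;/math&amp;gt;, so the reliability has a closed form. A minimal check:&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Parameter estimates reported above (from ALTA).
beta, a, n = 2.67829, 11.72208, 3.998466

def R_const(t, x):
    """Reliability at a constant stress x:
    I(t) = t * (x/a)^n, and R(t) = exp(-I(t)^beta)."""
    I = t * (x / a) ** n
    return math.exp(-I ** beta)

r = R_const(300.0, 2.0)  # approximately 0.975, matching the 97.5% above
```
&lt;br /&gt;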
The mean time to failure &amp;lt;math&amp;gt;(MTTF)\,\!&amp;lt;/math&amp;gt; at any stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; can be determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MTTF\left( x\left( t \right) \right)=\int_{0}^{\infty }t\left[ \left\{ \beta {{\left[ \frac{x\left( t \right)}{a} \right]}^{n}}{{\left[ \int_{0}^{t}{{\left[ \frac{x\left( u \right)}{a} \right]}^{n}}du \right]}^{\beta -1}} \right\}{{e}^{-{{\left[ \int_{0}^{t}{{\left[ \tfrac{x(u)}{a} \right]}^{n}}du \right]}^{\beta }}}} \right]dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or at a fixed stress level &amp;lt;math&amp;gt;x\left( t \right)=2 \text{ V}\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MTTF\left( x\left( t \right) \right)=1046.3 \text{ hours}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
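This value can be cross-checked with a closed form: at a constant stress, &amp;lt;math&amp;gt;I(t)=t/\eta \,\!&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\eta ={{(a/x)}^{n}}\,\!&amp;lt;/math&amp;gt;, so the model reduces to an ordinary Weibull distribution with mean &amp;lt;math&amp;gt;\eta \Gamma (1+1/\beta )\,\!&amp;lt;/math&amp;gt;. A sketch using the estimates reported above:&lt;br /&gt;
&lt;br /&gt;
```python
import math

beta, a, n = 2.67829, 11.72208, 3.998466  # estimates reported above

# At constant stress x, the characteristic life is eta = (a/x)^n,
# so MTTF = eta * Gamma(1 + 1/beta).
x = 2.0
eta = (a / x) ** n
mttf = eta * math.gamma(1.0 + 1.0 / beta)  # approximately 1046 hours
```
&lt;br /&gt;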
Any other metric of interest (e.g., failure rate, conditional reliability, etc.) can also be determined using the basic definitions given in [[Appendix A: Brief Statistical Background|Appendix A]] and calculated automatically with ALTA.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Power - Lognormal==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship, the median life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,x)}=s(t,x)={{\left( \frac{x(t)}{a} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))=1-\Phi (z)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,x)=\frac{\ln I(t,x)}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,{{\left( \frac{x(u)}{a} \right)}^{n}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\frac{s(t,x)\varphi (z(t,x))}{\sigma _{T}^{\prime }I(t,x)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
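A minimal sketch of this lognormal formulation at a constant stress, using hypothetical values for &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma _{T}^{\prime }\,\!&amp;lt;/math&amp;gt;. Note that &amp;lt;math&amp;gt;R=0.5\,\!&amp;lt;/math&amp;gt; exactly when &amp;lt;math&amp;gt;I(t,x)=1\,\!&amp;lt;/math&amp;gt;, i.e., at the median life:&lt;br /&gt;
&lt;br /&gt;
```python
import math

a, n = 10.0, 3.0  # hypothetical power parameters
sigma = 0.5       # hypothetical log-standard deviation sigma_T'

def I_const(t, x):
    """Damage integral at constant stress: I(t) = t * (x/a)^n."""
    return t * (x / a) ** n

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def R(t, x):
    """Lognormal cumulative damage reliability: R = 1 - Phi(ln I / sigma)."""
    z = math.log(I_const(t, x)) / sigma
    return 1.0 - Phi(z)
```
&lt;br /&gt;
At x = 2 the median life is t = (a/x)^n = 125, where the reliability is exactly 0.5.&lt;br /&gt;
&lt;br /&gt;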
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest (e.g., mean life, failure rate, etc.) can be obtained utilizing the statistical properties definitions presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{x}_{i}})\varphi (z({{T}_{i}},{{x}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{x}_{i}})}] \overset{S}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },x_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage Arrhenius Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where stress can be any function of time and the life-stress relationship is based on the Arrhenius life-stress relationship. Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))=C{{e}^{\tfrac{b}{x(t)}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In ALTA, the above relationship is actually presented in a format consistent with the general log-linear (GLL) relationship for the Arrhenius relationship:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}\tfrac{1}{x(t)}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, instead of displaying &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; as the calculated parameters, the following reparameterization is used:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 {{\alpha }_{0}}=\ &amp;amp; \ln (C) \\ &lt;br /&gt;
 {{\alpha }_{1}}=\ &amp;amp; b  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
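As with the power relationship, a quick check (using hypothetical values for &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt;) confirms that the Arrhenius form and its GLL reparameterization agree:&lt;br /&gt;
&lt;br /&gt;
```python
import math

C, b = 50.0, 2000.0  # hypothetical Arrhenius parameters

# Reparameterization used by the GLL form, as stated above:
alpha0 = math.log(C)  # alpha_0 = ln(C)
alpha1 = b            # alpha_1 = b

def L(x):
    """Arrhenius life-stress relationship L(x) = C * exp(b/x)."""
    return C * math.exp(b / x)

def L_gll(x):
    """Equivalent GLL form L(x) = exp(alpha0 + alpha1 / x)."""
    return math.exp(alpha0 + alpha1 / x)
```
&lt;br /&gt;
Since b &gt; 0, life decreases as the stress (e.g., temperature) increases, as expected.&lt;br /&gt;
&lt;br /&gt;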
==Cumulative Damage Arrhenius - Exponential==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the mean life is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m(t,x)}=s(t,x)=\frac{{{e}^{\tfrac{-b}{x(t)}}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))={{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{\tfrac{-b}{x(u)}}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
f(t,x)=s(t,x){{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest (e.g., mean life, failure rate, etc.) can be obtained utilizing the statistical properties definitions presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},{{x}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},{{x}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Arrhenius - Weibull==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the characteristic life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta (t,x)}=s(t,x)=\frac{{{e}^{\tfrac{-b}{x(t)}}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,x(t))={{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int{}^{}}}\,}}\,\frac{{{e}^{\tfrac{-b}{x(u)}}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\beta s(t,x){{\left( I(t,x) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest (e.g., mean life, failure rate, etc.) can be obtained utilizing the statistical properties definitions presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{x}_{i}}){{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta -1}}]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta }} -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{(I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime }))}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Arrhenius - Lognormal==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the median life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,x)}=s(t,x)=\frac{{{e}^{\tfrac{-b}{x(t)}}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))=1-\Phi (z)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,x)=\frac{\ln I(t,x)}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{\tfrac{-b}{x(u)}}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\frac{s(t,x)\varphi (z(t,x))}{\sigma _{T}^{\prime }I(t,x)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest (e.g., mean life, failure rate, etc.) can be obtained utilizing the statistical properties definitions presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{x}_{i}})\varphi (z({{T}_{i}},{{x}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{x}_{i}})}] \overset{S}{\mathop{+\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },x_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage Exponential Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where stress can be any function of time and the life-stress relationship is based on the exponential relationship. Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential relationship, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
L(x(t))=C{{e}^{bx(t)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
   &lt;br /&gt;
In ALTA, the above relationship is presented in a format consistent with the general log-linear (GLL) relationship for the exponential relationship:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
L(x(t))={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}x(t)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, instead of displaying &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; as the calculated parameters, the following reparameterization is used:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 {{\alpha }_{0}}=\ &amp;amp; \ln (C) \\ &lt;br /&gt;
 {{\alpha }_{1}}=\ &amp;amp; b  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
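The reparameterization above is a direct change of variables. As a minimal sketch (the parameter values below are hypothetical, chosen only for illustration), the two forms of the life-stress relationship can be checked against each other numerically:

```python
import math

# Reparameterization of the exponential life-stress relationship
# L(x) = C * exp(b * x) into GLL form L(x) = exp(alpha_0 + alpha_1 * x):
#   alpha_0 = ln(C),  alpha_1 = b
C, b = 2500.0, -0.02                      # hypothetical example values
alpha_0, alpha_1 = math.log(C), b

# Recover the original parameters from the reparameterized form
C_back = math.exp(alpha_0)
b_back = alpha_1

x = 50.0                                  # an arbitrary stress level
life_original = C * math.exp(b * x)       # C * e^(b*x)
life_reparam = math.exp(alpha_0 + alpha_1 * x)
```

Both expressions give the same life estimate at every stress level, which is why ALTA can report the α parameters without loss of information.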
&lt;br /&gt;
==Cumulative Damage Exponential - Exponential==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential life-stress relationship, the mean life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m(t,x)}=s(t,x)=\frac{{{e}^{-bx(t)}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))={{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{-bx(u)}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
f(t,x)=s(t,x){{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
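Because the stress is an arbitrary function of time, the damage integral I(t,x) generally has no closed form and must be evaluated numerically. A minimal sketch, assuming a hypothetical two-step stress profile and example parameter values (none of which come from the text), is:

```python
import math

def step_stress(t):
    """Assumed two-step profile: x = 2.0 for t < 100, then x = 3.0."""
    return 2.0 if t < 100.0 else 3.0

def damage_integral(t, x_of_u, C, b, n=2000):
    """I(t,x) = integral from 0 to t of exp(-b*x(u))/C du, midpoint rule."""
    h = t / n
    return sum(math.exp(-b * x_of_u((i + 0.5) * h)) / C for i in range(n)) * h

def reliability(t, x_of_u, C, b):
    """R(t,x) = exp(-I(t,x)) for the cumulative damage exponential model."""
    return math.exp(-damage_integral(t, x_of_u, C, b))

C, b = 1000.0, 1.5                        # hypothetical parameter values
R_150 = reliability(150.0, step_stress, C, b)
```

For a constant stress x the integral reduces to I(t) = t·e^(−bx)/C, which provides a simple check on the numerical quadrature.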
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},{{x}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},{{x}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
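The log-likelihood above can be maximized numerically. As a minimal sketch under simplifying assumptions (constant stress per unit, so I(t,x) = t·s; hypothetical synthetic data; a crude grid search standing in for a proper optimizer), the exact-failure and suspension terms look like:

```python
import math

# Hypothetical data: (time, stress, is_failure); False = suspension.
data = [(120.0, 2.0, True), (95.0, 2.0, True), (60.0, 3.0, True),
        (45.0, 3.0, True), (200.0, 2.0, False), (80.0, 3.0, False)]

def neg_log_likelihood(params):
    """Negative log-likelihood for the CD-exponential model, constant stress."""
    ln_C, b = params
    ll = 0.0
    for t, x, failed in data:
        s = math.exp(-b * x - ln_C)       # s(t,x) = exp(-b*x)/C
        I = t * s                         # constant stress: I(t,x) = t*s
        if failed:
            ll += math.log(s) - I         # exact time-to-failure contribution
        else:
            ll += -I                      # right-censored (suspension) term
    return -ll

# Coarse grid search over (ln C, b); a real tool would use a proper optimizer.
best = None
for i in range(20, 80):                   # ln C in [2.0, 8.0)
    for j in range(-20, 20):              # b in [-1.0, 1.0)
        nll = neg_log_likelihood((i * 0.1, j * 0.05))
        if best is None or nll < best[0]:
            best = (nll, i * 0.1, j * 0.05)
```

Interval terms would add `ln(R_L − R_R)` contributions in the same way; confidence bounds would then come from the Fisher information matrix at the optimum.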
&lt;br /&gt;
==Cumulative Damage Exponential - Weibull==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential life-stress relationship, the characteristic life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta (t,x)}=s(t,x)=\frac{{{e}^{-b\cdot x(t)}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,x(t))={{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{-bx(u)}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\beta s(t,x){{\left( I(t,x) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
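A useful sanity check on this formulation is that under a constant stress it collapses to an ordinary Weibull distribution with characteristic life η = 1/s. A minimal sketch, with hypothetical parameter values:

```python
import math

def weibull_cd_reliability(t, s, beta):
    """R(t) = exp(-I(t)^beta); under constant stress, I(t) = s*t."""
    return math.exp(-((s * t) ** beta))

# Hypothetical example values
C, b, x, beta = 1000.0, 1.5, 2.0, 2.0
s = math.exp(-b * x) / C                  # s(t,x) = exp(-b*x)/C, constant here
eta = 1.0 / s                             # characteristic life at this stress
t = 5000.0
R_cd = weibull_cd_reliability(t, s, beta)
R_weibull = math.exp(-((t / eta) ** beta))  # standard Weibull reliability
```

The two reliability values agree, confirming that the cumulative damage model reduces to the constant-stress Weibull case.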
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{x}_{i}}){{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta -1}}] -\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta }}-\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Exponential - Lognormal==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential life-stress relationship, the median life is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,x)}=s(t,x)=\frac{{{e}^{-bx(t)}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))=1-\Phi (z)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,x)=\frac{\ln I(t,x)}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int{}^{}}}\,}}\,\frac{{{e}^{-bx(u)}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\frac{s(t,x)\varphi (z(t,x))}{\sigma _{T}^{\prime }I(t,x)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{x}_{i}})\varphi (z({{T}_{i}},{{x}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{x}_{i}})}] \overset{S}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },x_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage General Log-Linear Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where multiple stress types are used in the analysis and where the stresses can be any function of time.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage General Log-Linear - Exponential==&lt;br /&gt;
Given &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; time-varying stresses &amp;lt;math&amp;gt;\underline{X}=({{X}_{1}}(t),{{X}_{2}}(t)...{{X}_{n}}(t))\,\!&amp;lt;/math&amp;gt;, the life-stress relationship is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m\left( t,\overset{\_}{\mathop{x}}\, \right)}=s(t,\overset{\_}{\mathop{x}}\,)={{e}^{-{{\alpha }_{0}}-\underset{j=1}{\mathop{\overset{n}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(t)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
where &amp;lt;math&amp;gt;{{\alpha }_{0}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\alpha }_{j}}\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
This relationship can be further modified through the use of transformations and can be reduced to the relationships discussed previously (power, Arrhenius and exponential), if so desired.&lt;br /&gt;
The exponential reliability function of the unit under multiple stresses is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\overset{\_}{\mathop{x}}\,)={{e}^{-I(t,\overset{\_}{\mathop{x}}\,)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\overset{\_}{\mathop{x}}\,)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{du}{{{e}^{^{^{{{\alpha }_{0}}+\overset{n}{\mathop{\underset{j=1}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(t)}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\overset{\_}{\mathop{x}}\,)=s(t,\overset{\_}{\mathop{x}}\,){{e}^{-I(t,\overset{\_}{\mathop{x}}\,)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
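With multiple time-varying stresses, the damage integral sums the α-weighted stresses inside the exponent before integrating. A minimal sketch with two hypothetical stress profiles (a constant temperature-like stress and a stepped voltage-like stress) and assumed α values:

```python
import math

def gll_damage_integral(t, stresses, alphas, n=2000):
    """I(t,x) = integral from 0 to t of exp(-(a0 + sum_j a_j*x_j(u))) du,
    evaluated with the midpoint rule. `stresses` is a list of functions of
    time; `alphas` is [a0, a1, ..., an]."""
    h = t / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        expo = alphas[0] + sum(a * x(u) for a, x in zip(alphas[1:], stresses))
        total += math.exp(-expo)
    return total * h

# Hypothetical profiles: constant "temperature", stepped "voltage" at t = 50
temp = lambda u: 2.5
volt = lambda u: 1.0 if u < 50.0 else 1.5
alphas = [4.0, 0.8, 0.6]                  # assumed a0, a1, a2
I = gll_damage_integral(100.0, [temp, volt], alphas)
R = math.exp(-I)                          # GLL-exponential reliability
```

When every stress is constant, the integrand is constant and I(t) = t·e^(−(a0 + Σ aj·xj)), which again serves as a check on the quadrature.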
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage General Log-Linear - Weibull==&lt;br /&gt;
Given &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; time-varying stresses &amp;lt;math&amp;gt;\underline{X}=({{X}_{1}}(t),{{X}_{2}}(t)...{{X}_{n}}(t))\,\!&amp;lt;/math&amp;gt;, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta \left( t,\overset{\_}{\mathop{x}}\, \right)}=s(t,\overset{\_}{\mathop{x}}\,)={{e}^{^{^{-{{\alpha }_{0}}-\overset{n}{\mathop{\underset{j=1}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(t)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\alpha }_{0}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\alpha }_{j}}\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
&lt;br /&gt;
The Weibull reliability function of the unit under multiple stresses is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\overset{\_}{\mathop{x}}\,)={{e}^{-{{\left( I(t,\overset{\_}{\mathop{x}}\,) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\overset{\_}{\mathop{x}}\,)=\underset{0}{\mathop{\overset{t}{\mathop{\int{}^{}}}\,}}\,\frac{du}{{{e}^{^{{{\alpha }_{0}}+\underset{j=1}{\mathop{\overset{n}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(u)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\overset{\_}{\mathop{x}}\,)=\beta s(t,\overset{\_}{\mathop{x}}\,){{\left( I(t,\overset{\_}{\mathop{x}}\,) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,\overset{\_}{\mathop{x}}\,) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}){{\left( I({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}) \right)}^{\beta -1}}]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}) \right)}^{\beta }} -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },\bar{x}_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },\bar{x}_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },\bar{x}_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Ri}^{\prime \prime },\bar{x}_{i}^{\prime \prime }) \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage General Log-Linear - Lognormal==&lt;br /&gt;
Given &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; time-varying stresses &amp;lt;math&amp;gt;\underline{X}=({{X}_{1}}(t),{{X}_{2}}(t)...{{X}_{n}}(t))\,\!&amp;lt;/math&amp;gt;, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,\bar{x})}=s(t,\overset{\_}{\mathop{x}}\,)={{e}^{^{^{-{{\alpha }_{0}}-\overset{n}{\mathop{\underset{j=1}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(t)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\alpha }_{0}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\alpha }_{j}}\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
&lt;br /&gt;
The lognormal reliability function of the unit under multiple stresses is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\bar{x})=1-\Phi (z(t,\bar{x}))\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,\bar{x})=\frac{\ln I(t,\bar{x})}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\bar{x})=\underset{0}{\mathop{\overset{t}{\mathop{\int{}^{}}}\,}}\,\frac{du}{{{e}^{^{{{\alpha }_{0}}+\underset{j=1}{\mathop{\overset{n}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(u)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\bar{x})=\frac{s(t,\bar{x})\varphi (z(t,\bar{x}))}{\sigma _{T}^{\prime }I(t,\bar{x})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{{\bar{x}}}_{i}})\varphi (z({{T}_{i}},{{{\bar{x}}}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{{\bar{x}}}_{i}})}] \overset{S}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },\bar{x}_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },\bar{x}_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },\bar{x}_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Confidence Intervals=&lt;br /&gt;
Using the same methodology as in previous sections, approximate confidence intervals can be derived and applied to all results of interest using the Fisher Matrix approach discussed in [[Appendix A: Brief Statistical Background|Appendix A]]. ALTA utilizes such intervals on all results.&lt;br /&gt;
&lt;br /&gt;
=Notes on Trigonometric Functions=&lt;br /&gt;
Trigonometric functions are sometimes used in accelerated life tests; however, ALTA does not include them directly. A trigonometric stress profile can be defined by its frequency and magnitude, and these two quantities can then be treated as two constant stresses. The GLL model discussed in [[General Log-Linear Relationship]] can then be applied for modeling.&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Time-Varying_Stress_Models&amp;diff=64902</id>
		<title>Time-Varying Stress Models</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Time-Varying_Stress_Models&amp;diff=64902"/>
		<updated>2017-02-01T23:57:36Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Cumulative Damage Power - Weibull */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK|10}}&lt;br /&gt;
Traditionally, accelerated tests that use a time-varying stress application have been used to assure failures quickly. This is highly desirable given the pressure on industry today to shorten new product introduction time. The most basic type of time-varying stress test is a step-stress test. In step-stress accelerated testing, the test units are subjected to successively higher stress levels in predetermined stages, and thus follow a time-varying stress profile. The units usually start at a lower stress level and at a predetermined time, or failure number, the stress is increased and the test continues. The test is terminated when all units have failed, when a certain number of failures are observed or when a certain time has elapsed. Step-stress testing can substantially shorten the reliability test&#039;s duration. In addition to step-stress testing, there are many other types of time-varying stress profiles that can be used in accelerated life testing. However, it should be noted that there is more uncertainty in the results from such time-varying stress tests than from traditional constant stress tests of the same length and sample size.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When dealing with data from accelerated tests with time-varying stresses, the life-stress relationship must take into account the cumulative effect of the applied stresses. Such a model is commonly referred to as a &#039;&#039;cumulative damage&#039;&#039; or &#039;&#039;cumulative exposure&#039;&#039; model. Nelson [[Appendix_E:_References|[28]]] defines and presents the derivation and assumptions of such a model. ALTA includes the cumulative damage model for the analysis of time-varying stress data. This section presents an introduction to the model formulation and its application.&lt;br /&gt;
&lt;br /&gt;
=Model Formulation=&lt;br /&gt;
To formulate the cumulative exposure/damage model, consider a simple step-stress experiment where an electronic component was subjected to a voltage stress, starting at 2V (use stress level) and increased to 7V in stepwise increments, as shown in the next figure. The following steps, in hours, were used to apply stress to the products under test: 0 to 250, 2V; 250 to 350, 3V; 350 to 370, 4V; 370 to 380, 5V; 380 to 390, 6V; and 390 to 400, 7V.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA12.1.gif|center|550px|Step profile for a simple voltage stress test.]]&lt;br /&gt;
&lt;br /&gt;
In this example, 11 units were available for the test. All units were tested using this same stress profile. Units that failed were removed from the test and their total times on test were recorded. The following times-to-failure were observed in the test, in hours: 280, 310, 330, 352, 360, 366, 371, 374, 378, 381 and 385. The first failure in this test occurred at 280 hours when the stress was 3V. During the test, this unit experienced a period of time at 2V before failing at 3V. If the stress were always 2V, one would expect the unit to fail at a time later than 280 hours, while if the unit were always at 3V, one would expect the failure time to be earlier than 280 hours. The problem faced by the analyst in this case is to determine some equivalency between the stresses. In other words, what is the equivalent of 280 hours (with 250 hours spent at 2V and 30 hours spent at 3V) at a constant 2V stress or at a constant 3V stress?&lt;br /&gt;
&lt;br /&gt;
==Mathematical Formulation for a Step-Stress Model==&lt;br /&gt;
To mathematically formulate the model, consider the step-stress test shown in the next figure, with stresses S1, S2 and S3. Furthermore, assume that the underlying life distribution is the Weibull distribution, and also assume an inverse power law relationship between the Weibull scale parameter and the applied stress.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA12.2.png|center|300px|Step-stress profile and the corresponding life distributions.]]&lt;br /&gt;
&lt;br /&gt;
From the inverse power law relationship, the scale parameter, &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt;, of the Weibull distribution can be expressed as an inverse power function of the stress, &amp;lt;math&amp;gt;V \,\!&amp;lt;/math&amp;gt; or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\eta(V)=\frac{1}{K{{V}^{n}}} \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
The fraction of the units failing by time &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; under a constant stress &amp;lt;math&amp;gt;V = {{S}_{1}}\,\!&amp;lt;/math&amp;gt;, is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
F(t;V)=1-R(t;V)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t;V)={{e}^{-{{\left[ \tfrac{t}{\eta (V)} \right]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;cdf&#039;&#039; for each constant stress level is: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{F}_{1}}(t;{{S}_{1}})= &amp;amp; 1-{{e}^{-{{(KS_{1}^{n}t)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{F}_{2}}(t;{{S}_{2}})= &amp;amp; 1-{{e}^{-{{(KS_{2}^{n}t)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{F}_{3}}(t;{{S}_{3}})= &amp;amp; 1-{{e}^{-{{(KS_{3}^{n}t)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above equations would suffice if the units did not experience different stresses during the test, as they did in this case. To analyze the data from this step-stress test, a cumulative exposure model is needed. Such a model will relate the life distribution, in this case the Weibull distribution, of the units at one stress level to the distribution at the next stress level. In formulating this model, it is assumed that the remaining life of the test units depends only on the cumulative exposure the units have seen and that the units do not remember how such exposure was accumulated. Moreover, since the units are held at a constant stress at each step, the surviving units will fail according to the distribution at the current step, but with a starting age corresponding to the total accumulated time up to the beginning of the current step. This model can be formulated as follows:&lt;br /&gt;
&lt;br /&gt;
*Units failing during the first step have not experienced any other stresses and will fail according to the &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &#039;&#039;cdf&#039;&#039;. Units that made it to the second step will fail according to the &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &#039;&#039;cdf&#039;&#039;, but will have accumulated some equivalent age, &amp;lt;math&amp;gt;{{\varepsilon }_{1}},\,\!&amp;lt;/math&amp;gt; at this stress level (given the fact that they have spent &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; hours at &amp;lt;math&amp;gt;{{S}_{1}})\,\!&amp;lt;/math&amp;gt; or: &lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}(t;{{S}_{2}})=1-{{e}^{-{{[KS_{2}^{n}((t-{{t}_{1}})+{{\varepsilon }_{1}})]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
In other words, the probability that the units will fail at a time, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, while at &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; and between &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{t}_{2}}\,\!&amp;lt;/math&amp;gt; is equivalent to the probability that the units would fail after accumulating &amp;lt;math&amp;gt;(t-{{t}_{1}})\,\!&amp;lt;/math&amp;gt; plus some equivalent time, &amp;lt;math&amp;gt;{{\varepsilon }_{1}},\,\!&amp;lt;/math&amp;gt; to account for the exposure the units have seen at &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
*The equivalent time, &amp;lt;math&amp;gt;{{\varepsilon }_{1}},\,\!&amp;lt;/math&amp;gt; will be the time by which the probability of failure at &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; is equal to the probability of failure at &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; after an exposure of &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; or: &lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
	  {{F}_{1}}({{t}_{1}};{{S}_{1}})=\ &amp;amp; {{F}_{2}}({{\varepsilon }_{1}},{{S}_{2}}) \\ &lt;br /&gt;
	 1-{{e}^{-{{(KS_{1}^{n}{{t}_{1}})}^{\beta }}}}=\ &amp;amp; 1-{{e}^{-{{(KS_{2}^{n}{{\varepsilon }_{1}})}^{\beta }}}} \\ &lt;br /&gt;
	 S_{1}^{n}{{t}_{1}}=\ &amp;amp; S_{2}^{n}{{\varepsilon }_{1}} \\ &lt;br /&gt;
	 {{\varepsilon }_{1}}=\ &amp;amp; {{t}_{1}}{{\left( \frac{{{S}_{1}}}{{{S}_{2}}} \right)}^{n}}  &lt;br /&gt;
	\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
*One would repeat this for step 3 taking into account the accumulated exposure during steps 1 and 2, or in more general terms and for the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; step: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{i}}(t;{{S}_{i}})=1-{{e}^{-{{[KS_{i}^{n}((t-{{t}_{i-1}})+{{\varepsilon }_{i-1}})]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\varepsilon }_{i-1}}=({{t}_{i-1}}-{{t}_{i-2}}+{{\varepsilon }_{i-2}}){{\left( \frac{{{S}_{i-1}}}{{{S}_{i}}} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
*Once the &#039;&#039;cdf&#039;&#039; for each step has been obtained, the &#039;&#039;pdf&#039;&#039; can then be determined utilizing: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{f}_{i}}(t,{{S}_{i}})=-\frac{d}{dt}\left[ {{F}_{i}}(t,{{S}_{i}}) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
Once the model has been formulated, model parameters (i.e., &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;) can be computed utilizing maximum likelihood estimation methods.&lt;br /&gt;
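The equivalent-age recursion above lends itself to direct computation. The following is a minimal sketch (not ALTA's implementation); the exponent n and the step profile values are illustrative assumptions taken from the voltage example:

```python
# Equivalent-age recursion for a step-stress profile under the inverse
# power law. Illustrative sketch only; n and the profile are assumptions.

def equivalent_ages(step_times, stresses, n):
    """Return a list whose element i is the equivalent age carried into
    step i (0-based); element 0 is 0.0 since no exposure precedes step 1.

    Recursion from the text: the dwell time in step i plus the equivalent
    age entering step i, scaled by (S_i / S_{i+1})**n.
    """
    eps = [0.0]
    prev_end = 0.0
    for i in range(len(stresses) - 1):
        dwell = step_times[i] - prev_end   # time actually spent in step i
        eps.append((dwell + eps[-1]) * (stresses[i] / stresses[i + 1]) ** n)
        prev_end = step_times[i]
    return eps

# Two-step check: 250 hours at 2V, then 3V, with an assumed n = 4
eps = equivalent_ages([250.0], [2.0, 3.0], 4.0)
```

For the two-step case this reproduces the closed form derived above, where the equivalent age entering the second step is the first dwell time multiplied by the stress ratio raised to the power n.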
&lt;br /&gt;
The previous example can be expanded for any time-varying stress. ALTA allows you to define any stress profile. For example, the stress can be a ramp stress, a monotonically increasing stress, sinusoidal, etc. This section presents a generalized formulation of the cumulative damage model, where stress can be any function of time.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;noprint&amp;quot;&amp;gt;&lt;br /&gt;
{{Example:CD-GLL_Weibull}}&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage Power Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where stress can be any function of time and the life-stress relationship is based on the power relationship. Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship,  the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{\left( \frac{a}{x(t)} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In ALTA, the above relationship is actually presented in a format consistent with the general log-linear (GLL) relationship for the power law relationship:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}\ln \left( x(t) \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, instead of displaying &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; as the calculated parameters, the following reparameterization is used:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 {{\alpha }_{0}}=\ &amp;amp; \ln ({{a}^{n}}) \\ &lt;br /&gt;
 {{\alpha }_{1}}=\ &amp;amp; -n  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
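This reparameterization is easy to verify numerically. The following sketch converts between the two forms; the values of a and n are illustrative only, not fitted results:

```python
import math

# Mapping between the power form L = (a/x)^n and the GLL form
# L = exp(alpha0 + alpha1*ln(x)). Illustrative values only.

def power_to_gll(a, n):
    # alpha0 = ln(a^n) = n*ln(a), alpha1 = -n
    return n * math.log(a), -n

def gll_to_power(alpha0, alpha1):
    n = -alpha1
    a = math.exp(alpha0 / n)   # invert alpha0 = n*ln(a)
    return a, n

a0, a1 = power_to_gll(11.72, 4.0)
a_back, n_back = gll_to_power(a0, a1)
```

Both parameterizations give the same life at any stress level, which is why ALTA can display either set of parameters without changing the fitted model.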
&lt;br /&gt;
==Cumulative Damage Power - Exponential==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship, the mean life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m(t,\,x)}=s(t,\,x)={{\left( \frac{x(t)}{a} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\,x(t))={{e}^{-I(t,\,x)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int{}_{}^{}}}\,}}\,{{\left( \frac{x(u)}{a} \right)}^{n}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\,x)=s(t,\,x){{e}^{-I(t,\,x)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
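Since I(t,x) is simply a time integral of the normalized stress raised to a power, the reliability can be evaluated numerically for any profile. A minimal sketch follows, with an assumed ramp profile and illustrative a and n (not fitted values):

```python
import math

# Cumulative damage power-exponential reliability, R(t) = exp(-I(t,x)),
# with I(t,x) the integral of (x(u)/a)^n from 0 to t, evaluated by the
# midpoint rule. The profile and parameters are illustrative assumptions.

def I_cumulative(x_of_t, t, a, n, steps=10000):
    """Midpoint-rule approximation of the exposure integral I(t,x)."""
    h = t / steps
    return sum((x_of_t((k + 0.5) * h) / a) ** n * h for k in range(steps))

def reliability(x_of_t, t, a, n):
    return math.exp(-I_cumulative(x_of_t, t, a, n))

ramp = lambda u: 2.0 + 0.01 * u        # assumed ramp stress profile
R = reliability(ramp, 100.0, 10.0, 2.0)
```

For a constant profile the quadrature collapses to I = t(x/a)^n, which matches the closed-form constant-stress exponential model.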
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest (e.g., mean life, failure rate, etc.) can be obtained utilizing the statistical properties definitions presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},\,{{x}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},\,{{x}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },\,x_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },\,x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },\,x_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },\,x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },\,x_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
 &lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
 &lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
 &lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Power - Weibull==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship, the characteristic life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta (t,x)}=s(t,x)={{\left( \frac{x(t)}{a} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,x(t))={{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,{{\left( \frac{x(u)}{a} \right)}^{n}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\beta s(t,x){{\left( I(t,x) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{x}_{i}}){{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta -1}}]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta }} -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt;  is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cumulative Damage Power - Weibull Example&#039;&#039;&#039;&lt;br /&gt;

&lt;br /&gt;
Using the simple step-stress data given [[Time-Varying Stress Models#Model Formulation|here]], one would define &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt;  as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 x(t)=\ &amp;amp; 2,\text{    }0&amp;lt;t\le 250 \\ &lt;br /&gt;
 =\ &amp;amp; 3,\text{    }250&amp;lt;t\le 350 \\ &lt;br /&gt;
 =\ &amp;amp; 4,\text{    }350&amp;lt;t\le 370 \\ &lt;br /&gt;
 =\ &amp;amp; 5,\text{    }370&amp;lt;t\le 380 \\ &lt;br /&gt;
 =\ &amp;amp; 6,\text{    }380&amp;lt;t\le 390 \\ &lt;br /&gt;
 =\ &amp;amp; 7,\text{    }390&amp;lt;t\le +\infty   &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assuming a power relation as the underlying life-stress relationship and the Weibull distribution as the underlying life distribution, one can then formulate the log-likelihood function for the above data set as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L) = \Lambda =\overset{F}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,\ln \left\{ \beta {{\left[ \frac{x(t)}{a} \right]}^{n}}{{\left[ \int_{0}^{{{t}_{i}}}{{\left[ \frac{\left[ x(u) \right]}{a} \right]}^{n}}du \right]}^{\beta -1}} \right\} -\overset{F}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,\left\{ {{\left[ \int_{0}^{{{t}_{i}}}{{\left[ \frac{\left[ x(u) \right]}{a} \right]}^{n}}du \right]}^{\beta }} \right\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; is the number of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are the IPL parameters.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; is the stress profile function.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure.&lt;br /&gt;
&lt;br /&gt;
The parameter estimates for &amp;lt;math&amp;gt;\hat{\beta }\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat{n}\,\!&amp;lt;/math&amp;gt; can be obtained by simultaneously solving &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \beta }=0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial a}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt;. Using ALTA, the parameter estimates for this data set are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \widehat{\beta }=\ &amp;amp; 2.67829 \\ &lt;br /&gt;
  \widehat{a}=\ &amp;amp; 11.72208 \\ &lt;br /&gt;
  \widehat{n}=\ &amp;amp; 3.998466  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the parameters are obtained, one can now determine the reliability for these units at any time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; and stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R\left( t,x\left( t \right) \right)={{e}^{-{{\left[ \int_{0}^{t}{{\left[ \tfrac{x(u)}{a} \right]}^{n}}du \right]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or at a fixed stress level &amp;lt;math&amp;gt;x(t)=2 \text{ V}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;t=300 \text{ hours}\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R\left( t=300,x(t)=2 \right)={{e}^{-{{\left[ \int_{0}^{t}{{\left[ \tfrac{x(u)}{a} \right]}^{n}}du \right]}^{\beta }}}}=97.5\%\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
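This value can be checked numerically. At the constant use stress the exposure integral reduces to I = t(x/a)^n, so with the parameter estimates quoted above:

```python
import math

# Numerical check of the reliability at t = 300 hours and a constant
# stress of 2V, using the fitted beta, a and n from the text.

beta, a, n = 2.67829, 11.72208, 3.998466
t, x = 300.0, 2.0
I = t * (x / a) ** n          # exposure integral at constant stress
R = math.exp(-I ** beta)      # cumulative damage power-Weibull reliability
```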
&lt;br /&gt;
The mean time to failure &amp;lt;math&amp;gt;(MTTF)\,\!&amp;lt;/math&amp;gt; at any stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; can be determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MTTF\left( x\left( t \right) \right)=\int_{0}^{\infty }t\left[ \left\{ \beta {{\left[ \frac{x\left( t \right)}{a} \right]}^{n}}{{\left[ \int_{0}^{t}{{\left[ \frac{x\left( u \right)}{a} \right]}^{n}}du \right]}^{\beta -1}} \right\}{{e}^{-{{\left[ \int_{0}^{t}{{\left[ \tfrac{x(u)}{a} \right]}^{n}}du \right]}^{\beta }}}} \right]dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or at a fixed stress level &amp;lt;math&amp;gt;x\left( t \right)=2 \text{ V}\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MTTF\left( x\left( t \right) \right)=1046.3 \text{ hours}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
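At a constant stress, the model reduces to an ordinary Weibull distribution with scale parameter eta = (a/x)^n, so the MTTF integral above collapses to eta multiplied by Gamma(1 + 1/beta). A quick numerical check using the estimates quoted earlier:

```python
import math

# Closed-form MTTF check at the constant use stress x = 2V, where the
# cumulative damage power-Weibull model reduces to a plain Weibull
# with eta = (a/x)^n and MTTF = eta * Gamma(1 + 1/beta).

beta, a, n = 2.67829, 11.72208, 3.998466
x = 2.0
eta = (a / x) ** n
mttf = eta * math.gamma(1.0 + 1.0 / beta)
```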
&lt;br /&gt;
Any other metric of interest (e.g., failure rate, conditional reliability etc.) can also be determined using the basic definitions given in [[Appendix A: Brief Statistical Background|Appendix A]] and calculated automatically with ALTA.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Power - Lognormal==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship, the median life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,x)}=s(t,x)={{\left( \frac{x(t)}{a} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))=1-\Phi (z)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,x)=\frac{\ln I(t,x)}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,{{\left( \frac{x(u)}{a} \right)}^{n}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\frac{s(t,x)\varphi (z(t,x))}{\sigma _{T}^{\prime }I(t,x)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
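As a sketch, R = 1 - Phi(z) can be evaluated with the standard normal cdf built from the error function. The parameters a, n, sigma and the constant stress below are illustrative assumptions, not fitted values:

```python
import math

# Cumulative damage power-lognormal reliability, R = 1 - Phi(z) with
# z = ln(I(t,x)) / sigma. Phi is built from math.erf; all numeric
# values are illustrative assumptions.

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lognormal_cd_reliability(I, sigma):
    """R = 1 - Phi(ln(I)/sigma) for a given exposure integral I."""
    return 1.0 - std_normal_cdf(math.log(I) / sigma)

# At constant stress x the exposure integral is I = t*(x/a)^n
a, n, sigma = 10.0, 2.0, 0.5
t, x = 150.0, 2.0
I = t * (x / a) ** n
R = lognormal_cd_reliability(I, sigma)
```

When the exposure integral equals 1, z = 0 and the reliability is exactly 0.5, which is the median-life property of the lognormal formulation.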
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{x}_{i}})\varphi (z({{T}_{i}},{{x}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{x}_{i}})}] \overset{S}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },x_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage Arrhenius Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where stress can be any function of time and the life-stress relationship is based on the Arrhenius life-stress relationship. Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))=C{{e}^{\tfrac{b}{x(t)}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In ALTA, the above relationship is actually presented in a format consistent with the general log-linear (GLL) relationship for the Arrhenius relationship:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}\tfrac{1}{x(t)}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, instead of displaying &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; as the calculated parameters, the following reparameterization is used:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 {{\alpha }_{0}}=\ &amp;amp; \ln (C) \\ &lt;br /&gt;
 {{\alpha }_{1}}=\ &amp;amp; b  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
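&lt;br /&gt;
The reparameterization can be checked numerically. The following is a minimal Python sketch, not ALTA's implementation; the parameter and stress values are illustrative, not fitted results:&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Round trip between the (C, b) and GLL (alpha_0, alpha_1) parameterizations.
def to_gll(C, b):
    """Map (C, b) to the GLL parameters (alpha_0, alpha_1) = (ln C, b)."""
    return math.log(C), b

def from_gll(alpha0, alpha1):
    """Recover (C, b) from the GLL parameters."""
    return math.exp(alpha0), alpha1

# Both forms give the same life at any stress level:
#   L(x) = C * exp(b / x) = exp(alpha_0 + alpha_1 / x)
C, b = 2.5e-4, 4100.0   # illustrative values
a0, a1 = to_gll(C, b)
x = 353.0               # illustrative stress level (temperature in K)
life_cb = C * math.exp(b / x)
life_gll = math.exp(a0 + a1 / x)
```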
&lt;br /&gt;
==Cumulative Damage Arrhenius - Exponential==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the mean life is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m(t,x)}=s(t,x)=\frac{{{e}^{\tfrac{-b}{x(t)}}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))={{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{\tfrac{-b}{x(u)}}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
f(t,x)=s(t,x){{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
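&lt;br /&gt;
The reliability and ''pdf'' above can be evaluated for any stress profile once the damage integral is computed numerically. The following Python sketch uses the trapezoidal rule with an illustrative step-stress temperature profile; the parameter values are assumptions for the example, not ALTA's implementation:&lt;br /&gt;
&lt;br /&gt;
```python
import math

C, b = 1.0e-3, 3500.0  # illustrative parameter values, not fitted results

def x_profile(u):
    """Step-stress profile: 348 K for the first 100 hours, then 378 K."""
    return 348.0 if u < 100.0 else 378.0

def s(u):
    # reciprocal life under the instantaneous stress: exp(-b / x(u)) / C
    return math.exp(-b / x_profile(u)) / C

def I(t, n=10000):
    """Trapezoidal approximation of the damage integral on [0, t]."""
    h = t / n
    total = 0.5 * (s(0.0) + s(t))
    for k in range(1, n):
        total += s(k * h)
    return total * h

def R(t):
    return math.exp(-I(t))

def f(t):
    return s(t) * math.exp(-I(t))
```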
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},{{x}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},{{x}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
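&lt;br /&gt;
The grouped log-likelihood above can be sketched in Python. For brevity, each group is held at a constant stress so that the damage integral reduces to a product; the data and parameter values are illustrative assumptions:&lt;br /&gt;
&lt;br /&gt;
```python
import math

C, b = 1.0e-3, 3500.0  # illustrative parameter values

def s(x):
    """Reciprocal life at a constant stress x (in K)."""
    return math.exp(-b / x) / C

def I(t, x):
    # at constant stress the damage integral reduces to s(x) * t
    return s(x) * t

def log_likelihood(failures, suspensions, intervals):
    """failures: (N_i, T_i, x_i); suspensions: (N'_i, T'_i, x'_i);
    intervals: (N''_i, T''_Li, T''_Ri, x''_i)."""
    LL = 0.0
    for N, T, x in failures:        # exact failure groups
        LL += N * math.log(s(x)) - N * I(T, x)
    for N, T, x in suspensions:     # right-censored groups
        LL -= N * I(T, x)
    for N, TL, TR, x in intervals:  # interval-censored groups
        R_L = math.exp(-I(TL, x))
        R_R = math.exp(-I(TR, x))
        LL += N * math.log(R_L - R_R)
    return LL

# illustrative grouped data: (count, time in hours, stress in K)
failures = [(3, 80.0, 350.0), (2, 120.0, 350.0)]
suspensions = [(4, 150.0, 350.0)]
intervals = [(2, 60.0, 90.0, 340.0)]
LL = log_likelihood(failures, suspensions, intervals)
```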
&lt;br /&gt;
==Cumulative Damage Arrhenius - Weibull==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the characteristic life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta (t,x)}=s(t,x)=\frac{{{e}^{\tfrac{-b}{x(t)}}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,x(t))={{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{\tfrac{-b}{x(u)}}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\beta s(t,x){{\left( I(t,x) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
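&lt;br /&gt;
The Weibull case differs from the exponential one only in how the damage integral enters the reliability function. A Python sketch under an illustrative linear stress ramp (all values assumed for the example, not ALTA's implementation):&lt;br /&gt;
&lt;br /&gt;
```python
import math

C, b, beta = 1.0e-3, 3500.0, 1.8  # illustrative values, not fitted results

def x_profile(u):
    """Linear ramp: 330 K at t = 0, rising 0.1 K per hour."""
    return 330.0 + 0.1 * u

def s(u):
    return math.exp(-b / x_profile(u)) / C

def I(t, n=4000):
    """Trapezoidal approximation of the damage integral on [0, t]."""
    h = t / n
    return h * (0.5 * (s(0.0) + s(t)) + sum(s(k * h) for k in range(1, n)))

def R(t):
    # Weibull form: the damage integral is raised to the beta power
    return math.exp(-I(t) ** beta)

def f(t):
    It = I(t)
    return beta * s(t) * It ** (beta - 1) * math.exp(-It ** beta)
```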
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{x}_{i}}){{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta -1}}]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta }} -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{(I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime }))}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Arrhenius - Lognormal==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the median life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,x)}=s(t,x)=\frac{{{e}^{\tfrac{-b}{x(t)}}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))=1-\Phi (z)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,x)=\frac{\ln I(t,x)}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{\tfrac{-b}{x(u)}}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\frac{s(t,x)\varphi (z(t,x))}{\sigma _{T}^{\prime }I(t,x)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
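&lt;br /&gt;
In the lognormal case the damage integral enters through the standard normal cdf. A Python sketch at a constant illustrative stress, where the damage integral has a closed form (the standard normal cdf is built from the error function; all parameter values are assumptions for the example):&lt;br /&gt;
&lt;br /&gt;
```python
import math

C, b, sigma = 1.0e-3, 3500.0, 0.5  # illustrative values; sigma is sigma_T'
X = 360.0                          # constant stress level (K)

def Phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def s():
    return math.exp(-b / X) / C

def I(t):
    # at constant stress the damage integral reduces to s * t
    return s() * t

def z(t):
    return math.log(I(t)) / sigma

def R(t):
    return 1.0 - Phi(z(t))

def f(t):
    return s() * phi(z(t)) / (sigma * I(t))
```
&lt;br /&gt;
As a sanity check, when &amp;lt;math&amp;gt;I(t,x)=1\,\!&amp;lt;/math&amp;gt; the argument &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; is zero and the reliability is exactly 0.5, which recovers the median life at constant stress.&lt;br /&gt;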
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{x}_{i}})\varphi (z({{T}_{i}},{{x}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{x}_{i}})}]+\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },x_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage Exponential Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where stress can be any function of time and the life-stress relationship is based on the exponential relationship. Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential relationship, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
L(x(t))=C{{e}^{bx(t)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
   &lt;br /&gt;
In ALTA, the above relationship is actually presented in a format consistent with the general log-linear (GLL) relationship for the exponential relationship:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}x(t)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, instead of displaying &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; as the calculated parameters, the following reparameterization is used:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 {{\alpha }_{0}}=\ &amp;amp; \ln (C) \\ &lt;br /&gt;
 {{\alpha }_{1}}=\ &amp;amp; b  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Exponential - Exponential==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential life-stress relationship, the mean life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m(t,x)}=s(t,x)=\frac{{{e}^{-bx(t)}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))={{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{-bx(u)}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
f(t,x)=s(t,x){{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},{{x}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},{{x}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Exponential - Weibull==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential life-stress relationship, the characteristic life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta (t,x)}=s(t,x)=\frac{{{e}^{-bx(t)}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,x(t))={{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{-bx(u)}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\beta s(t,x){{\left( I(t,x) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{x}_{i}}){{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta -1}}] -\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta }}-\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Exponential - Lognormal==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential life-stress relationship, the median life is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,x)}=s(t,x)=\frac{{{e}^{-bx(t)}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))=1-\Phi (z)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,x)=\frac{\ln I(t,x)}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{-bx(u)}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\frac{s(t,x)\varphi (z(t,x))}{\sigma _{T}^{\prime }I(t,x)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{x}_{i}})\varphi (z({{T}_{i}},{{x}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{x}_{i}})}]+\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },x_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage General Log-Linear Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where multiple stress types are used in the analysis and where the stresses can be any function of time.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage General Log-Linear - Exponential==&lt;br /&gt;
Given &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; time-varying stresses &amp;lt;math&amp;gt;\underline{X}=({{X}_{1}}(t),{{X}_{2}}(t),\ldots ,{{X}_{n}}(t))\,\!&amp;lt;/math&amp;gt;, the life-stress relationship is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m\left( t,\overset{\_}{\mathop{x}}\, \right)}=s(t,\overset{\_}{\mathop{x}}\,)={{e}^{-{{\alpha }_{0}}-\underset{j=1}{\mathop{\overset{n}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(t)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
where &amp;lt;math&amp;gt;{{\alpha }_{0}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\alpha }_{j}}\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
This relationship can be further modified through the use of transformations and can be reduced to the relationships discussed previously (power, Arrhenius and exponential), if so desired.&lt;br /&gt;
The exponential reliability function of the unit under multiple stresses is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\overset{\_}{\mathop{x}}\,)={{e}^{-I(t,\overset{\_}{\mathop{x}}\,)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\overset{\_}{\mathop{x}}\,)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{du}{{{e}^{^{^{{{\alpha }_{0}}+\overset{n}{\mathop{\underset{j=1}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(t)}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\overset{\_}{\mathop{x}}\,)=s(t,\overset{\_}{\mathop{x}}\,){{e}^{-I(t,\overset{\_}{\mathop{x}}\,)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
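&lt;br /&gt;
With multiple time-varying stresses, the reciprocal life at each instant depends on all stress profiles simultaneously. A Python sketch with two illustrative profiles (a temperature ramp and a stepped voltage); the &amp;lt;math&amp;gt;{{\alpha }_{j}}\,\!&amp;lt;/math&amp;gt; values below are assumptions for the example, not fitted parameters:&lt;br /&gt;
&lt;br /&gt;
```python
import math

alpha = [10.0, -0.02, -0.5]  # alpha_0, alpha_1, alpha_2 (illustrative)

def stresses(u):
    """Two profiles: a slow temperature ramp and a stepped voltage."""
    temp = 300.0 + 0.05 * u
    volt = 2.0 if u < 500.0 else 3.0
    return (temp, volt)

def s(u):
    # reciprocal life: exp(-(alpha_0 + sum_j alpha_j * x_j(u)))
    x = stresses(u)
    return math.exp(-(alpha[0] + sum(a * xi for a, xi in zip(alpha[1:], x))))

def I(t, n=2000):
    """Trapezoidal approximation of the damage integral on [0, t]."""
    h = t / n
    return h * (0.5 * (s(0.0) + s(t)) + sum(s(k * h) for k in range(1, n)))

def R(t):
    return math.exp(-I(t))

def f(t):
    return s(t) * math.exp(-I(t))
```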
&lt;br /&gt;
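For an arbitrary stress profile, the integral I(t, x̄) rarely has a closed form and is typically evaluated numerically. The following Python sketch (all function and parameter names are illustrative, not ALTA's interface) approximates I with the trapezoidal rule and returns the exponential reliability:

```python
import math

def cd_gll_exp_reliability(t, stresses, alphas, alpha0, n_steps=2000):
    """Approximate I(t, x) = integral from 0 to t of
    exp(-(alpha0 + sum_j alpha_j * x_j(u))) du by the trapezoidal rule,
    then return (I, R) with R = exp(-I).
    `stresses` is a list of callables x_j(u); names are illustrative."""
    du = t / n_steps
    total = 0.0
    for k in range(n_steps + 1):
        u = k * du
        s = math.exp(-(alpha0 + sum(a * x(u) for a, x in zip(alphas, stresses))))
        w = 0.5 if k in (0, n_steps) else 1.0  # trapezoidal end weights
        total += w * s * du
    return total, math.exp(-total)
```

With a constant stress the integrand is constant, so I(t, x̄) reduces to s·t and the usual constant-stress exponential model is recovered.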
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage General Log-Linear - Weibull==&lt;br /&gt;
Given &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; time-varying stresses &amp;lt;math&amp;gt;\underline{X}=({{X}_{1}}(t),{{X}_{2}}(t)...{{X}_{n}}(t))\,\!&amp;lt;/math&amp;gt;, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta \left( t,\overset{\_}{\mathop{x}}\, \right)}=s(t,\overset{\_}{\mathop{x}}\,)={{e}^{^{^{-{{\alpha }_{0}}-\overset{n}{\mathop{\underset{j=1}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(t)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\alpha }_{0}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\alpha }_{j}}\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
&lt;br /&gt;
The Weibull reliability function of the unit under multiple stresses is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\overset{\_}{\mathop{x}}\,)={{e}^{-{{\left( I(t,\overset{\_}{\mathop{x}}\,) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\overset{\_}{\mathop{x}}\,)=\underset{0}{\mathop{\overset{t}{\mathop{\int{}^{}}}\,}}\,\frac{du}{{{e}^{^{{{\alpha }_{0}}+\underset{j=1}{\mathop{\overset{n}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(u)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\overset{\_}{\mathop{x}}\,)=\beta s(t,\overset{\_}{\mathop{x}}\,){{\left( I(t,\overset{\_}{\mathop{x}}\,) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,\overset{\_}{\mathop{x}}\,) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}){{\left( I({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}) \right)}^{\beta -1}}]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}) \right)}^{\beta }} -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },\bar{x}_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },\bar{x}_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },\bar{x}_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Ri}^{\prime \prime },\bar{x}_{i}^{\prime \prime }) \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage General Log-Linear - Lognormal==&lt;br /&gt;
Given &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; time-varying stresses &amp;lt;math&amp;gt;\underline{X}=({{X}_{1}}(t),{{X}_{2}}(t)...{{X}_{n}}(t))\,\!&amp;lt;/math&amp;gt;, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,\bar{x})}=s(t,\overset{\_}{\mathop{x}}\,)={{e}^{^{^{-{{\alpha }_{0}}-\overset{n}{\mathop{\underset{j=1}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(t)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\alpha }_{0}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\alpha }_{j}}\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
&lt;br /&gt;
The lognormal reliability function of the unit under multiple stresses is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\bar{x})=1-\Phi (z(t,\bar{x}))\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,\bar{x})=\frac{\ln I(t,\bar{x})}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\bar{x})=\underset{0}{\mathop{\overset{t}{\mathop{\int{}^{}}}\,}}\,\frac{du}{{{e}^{^{{{\alpha }_{0}}+\underset{j=1}{\mathop{\overset{n}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(u)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\bar{x})=\frac{s(t,\bar{x})\varphi (z(t,\bar{x}))}{\sigma _{T}^{\prime }I(t,\bar{x})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
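As a minimal numerical sketch (names are illustrative, not ALTA's interface), the lognormal cumulative-damage reliability for a single time-varying stress can be evaluated by approximating I(t, x) with the trapezoidal rule and expressing the standard normal cdf through the error function:

```python
import math

def lognormal_cd_reliability(t, x, alpha0, alpha1, sigma, n_steps=2000):
    """Lognormal cumulative-damage reliability for one time-varying stress:
    R(t, x) = 1 - Phi(z),  z = ln(I(t, x)) / sigma,
    I(t, x) = integral from 0 to t of exp(-(alpha0 + alpha1*x(u))) du
    (trapezoidal rule). Phi is the standard normal cdf via math.erf."""
    du = t / n_steps
    I = sum((0.5 if k in (0, n_steps) else 1.0)
            * math.exp(-(alpha0 + alpha1 * x(k * du))) * du
            for k in range(n_steps + 1))
    z = math.log(I) / sigma
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

When I(t, x) = 1 (i.e., z = 0), the reliability is 0.5, as expected at the median of a lognormal distribution.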
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{{\bar{x}}}_{i}})\varphi (z({{T}_{i}},{{{\bar{x}}}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{{\bar{x}}}_{i}})}] \overset{S}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },\bar{x}_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },\bar{x}_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },\bar{x}_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Confidence Intervals=&lt;br /&gt;
Using the same methodology as in previous sections, approximate confidence intervals can be derived and applied to all results of interest using the Fisher Matrix approach discussed in [[Appendix A: Brief Statistical Background|Appendix A]]. ALTA utilizes such intervals on all results.&lt;br /&gt;
&lt;br /&gt;
=Notes on Trigonometric Functions=&lt;br /&gt;
Trigonometric functions are sometimes used in accelerated life tests; however, ALTA does not include them directly. A trigonometric stress profile can be defined by its frequency and magnitude, and these two quantities can then be treated as two constant stresses. The GLL model discussed in [[General Log-Linear Relationship]] can then be applied for modeling.&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Time-Varying_Stress_Models&amp;diff=64901</id>
		<title>Time-Varying Stress Models</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Time-Varying_Stress_Models&amp;diff=64901"/>
		<updated>2017-02-01T23:37:19Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* Mathematical Formulation for a Step-Stress Model */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:ALTABOOK|10}}&lt;br /&gt;
Traditionally, accelerated tests with time-varying stress applications have been used to generate failures quickly. This is highly desirable given the pressure on industry today to shorten new product introduction time. The most basic type of time-varying stress test is a step-stress test. In step-stress accelerated testing, the test units are subjected to successively higher stress levels in predetermined stages, and thus follow a time-varying stress profile. The units usually start at a lower stress level and, at a predetermined time or failure number, the stress is increased and the test continues. The test is terminated when all units have failed, when a certain number of failures are observed or when a certain time has elapsed. Step-stress testing can substantially shorten a reliability test&#039;s duration. In addition to step-stress testing, there are many other types of time-varying stress profiles that can be used in accelerated life testing. However, it should be noted that there is more uncertainty in the results from such time-varying stress tests than from traditional constant stress tests of the same length and sample size.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When dealing with data from accelerated tests with time-varying stresses, the life-stress relationship must take into account the cumulative effect of the applied stresses. Such a model is commonly referred to as a &#039;&#039;cumulative damage&#039;&#039; or &#039;&#039;cumulative exposure&#039;&#039; model. Nelson [[Appendix_E:_References|[28]]] defines and presents the derivation and assumptions of such a model. ALTA includes the cumulative damage model for the analysis of time-varying stress data. This section presents an introduction to the model formulation and its application.&lt;br /&gt;
&lt;br /&gt;
=Model Formulation=&lt;br /&gt;
To formulate the cumulative exposure/damage model, consider a simple step-stress experiment where an electronic component was subjected to a voltage stress, starting at 2V (use stress level) and increased to 7V in stepwise increments, as shown in the next figure. The following steps, in hours, were used to apply stress to the products under test: 0 to 250, 2V; 250 to 350, 3V; 350 to 370, 4V; 370 to 380, 5V; 380 to 390, 6V; and 390 to 400, 7V.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA12.1.gif|center|550px|Step profile for a simple voltage stress test.]]&lt;br /&gt;
&lt;br /&gt;
In this example, 11 units were available for the test. All units were tested using this same stress profile. Units that failed were removed from the test and their total times on test were recorded. The following times-to-failure were observed in the test, in hours: 280, 310, 330, 352, 360, 366, 371, 374, 378, 381 and 385. The first failure in this test occurred at 280 hours when the stress was 3V. During the test, this unit experienced a period of time at 2V before failing at 3V. If the stress were 2V, one would expect the unit to fail at a time later than 280 hours, while if the unit were always at 3V, one would expect that failure time to be sooner than 280 hrs. The problem faced by the analyst in this case is to determine some equivalency between the stresses. In other words, what is the equivalent of 280 hours (with 250 hours spent at 2V and 30 hours spent at 3V) at a constant 2V stress or at a constant 3V stress?&lt;br /&gt;
&lt;br /&gt;
==Mathematical Formulation for a Step-Stress Model==&lt;br /&gt;
To mathematically formulate the model, consider the step-stress test shown in the next figure, with stresses S1, S2 and S3. Furthermore, assume that the underlying life distribution is the Weibull distribution, and also assume an inverse power law relationship between the Weibull scale parameter and the applied stress.&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA12.2.png|center|300px|Step-stress profile and the corresponding life distributions.]]&lt;br /&gt;
&lt;br /&gt;
From the inverse power law relationship, the scale parameter, &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt;, of the Weibull distribution can be expressed as an inverse power function of the stress, &amp;lt;math&amp;gt;V \,\!&amp;lt;/math&amp;gt; or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\eta(V)=\frac{1}{K{{V}^{n}}} \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
The fraction of the units failing by time &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; under a constant stress &amp;lt;math&amp;gt;V = {{S}_{1}}\,\!&amp;lt;/math&amp;gt;, is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
F(t;V)=1-R(t;V)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t;V)={{e}^{-{{\left[ \tfrac{t}{\eta (V)} \right]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;cdf&#039;&#039; for each constant stress level is: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; {{F}_{1}}(t;{{S}_{1}})= &amp;amp; 1-{{e}^{-{{(KS_{1}^{n}t)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{F}_{2}}(t;{{S}_{2}})= &amp;amp; 1-{{e}^{-{{(KS_{2}^{n}t)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; {{F}_{3}}(t;{{S}_{3}})= &amp;amp; 1-{{e}^{-{{(KS_{3}^{n}t)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
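Each of these constant-stress cdfs can be evaluated directly. A small Python sketch (names and parameter values are illustrative only):

```python
import math

def weibull_ipl_cdf(t, S, K, n, beta):
    """Fraction of units failing by time t at a constant stress S,
    combining the Weibull cdf with the inverse power law:
    F(t; S) = 1 - exp(-(K * S**n * t)**beta)."""
    return 1.0 - math.exp(-((K * S ** n * t) ** beta))
```

For example, with K·Sⁿ·t = 1 and β = 1 this gives F = 1 − e⁻¹ ≈ 0.632, the familiar fraction failing by the characteristic life.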
The above equations would suffice if the units did not experience different stresses during the test, as they did in this case. To analyze the data from this step-stress test, a cumulative exposure model is needed. Such a model will relate the life distribution, in this case the Weibull distribution, of the units at one stress level to the distribution at the next stress level. In formulating this model, it is assumed that the remaining life of the test units depends only on the cumulative exposure the units have seen and that the units do not remember how such exposure was accumulated. Moreover, since the units are held at a constant stress at each step, the surviving units will fail according to the distribution at the current step, but with a starting age corresponding to the total accumulated time up to the beginning of the current step. This model can be formulated as follows:&lt;br /&gt;
&lt;br /&gt;
*Units failing during the first step have not experienced any other stresses and will fail according to the &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; &#039;&#039;cdf&#039;&#039;. Units that made it to the second step will fail according to the &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; &#039;&#039;cdf&#039;&#039;, but will have accumulated some equivalent age, &amp;lt;math&amp;gt;{{\varepsilon }_{1}},\,\!&amp;lt;/math&amp;gt; at this stress level (given the fact that they have spent &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; hours at &amp;lt;math&amp;gt;{{S}_{1}})\,\!&amp;lt;/math&amp;gt; or: &lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{2}}(t;{{S}_{2}})=1-{{e}^{-{{[KS_{2}^{n}((t-{{t}_{1}})+{{\varepsilon }_{1}})]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
In other words, the probability that the units will fail at a time, &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, while at &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; and between &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{t}_{2}}\,\!&amp;lt;/math&amp;gt; is equivalent to the probability that the units would fail after accumulating &amp;lt;math&amp;gt;(t-{{t}_{1}})\,\!&amp;lt;/math&amp;gt; plus some equivalent time, &amp;lt;math&amp;gt;{{\varepsilon }_{1}},\,\!&amp;lt;/math&amp;gt; to account for the exposure the units have seen at &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
*The equivalent time, &amp;lt;math&amp;gt;{{\varepsilon }_{1}},\,\!&amp;lt;/math&amp;gt; will be the time by which the probability of failure at &amp;lt;math&amp;gt;{{S}_{2}}\,\!&amp;lt;/math&amp;gt; is equal to the probability of failure at &amp;lt;math&amp;gt;{{S}_{1}}\,\!&amp;lt;/math&amp;gt; after an exposure of &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; or: &lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
	  {{F}_{1}}({{t}_{1}};{{S}_{1}})=\ &amp;amp; {{F}_{2}}({{\varepsilon }_{1}},{{S}_{2}}) \\ &lt;br /&gt;
	 1-{{e}^{-{{(KS_{1}^{n}{{t}_{1}})}^{\beta }}}}=\ &amp;amp; 1-{{e}^{-{{(KS_{2}^{n}{{\varepsilon }_{1}})}^{\beta }}}} \\ &lt;br /&gt;
	 S_{1}^{n}{{t}_{1}}=\ &amp;amp; S_{2}^{n}{{\varepsilon }_{1}} \\ &lt;br /&gt;
	 {{\varepsilon }_{1}}=\ &amp;amp; {{t}_{1}}{{\left( \frac{{{S}_{1}}}{{{S}_{2}}} \right)}^{n}}  &lt;br /&gt;
	\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
*One would repeat this for step 3 taking into account the accumulated exposure during steps 1 and 2, or in more general terms and for the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; step: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{F}_{i}}(t;{{S}_{i}})=1-{{e}^{-{{[KS_{i}^{n}((t-{{t}_{i-1}})+{{\varepsilon }_{i-1}})]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
	&lt;br /&gt;
::&amp;lt;math&amp;gt;{{\varepsilon }_{i-1}}=({{t}_{i-1}}-{{t}_{i-2}}+{{\varepsilon }_{i-2}}){{\left( \frac{{{S}_{i-1}}}{{{S}_{i}}} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
*Once the &#039;&#039;cdf&#039;&#039; for each step has been obtained, the  &#039;&#039;pdf&#039;&#039;  can also then be determined utilizing: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;{{f}_{i}}(t,{{S}_{i}})=-\frac{d}{dt}\left[ {{F}_{i}}(t,{{S}_{i}}) \right]\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
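The equivalent-age recursion above can be sketched in a few lines of Python. The value of the exponent n is assumed here purely for illustration; in practice it comes from the MLE fit. Function and variable names are hypothetical:

```python
def equivalent_ages(step_end_times, stress_levels, n):
    """Equivalent accumulated age entering each step of a step-stress test,
    using eps_{i-1} = (t_{i-1} - t_{i-2} + eps_{i-2}) * (S_{i-1}/S_i)**n.
    step_end_times[i] is the end time of step i+1; n is assumed known."""
    eps = [0.0]  # no prior exposure entering the first step
    prev_end = 0.0
    for i in range(1, len(stress_levels)):
        end = step_end_times[i - 1]
        eps.append((end - prev_end + eps[-1])
                   * (stress_levels[i - 1] / stress_levels[i]) ** n)
        prev_end = end
    return eps

# Voltage example: 250 h at 2 V, then 100 h at 3 V, then 4 V;
# n = 3 is an assumed value used only to make the arithmetic concrete.
ages = equivalent_ages([250.0, 350.0], [2.0, 3.0, 4.0], 3.0)
```

With these assumed numbers, the 250 hours at 2 V are equivalent to about 74 hours at 3 V, illustrating how exposure at a lower stress is compressed when carried forward to a higher stress.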
Once the model has been formulated, model parameters (i.e., &amp;lt;math&amp;gt;K\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;) can be computed utilizing maximum likelihood estimation methods.&lt;br /&gt;
&lt;br /&gt;
The previous example can be extended to any time-varying stress. ALTA allows you to define any stress profile. For example, the stress can be a ramp stress, a monotonically increasing stress, sinusoidal, etc. The sections that follow present a generalized formulation of the cumulative damage model, where the stress can be any function of time.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;noprint&amp;quot;&amp;gt;&lt;br /&gt;
{{Example:CD-GLL_Weibull}}&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage Power Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where stress can be any function of time and the life-stress relationship is based on the power relationship. Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship,  the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{\left( \frac{a}{x(t)} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In ALTA, the above relationship is actually presented in a format consistent with the general log-linear (GLL) relationship:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}\ln \left( x(t) \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, instead of displaying &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; as the calculated parameters, the following reparameterization is used:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 {{\alpha }_{0}}=\ &amp;amp; \ln ({{a}^{n}}) \\ &lt;br /&gt;
 {{\alpha }_{1}}=\ &amp;amp; -n  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
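This reparameterization is a simple invertible map, sketched below in Python (function names are illustrative):

```python
import math

def power_to_gll(a, n):
    """(a, n) -> (alpha0, alpha1): alpha0 = ln(a**n) = n*ln(a), alpha1 = -n."""
    return n * math.log(a), -n

def gll_to_power(alpha0, alpha1):
    """Inverse map: n = -alpha1, a = exp(alpha0 / n)."""
    n = -alpha1
    return math.exp(alpha0 / n), n
```

Because the map is one-to-one, fitting in the GLL parameterization loses no information; the power-law parameters can always be recovered.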
==Cumulative Damage Power - Exponential==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship, the mean life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m(t,\,x)}=s(t,\,x)={{\left( \frac{x(t)}{a} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\,x(t))={{e}^{-I(t,\,x)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int{}_{}^{}}}\,}}\,{{\left( \frac{x(u)}{a} \right)}^{n}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\,x)=s(t,\,x){{e}^{-I(t,\,x)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest (e.g., mean life, failure rate, etc.) can be obtained utilizing the statistical properties definitions presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},\,{{x}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},\,{{x}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },\,x_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },\,x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },\,x_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },\,x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },\,x_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
 &lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
 &lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
 &lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Power - Weibull==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship, the characteristic life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta (t,x)}=s(t,x)={{\left( \frac{x(t)}{a} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,x(t))={{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,{{\left( \frac{x(u)}{a} \right)}^{n}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\beta s(t,x){{\left( I(t,x) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
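The exposure integral and the resulting reliability can be evaluated numerically for an arbitrary stress profile. The sketch below is illustrative only (the function names and the trapezoidal integration are our own, not ALTA's internals):

```python
import math

def cumulative_exposure(t, stress, a, n, steps=10000):
    """Numerically approximate I(t, x) = integral_0^t (x(u)/a)^n du
    for an arbitrary stress profile x(u), using the trapezoidal rule."""
    du = t / steps
    total = 0.0
    for i in range(steps):
        u0, u1 = i * du, (i + 1) * du
        f0 = (stress(u0) / a) ** n
        f1 = (stress(u1) / a) ** n
        total += 0.5 * (f0 + f1) * du
    return total

def weibull_cd_reliability(t, stress, a, n, beta):
    """R(t, x(t)) = exp(-I(t, x)^beta) for the cumulative damage
    power-Weibull model."""
    return math.exp(-cumulative_exposure(t, stress, a, n) ** beta)
```

At a constant stress level x, I(t, x) reduces to (x/a)^n · t and the model collapses to an ordinary Weibull distribution with characteristic life (a/x)^n.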
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{x}_{i}}){{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta -1}}]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta }} -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt;  is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cumulative Damage-Power-Weibull Example&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Using the simple step-stress data given [[Time-Varying Stress Models#Model Formulation|here]], one would define &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt;  as: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 x(t)=\ &amp;amp; 2,\text{    }0&amp;lt;t\le 250 \\ &lt;br /&gt;
 =\ &amp;amp; 3,\text{    }250&amp;lt;t\le 350 \\ &lt;br /&gt;
 =\ &amp;amp; 4,\text{    }350&amp;lt;t\le 370 \\ &lt;br /&gt;
 =\ &amp;amp; 5,\text{    }370&amp;lt;t\le 380 \\ &lt;br /&gt;
 =\ &amp;amp; 6,\text{    }380&amp;lt;t\le 390 \\ &lt;br /&gt;
 =\ &amp;amp; 7,\text{    }390&amp;lt;t\le +\infty   &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
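The piecewise-constant profile above translates directly into code. A minimal sketch (the function name `x` mirrors the notation in the equation):

```python
def x(t):
    """Step-stress profile from the example: stress level as a function
    of time, per the piecewise definition above."""
    if t <= 250:
        return 2
    elif t <= 350:
        return 3
    elif t <= 370:
        return 4
    elif t <= 380:
        return 5
    elif t <= 390:
        return 6
    return 7
```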
Assuming a power relation as the underlying life-stress relationship and the Weibull distribution as the underlying life distribution, one can then formulate the log-likelihood function for the above data set as,&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{F}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,\ln \left\{ \beta {{\left[ \frac{x(t)}{a} \right]}^{n}}{{\left[ \int_{0}^{{{t}_{i}}}{{\left[ \frac{\left[ x(u) \right]}{a} \right]}^{n}}du \right]}^{\beta -1}} \right\} -\overset{F}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,\left\{ {{\left[ \int_{0}^{{{t}_{i}}}{{\left[ \frac{\left[ x(u) \right]}{a} \right]}^{n}}du \right]}^{\beta }} \right\}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;F\,\!&amp;lt;/math&amp;gt; is the number of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; are the IPL parameters.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; is the stress profile function.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{t}_{i}}\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time to failure.&lt;br /&gt;
&lt;br /&gt;
The parameter estimates for &amp;lt;math&amp;gt;\hat{\beta }\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\hat{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat{n}\,\!&amp;lt;/math&amp;gt; can be obtained by simultaneously solving &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial \beta }=0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial a}=0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\tfrac{\partial \Lambda }{\partial n}=0\,\!&amp;lt;/math&amp;gt;. Using ALTA, the parameter estimates for this data set are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \widehat{\beta }=\ &amp;amp; 2.67829 \\ &lt;br /&gt;
  \widehat{a}=\ &amp;amp; 11.72208 \\ &lt;br /&gt;
  \widehat{n}=\ &amp;amp; 3.998466  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the parameters are obtained, one can now determine the reliability for these units at any time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; and stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; from:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R\left( t,x\left( t \right) \right)={{e}^{-{{\left[ \int_{0}^{t}{{\left[ \tfrac{x(u)}{a} \right]}^{n}}du \right]}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or at a fixed stress level &amp;lt;math&amp;gt;x(t)=2V\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;t=300\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R\left( t=300,x(t)=2 \right)={{e}^{-{{\left[ \int_{0}^{t}{{\left[ \tfrac{x(u)}{a} \right]}^{n}}du \right]}^{\beta }}}}=97.5\%\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
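The quoted figure can be checked by hand. At the constant stress level x(t) = 2, the exposure integral has the closed form I(t, x) = (2/a)^n · t, so the reliability at t = 300 follows directly from the ALTA estimates above (a quick sketch, not ALTA output):

```python
import math

# ALTA parameter estimates from the example above
beta, a, n = 2.67829, 11.72208, 3.998466

# At a constant stress x(t) = 2, I(t, x) = (2/a)^n * t
t, x = 300.0, 2.0
I = (x / a) ** n * t
R = math.exp(-I ** beta)
print(R)  # approximately 0.975
```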
The mean time to failure (MTTF) at any stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; can be determined by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MTTF\left( x\left( t \right) \right)=\int_{0}^{\infty }t\left[ \left\{ \beta {{\left[ \frac{x\left( t \right)}{a} \right]}^{n}}{{\left[ \int_{0}^{t}{{\left[ \frac{x\left( u \right)}{a} \right]}^{n}}du \right]}^{\beta -1}} \right\}{{e}^{-{{\left[ \int_{0}^{t}{{\left[ \tfrac{x(u)}{a} \right]}^{n}}du \right]}^{\beta }}}} \right]dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or at a fixed stress level &amp;lt;math&amp;gt;x\left( t \right)=2V\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;MTTF\left( x\left( t \right) \right)=1046.3\text{ hrs}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
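For a constant stress level the MTTF integral above does not need numerical quadrature: the model reduces to a Weibull distribution with characteristic life eta = (a/x)^n, so MTTF = eta · Gamma(1 + 1/beta). A sketch using the example's estimates (our own shortcut, not the general time-varying computation):

```python
import math

beta, a, n = 2.67829, 11.72208, 3.998466  # estimates from the example
x = 2.0                                   # constant stress level

eta = (a / x) ** n                        # characteristic life at this stress
mttf = eta * math.gamma(1 + 1 / beta)
print(mttf)  # roughly 1046 hrs, matching the value above
```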
Any other metric of interest (e.g., failure rate, conditional reliability etc.) can also be determined using the basic definitions given in [[Appendix A: Brief Statistical Background|Appendix A]] and calculated automatically with ALTA.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Power - Lognormal==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the power law relationship, the median life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,x)}=s(t,x)={{\left( \frac{x(t)}{a} \right)}^{n}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))=1-\Phi (z)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,x)=\frac{\ln I(t,x)}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,{{\left( \frac{x(u)}{a} \right)}^{n}}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\frac{s(t,x)\varphi (z(t,x))}{\sigma _{T}^{\prime }I(t,x)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
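The lognormal reliability R = 1 − Φ(z) can likewise be evaluated numerically; the standard normal CDF is available through the error function. An illustrative sketch (names and the trapezoidal integration are assumptions, not ALTA's implementation):

```python
import math

def std_normal_cdf(z):
    """Standard normal CDF expressed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lognormal_cd_reliability(t, stress, a, n, sigma, steps=10000):
    """R(t, x(t)) = 1 - Phi(ln I(t, x) / sigma) for the cumulative
    damage power-lognormal model (trapezoidal integration of I)."""
    du = t / steps
    I = sum(0.5 * ((stress(i * du) / a) ** n
                   + (stress((i + 1) * du) / a) ** n) * du
            for i in range(steps))
    return 1.0 - std_normal_cdf(math.log(I) / sigma)
```

As a sanity check, when I(t, x) = 1 the argument z is zero and the reliability is exactly 0.5, the median.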
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{x}_{i}})\varphi (z({{T}_{i}},{{x}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{x}_{i}})}] \overset{S}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },x_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage Arrhenius Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where stress can be any function of time and the life-stress relationship is based on the Arrhenius life-stress relationship. Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))=C{{e}^{\tfrac{b}{x(t)}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In ALTA, the above relationship is actually presented in a format consistent with the general log-linear (GLL) relationship for the Arrhenius relationship:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}\tfrac{1}{x(t)}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, instead of displaying &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; as the calculated parameters, the following reparameterization is used:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 {{\alpha }_{0}}=\ &amp;amp; \ln (C) \\ &lt;br /&gt;
 {{\alpha }_{1}}=\ &amp;amp; b  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
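The reparameterization is a simple log transform, invertible in one line each way. A minimal sketch (function names are illustrative):

```python
import math

def arrhenius_to_gll(C, b):
    """Map Arrhenius parameters (C, b) to the GLL form (alpha0, alpha1)."""
    return math.log(C), b

def gll_to_arrhenius(alpha0, alpha1):
    """Invert the reparameterization: C = exp(alpha0), b = alpha1."""
    return math.exp(alpha0), alpha1
```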
==Cumulative Damage Arrhenius - Exponential==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the mean life is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m(t,x)}=s(t,x)=\frac{{{e}^{\tfrac{-b}{x(t)}}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))={{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{\tfrac{-b}{x(u)}}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
f(t,x)=s(t,x){{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
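For a piecewise-constant stress profile (the usual step-stress case), the Arrhenius exposure integral is a finite sum: each constant-stress segment contributes its duration times exp(−b/x)/C. A sketch under that assumption (segment representation and names are our own):

```python
import math

def arrhenius_exposure(t, segments, b, C):
    """I(t, x) for a piecewise-constant stress profile. `segments` is a
    list of (start, end, stress) tuples covering [0, t]; each segment
    contributes its duration times exp(-b / stress) / C."""
    total = 0.0
    for start, end, stress in segments:
        lo, hi = max(start, 0.0), min(end, t)
        if hi > lo:
            total += (hi - lo) * math.exp(-b / stress) / C
    return total

def arrhenius_exp_reliability(t, segments, b, C):
    """R(t, x(t)) = exp(-I(t, x)) for the cumulative damage
    Arrhenius-exponential model."""
    return math.exp(-arrhenius_exposure(t, segments, b, C))
```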
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},{{x}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},{{x}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Arrhenius - Weibull==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the characteristic life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta (t,x)}=s(t,x)=\frac{{{e}^{\tfrac{-b}{x(t)}}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,x(t))={{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{\tfrac{-b}{x(u)}}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\beta s(t,x){{\left( I(t,x) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{x}_{i}}){{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta -1}}]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta }} -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{(I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime }))}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Arrhenius - Lognormal==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the Arrhenius relationship, the median life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,x)}=s(t,x)=\frac{{{e}^{\tfrac{-b}{x(t)}}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))=1-\Phi (z)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,x)=\frac{\ln I(t,x)}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{\tfrac{-b}{x(u)}}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\frac{s(t,x)\varphi (z(t,x))}{\sigma _{T}^{\prime }I(t,x)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows,&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{x}_{i}})\varphi (z({{T}_{i}},{{x}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{x}_{i}})}] \overset{S}{\mathop{+\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },x_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Cumulative Damage Exponential Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where stress can be any function of time and the life-stress relationship is based on the exponential relationship. Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential relationship, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
L(x(t))=C{{e}^{bx(t)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
   &lt;br /&gt;
In ALTA, the above relationship is actually presented in a format consistent with the general log-linear (GLL) relationship for the exponential relationship:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(x(t))={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}x(t)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, instead of displaying &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b\,\!&amp;lt;/math&amp;gt; as the calculated parameters, the following reparameterization is used:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 {{\alpha }_{0}}=\ &amp;amp; \ln (C) \\ &lt;br /&gt;
 {{\alpha }_{1}}=\ &amp;amp; b  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage Exponential - Exponential==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential life-stress relationship, the mean life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m(t,x)}=s(t,x)=\frac{{{e}^{-bx(t)}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))={{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{-bx(u)}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039;  is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
f(t,x)=s(t,x){{e}^{-I(t,x)}} &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},{{x}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},{{x}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
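The damage integral &amp;lt;math&amp;gt;I(t,x)\,\!&amp;lt;/math&amp;gt; above rarely has a closed form for an arbitrary stress profile, but it can be evaluated numerically. The following is a minimal Python sketch (not ALTA's implementation); the parameter values and the step-stress profile are hypothetical:

```python
import math

def damage_integral(t, stress, b, C, steps=1000):
    """Trapezoidal approximation of I(t, x) = integral_0^t e^(-b*x(u))/C du,
    where `stress` is any callable stress profile x(u)."""
    if not t > 0.0:
        return 0.0
    h = t / steps
    total = 0.5 * (math.exp(-b * stress(0.0)) + math.exp(-b * stress(t)))
    for k in range(1, steps):
        total += math.exp(-b * stress(k * h))
    return total * h / C

def reliability_exp(t, stress, b, C):
    """R(t, x(t)) = exp(-I(t, x)) for the cumulative damage exponential model."""
    return math.exp(-damage_integral(t, stress, b, C))

# Hypothetical step-stress profile: stress 2.0 up to 100 hours, 4.0 afterward
step_stress = lambda u: 4.0 if u >= 100.0 else 2.0
print(reliability_exp(150.0, step_stress, b=-0.5, C=500.0))
```

For a constant stress the integral reduces to &amp;lt;math&amp;gt;t{{e}^{-bx}}/C\,\!&amp;lt;/math&amp;gt;, which gives a convenient sanity check on the quadrature.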
==Cumulative Damage Exponential - Weibull==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential life-stress relationship, the characteristic life is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta (t,x)}=s(t,x)=\frac{{{e}^{-b\cdot x(t)}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,x(t))={{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{{{e}^{-bx(u)}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\beta s(t,x){{\left( I(t,x) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,x) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{x}_{i}}){{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta -1}}] -\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{x}_{i}}) \right)}^{\beta }}-\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },x_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime }) \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
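The Weibull form differs from the exponential case above only in raising the damage integral to the power &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt;. A minimal numerical sketch (illustrative only, not ALTA's implementation):

```python
import math

def damage_integral(t, stress, b, C, steps=1000):
    # Trapezoidal approximation of I(t, x) = integral_0^t e^(-b*x(u))/C du
    h = t / steps
    total = 0.5 * (math.exp(-b * stress(0.0)) + math.exp(-b * stress(t)))
    for k in range(1, steps):
        total += math.exp(-b * stress(k * h))
    return total * h / C

def reliability_weibull(t, stress, b, C, beta):
    """R(t, x(t)) = exp(-(I(t, x))^beta) for the cumulative damage Weibull model."""
    return math.exp(-damage_integral(t, stress, b, C) ** beta)
```

Setting &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt; recovers the cumulative damage exponential model, as expected.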
==Cumulative Damage Exponential - Lognormal==&lt;br /&gt;
Given a time-varying stress &amp;lt;math&amp;gt;x(t)\,\!&amp;lt;/math&amp;gt; and assuming the exponential life-stress relationship, the median life is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,x)}=s(t,x)=\frac{{{e}^{-bx(t)}}}{C}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reliability function of the unit under a single stress is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
R(t,x(t))=1-\Phi (z)&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,x)=\frac{\ln I(t,x)}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,x)=\underset{0}{\mathop{\overset{t}{\mathop{\int{}^{}}}\,}}\,\frac{{{e}^{-bx(u)}}}{C}du\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,x)=\frac{s(t,x)\varphi (z(t,x))}{\sigma _{T}^{\prime }I(t,x)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{x}_{i}})\varphi (z({{T}_{i}},{{x}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{x}_{i}})}] \overset{S}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },x_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },x_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
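For the lognormal case, the reliability follows from the standardized variable &amp;lt;math&amp;gt;z=\ln I(t,x)/\sigma _{T}^{\prime }\,\!&amp;lt;/math&amp;gt;; the standard normal CDF can be evaluated through the error function. A minimal sketch with hypothetical values (not ALTA's implementation):

```python
import math

def damage_integral(t, stress, b, C, steps=1000):
    # Trapezoidal approximation of I(t, x) = integral_0^t e^(-b*x(u))/C du
    h = t / steps
    total = 0.5 * (math.exp(-b * stress(0.0)) + math.exp(-b * stress(t)))
    for k in range(1, steps):
        total += math.exp(-b * stress(k * h))
    return total * h / C

def reliability_lognormal(t, stress, b, C, sigma):
    """R(t, x(t)) = 1 - Phi(ln I(t, x) / sigma_T') for the cumulative damage
    lognormal model; Phi is computed from math.erf."""
    z = math.log(damage_integral(t, stress, b, C)) / sigma
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

Note that when the accumulated damage equals 1, &amp;lt;math&amp;gt;z=0\,\!&amp;lt;/math&amp;gt; and the reliability is exactly 0.5, reflecting that &amp;lt;math&amp;gt;s(t,x)\,\!&amp;lt;/math&amp;gt; is defined through the median life.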
=Cumulative Damage General Log-Linear Relationship=&lt;br /&gt;
This section presents a generalized formulation of the cumulative damage model where multiple stress types are used in the analysis and where the stresses can be any function of time.&lt;br /&gt;
&lt;br /&gt;
==Cumulative Damage General Log-Linear - Exponential==&lt;br /&gt;
Given &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; time-varying stresses &amp;lt;math&amp;gt;\underline{X}=({{X}_{1}}(t),{{X}_{2}}(t)...{{X}_{n}}(t))\,\!&amp;lt;/math&amp;gt;, the life-stress relationship is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{m\left( t,\overset{\_}{\mathop{x}}\, \right)}=s(t,\overset{\_}{\mathop{x}}\,)={{e}^{-{{\alpha }_{0}}-\underset{j=1}{\mathop{\overset{n}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(t)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
    &lt;br /&gt;
where &amp;lt;math&amp;gt;{{\alpha }_{0}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\alpha }_{j}}\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
This relationship can be further modified through the use of transformations and can be reduced to the relationships discussed previously (power, Arrhenius and exponential), if so desired.&lt;br /&gt;
The exponential reliability function of the unit under multiple stresses is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\overset{\_}{\mathop{x}}\,)={{e}^{-I(t,\overset{\_}{\mathop{x}}\,)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\overset{\_}{\mathop{x}}\,)=\underset{0}{\mathop{\overset{t}{\mathop{\int_{}^{}}}\,}}\,\frac{du}{{{e}^{^{^{{{\alpha }_{0}}+\overset{n}{\mathop{\underset{j=1}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(t)}}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\overset{\_}{\mathop{x}}\,)=s(t,\overset{\_}{\mathop{x}}\,){{e}^{-I(t,\overset{\_}{\mathop{x}}\,)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [s({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}})]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\left( I({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}) \right) -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\left( I(T_{i}^{\prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime }) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Li}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })= &amp;amp; {{e}^{-I(T_{Ri}^{\prime \prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime \prime })}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
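With multiple time-varying stresses, the damage integral simply sums the weighted stresses inside the exponent. A minimal numerical sketch of the GLL-exponential model (illustrative only; the coefficient values are hypothetical):

```python
import math

def gll_damage_integral(t, stresses, a0, a, steps=1000):
    """Trapezoidal approximation of
    I(t, x) = integral_0^t exp(-(a0 + sum_j a_j*x_j(u))) du,
    where `stresses` is a list of callables x_j(u) and `a` the coefficients."""
    def s(u):  # instantaneous s(u, x), the reciprocal of the mean life
        return math.exp(-(a0 + sum(aj * xj(u) for aj, xj in zip(a, stresses))))
    h = t / steps
    total = 0.5 * (s(0.0) + s(t))
    for k in range(1, steps):
        total += s(k * h)
    return total * h

def gll_reliability_exp(t, stresses, a0, a):
    """R(t, x) = exp(-I(t, x)) for the GLL-exponential model."""
    return math.exp(-gll_damage_integral(t, stresses, a0, a))
```

For constant stresses the integral collapses to &amp;lt;math&amp;gt;t\cdot {{e}^{-({{\alpha }_{0}}+\Sigma {{\alpha }_{j}}{{x}_{j}})}}\,\!&amp;lt;/math&amp;gt;, which provides a closed-form check.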
==Cumulative Damage General Log-Linear - Weibull==&lt;br /&gt;
Given &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; time-varying stresses &amp;lt;math&amp;gt;\underline{X}=({{X}_{1}}(t),{{X}_{2}}(t)...{{X}_{n}}(t))\,\!&amp;lt;/math&amp;gt;, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\eta \left( t,\overset{\_}{\mathop{x}}\, \right)}=s(t,\overset{\_}{\mathop{x}}\,)={{e}^{^{^{-{{\alpha }_{0}}-\overset{n}{\mathop{\underset{j=1}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(t)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\alpha }_{0}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\alpha }_{j}}\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
&lt;br /&gt;
The Weibull reliability function of the unit under multiple stresses is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\overset{\_}{\mathop{x}}\,)={{e}^{-{{\left( I(t,\overset{\_}{\mathop{x}}\,) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\overset{\_}{\mathop{x}}\,)=\underset{0}{\mathop{\overset{t}{\mathop{\int{}^{}}}\,}}\,\frac{du}{{{e}^{^{{{\alpha }_{0}}+\underset{j=1}{\mathop{\overset{n}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(u)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\overset{\_}{\mathop{x}}\,)=\beta s(t,\overset{\_}{\mathop{x}}\,){{\left( I(t,\overset{\_}{\mathop{x}}\,) \right)}^{\beta -1}}{{e}^{-{{\left( I(t,\overset{\_}{\mathop{x}}\,) \right)}^{\beta }}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\beta s({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}){{\left( I({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}) \right)}^{\beta -1}}]-\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}{{\left( I({{T}_{i}},{{\overset{\_}{\mathop{x}}\,}_{i}}) \right)}^{\beta }} -\overset{S}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }{{\left( I(T_{i}^{\prime },\overset{\_}{\mathop{x}}\,_{i}^{\prime }) \right)}^{\beta }}+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }(T_{Li}^{\prime \prime },\bar{x}_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Li}^{\prime \prime },\bar{x}_{i}^{\prime \prime }) \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }(T_{Ri}^{\prime \prime },\bar{x}_{i}^{\prime \prime })= &amp;amp; {{e}^{-{{\left( I(T_{Ri}^{\prime \prime },\bar{x}_{i}^{\prime \prime }) \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
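The GLL-Weibull model combines the multi-stress damage integral with the Weibull exponent. A minimal sketch under the same hypothetical setup as before (not ALTA's implementation):

```python
import math

def gll_damage_integral(t, stresses, a0, a, steps=1000):
    # Trapezoidal approximation of
    # I(t, x) = integral_0^t exp(-(a0 + sum_j a_j*x_j(u))) du
    def s(u):
        return math.exp(-(a0 + sum(aj * xj(u) for aj, xj in zip(a, stresses))))
    h = t / steps
    total = 0.5 * (s(0.0) + s(t))
    for k in range(1, steps):
        total += s(k * h)
    return total * h

def gll_reliability_weibull(t, stresses, a0, a, beta):
    """R(t, x) = exp(-(I(t, x))^beta) for the GLL-Weibull model."""
    return math.exp(-gll_damage_integral(t, stresses, a0, a) ** beta)
```

As with the single-stress case, &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt; reduces this to the GLL-exponential model.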
==Cumulative Damage General Log-Linear - Lognormal==&lt;br /&gt;
Given &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; time-varying stresses &amp;lt;math&amp;gt;\underline{X}=({{X}_{1}}(t),{{X}_{2}}(t)...{{X}_{n}}(t))\,\!&amp;lt;/math&amp;gt;, the life-stress relationship is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\frac{1}{\breve{T}(t,\bar{x})}=s(t,\overset{\_}{\mathop{x}}\,)={{e}^{^{^{-{{\alpha }_{0}}-\overset{n}{\mathop{\underset{j=1}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(t)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{\alpha }_{0}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\alpha }_{j}}\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
&lt;br /&gt;
The lognormal reliability function of the unit under multiple stresses is given by:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;R(t,\bar{x})=1-\Phi (z(t,\bar{x}))\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;z(t,\bar{x})=\frac{\ln I(t,\bar{x})}{\sigma _{T}^{\prime }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;I(t,\bar{x})=\underset{0}{\mathop{\overset{t}{\mathop{\int{}^{}}}\,}}\,\frac{du}{{{e}^{^{{{\alpha }_{0}}+\underset{j=1}{\mathop{\overset{n}{\mathop{\mathop{\sum}_{}^{}}}\,}}\,{{\alpha }_{j}}{{x}_{j}}(u)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the &#039;&#039;pdf&#039;&#039; is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;f(t,\bar{x})=\frac{s(t,\bar{x})\varphi (z(t,\bar{x}))}{\sigma _{T}^{\prime }I(t,\bar{x})}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parameter estimation can be accomplished via maximum likelihood estimation methods, and confidence intervals can be approximated using the Fisher matrix approach. Once the parameters are determined, all other characteristics of interest can be obtained utilizing the statistical properties definitions (e.g., mean life, failure rate, etc.) presented in previous chapters. The log-likelihood equation is as follows:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \ln (L)= &amp;amp; \Lambda =\overset{Fe}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,{{N}_{i}}\ln [\frac{s({{T}_{i}},{{{\bar{x}}}_{i}})\varphi (z({{T}_{i}},{{{\bar{x}}}_{i}}))}{\sigma _{T}^{\prime }I({{T}_{i}},{{{\bar{x}}}_{i}})}] \overset{S}{\mathop{\underset{i=1}{\mathop{+\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime }\ln \left( 1-\Phi (z(T_{i}^{\prime },\bar{x}_{i}^{\prime })) \right)+\overset{FI}{\mathop{\underset{i=1}{\mathop{\underset{}{\overset{}{\mathop \sum }}\,}}\,}}\,N_{i}^{\prime \prime }\ln [\Phi (z_{Ri}^{\prime \prime })-\Phi (z_{Li}^{\prime \prime })]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; z_{Ri}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Ri}^{\prime \prime },\bar{x}_{i}^{\prime \prime })}{\sigma _{T}^{\prime }} \\ &lt;br /&gt;
 &amp;amp; z_{Li}^{\prime \prime }= &amp;amp; \frac{\ln I(T_{Li}^{\prime \prime },\bar{x}_{i}^{\prime \prime })}{\sigma _{T}^{\prime }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact time-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
=Confidence Intervals=&lt;br /&gt;
Using the same methodology as in previous sections, approximate confidence intervals can be derived for all results of interest using the Fisher matrix approach discussed in [[Appendix A: Brief Statistical Background|Appendix A]]. ALTA applies such intervals to all results.&lt;br /&gt;
&lt;br /&gt;
=Notes on Trigonometric Functions=&lt;br /&gt;
Trigonometric functions are sometimes used to describe stress profiles in accelerated life tests; however, ALTA does not include them directly. A trigonometric stress profile can instead be characterized by its frequency and magnitude, which can then be treated as two constant stresses. The GLL model discussed in [[General Log-Linear Relationship]] can then be applied for modeling.&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=General_Log-Linear_Relationship&amp;diff=64900</id>
		<title>General Log-Linear Relationship</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=General_Log-Linear_Relationship&amp;diff=64900"/>
		<updated>2017-02-01T23:26:31Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: /* GLL Likelihood Function */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Navigation box}}&lt;br /&gt;
&#039;&#039;This article also appears in the [[Multivariable_Relationships:_General_Log-Linear_and_Proportional_Hazards|Accelerated Life Testing Data Analysis Reference]] book.&#039;&#039; &amp;lt;/noinclude&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When a test involves multiple accelerating stresses or requires the inclusion of an engineering variable, a &amp;lt;noinclude&amp;gt; [[Multivariable_Relationships:_General_Log-Linear_and_Proportional_Hazards|general multivariable relationship]]&amp;lt;/noinclude&amp;gt; &amp;lt;includeonly&amp;gt;general multivariable relationship&amp;lt;/includeonly&amp;gt; is needed. Such a relationship is the general log-linear relationship, which describes a life characteristic as a function of a vector of &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; stresses, or &amp;lt;math&amp;gt;\underline{X}=({{X}_{1}},{{X}_{2}}...{{X}_{n}}).\,\!&amp;lt;/math&amp;gt; ALTA includes this relationship and allows up to eight stresses. Mathematically the relationship is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(\underline{X})={{e}^{{{\alpha }_{0}}+\underset{j=1}{\overset{n}{\mathop{\sum }}}\,{{\alpha }_{j}}{{X}_{j}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{\alpha }_{0}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{\alpha }_{j}}\,\!&amp;lt;/math&amp;gt; are model parameters.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; is a vector of &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; stresses.&lt;br /&gt;
&lt;br /&gt;
This relationship can be further modified through the use of transformations and can be reduced to the relationships discussed previously, if so desired. As an example, consider a single stress application of this relationship and an inverse transformation on &amp;lt;math&amp;gt;X,\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;V=1/X\,\!&amp;lt;/math&amp;gt; or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; L(V)= &amp;amp; {{e}^{{{\alpha }_{0}}+\tfrac{{{\alpha }_{1}}}{V}}} =\ &amp;amp; {{e}^{{{\alpha }_{0}}}}{{e}^{\tfrac{{{\alpha }_{1}}}{V}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can be easily seen that the generalized log-linear relationship with a single stress and an inverse transformation has been reduced to the [[Arrhenius Relationship|Arrhenius relationship]], where: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; C= &amp;amp; {{e}^{{{\alpha }_{0}}}} \\ &lt;br /&gt;
 &amp;amp; B= &amp;amp; {{\alpha }_{1}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(V)=C{{e}^{\tfrac{B}{V}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, when one chooses to apply a logarithmic transformation on &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;X=\ln (V)\,\!&amp;lt;/math&amp;gt;, the relationship would reduce to the [[Inverse Power Law Relationship|Inverse Power Law relationship]]. Furthermore, if more than one stress is present, one could choose to apply a different transformation to each stress to create combination relationships similar to the [[Temperature-Humidity Relationship|Temperature-Humidity]] and the [[Temperature-NonThermal Relationship|Temperature-Non Thermal]]. ALTA has three built-in transformation options, namely:&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|None||	&amp;lt;math&amp;gt;X=V\,\!&amp;lt;/math&amp;gt;||	Exponential LSR&lt;br /&gt;
|-&lt;br /&gt;
|Reciprocal||	 &amp;lt;math&amp;gt;X=1/V\,\!&amp;lt;/math&amp;gt;|| 	Arrhenius LSR&lt;br /&gt;
|-&lt;br /&gt;
|Logarithmic||	 &amp;lt;math&amp;gt;X=\ln (V)\,\!&amp;lt;/math&amp;gt;|| 	Power LSR&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The power of this formulation becomes evident once one realizes that 6,561 unique life-stress relationships are possible (when allowing a maximum of eight stresses). When combined with the life distributions available in ALTA, almost 20,000 models can be created.&lt;br /&gt;
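The count quoted above follows directly from having three transformation choices for each of eight stresses:

```python
# Three transformation options (none, reciprocal, logarithmic) for each
# of eight stresses give 3**8 distinct life-stress relationships.
n_relationships = 3 ** 8
print(n_relationships)  # 6561
```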
&lt;br /&gt;
==Using the GLL Model==&lt;br /&gt;
Like the previous relationships, the general log-linear relationship can be combined with any of the available life distributions by expressing a life characteristic from that distribution with the GLL relationship. A brief overview of the GLL-distribution models available in ALTA is presented next.&lt;br /&gt;
&lt;br /&gt;
===GLL Exponential===&lt;br /&gt;
The GLL-exponential model can be derived by setting &amp;lt;math&amp;gt;m=L(\underline{X})\,\!&amp;lt;/math&amp;gt; in the exponential &#039;&#039;pdf&#039;&#039;, yielding the following GLL-exponential &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;f(t,\underline{X})={{e}^{-\left( {{\alpha }_{0}}+\underset{j=1}{\overset{n}{\mathop{\sum }}}\,{{\alpha }_{j}}{{X}_{j}} \right)}}{{e}^{-\left( {{\alpha }_{0}}+\underset{j=1}{\overset{n}{\mathop{\sum }}}\,{{\alpha }_{j}}{{X}_{j}} \right)\cdot t}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The total number of unknowns to solve for in this model is &amp;lt;math&amp;gt;n+1\,\!&amp;lt;/math&amp;gt; (i.e., &amp;lt;math&amp;gt;{{\alpha }_{0}},{{\alpha }_{1}},...,{{\alpha }_{n}}).\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
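As a minimal numeric illustration (Python, with arbitrary alpha coefficients and stress values, not taken from ALTA), the GLL-exponential pdf can be evaluated directly from the expression above; the failure rate is the reciprocal of the GLL life:

```python
import math

# Illustrative values only; the coefficients and stresses are made up.
alphas = [1.0, 0.5, -0.2]   # alpha_0, alpha_1, alpha_2
x = [2.0, 3.0]              # two stress covariates X_1, X_2

def gll_exponential_pdf(t, alphas, x):
    """GLL-exponential pdf: failure rate lam = exp(-(a0 + sum(aj * Xj)))."""
    lam = math.exp(-(alphas[0] + sum(a * xi for a, xi in zip(alphas[1:], x))))
    return lam * math.exp(-lam * t)

print(gll_exponential_pdf(1.0, alphas, x))
```

At t = 0 the pdf equals the failure rate itself, which gives an easy sanity check.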
&lt;br /&gt;
===GLL Weibull===&lt;br /&gt;
The GLL-Weibull model can be derived by setting &amp;lt;math&amp;gt;\eta =L(\underline{X})\,\!&amp;lt;/math&amp;gt; in the Weibull &#039;&#039;pdf&#039;&#039;, yielding the following GLL-Weibull &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;f(t,\underline{X})=\beta \cdot {{t}^{\beta -1}}{{e}^{-\beta \left( {{\alpha }_{0}}+\underset{j=1}{\overset{n}{\mathop{\sum }}}\,{{\alpha }_{j}}{{X}_{j}} \right)}}{{e}^{-{{t}^{\beta }}{{e}^{-\beta \left( {{\alpha }_{0}}+\underset{j=1}{\overset{n}{\mathop{\sum }}}\,{{\alpha }_{j}}{{X}_{j}} \right)}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The total number of unknowns to solve for in this model is &amp;lt;math&amp;gt;n+2\,\!&amp;lt;/math&amp;gt; (i.e., &amp;lt;math&amp;gt;\beta ,{{\alpha }_{0}},{{\alpha }_{1}},...,{{\alpha }_{n}}).\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
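A short cross-check (Python, with illustrative numbers only): the GLL-Weibull pdf written out above must agree with the standard two-parameter Weibull pdf whose scale is the GLL life:

```python
import math

# Illustrative numbers; not taken from the text.
beta = 2.0
alphas = [1.0, 0.3]
x = [2.0]
t = 1.5

lin = alphas[0] + sum(a * xi for a, xi in zip(alphas[1:], x))  # a0 + sum aj*Xj
eta = math.exp(lin)                                            # Weibull scale

# GLL-Weibull pdf, written exactly as in the formula above.
f_gll = (beta * t ** (beta - 1) * math.exp(-beta * lin)
         * math.exp(-t ** beta * math.exp(-beta * lin)))

# Standard Weibull pdf with scale eta: the two must agree.
f_weibull = (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-(t / eta) ** beta)
print(f_gll, f_weibull)
```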
&lt;br /&gt;
===GLL Lognormal===&lt;br /&gt;
The GLL-lognormal model can be derived by setting &amp;lt;math&amp;gt;\breve{T}=L(\underline{X})\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
in the lognormal &#039;&#039;pdf&#039;&#039;, yielding the following GLL-lognormal &#039;&#039;pdf&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;f(t,\underline{X})=\frac{1}{t\text{ }{{\sigma }_{{{T}&#039;}}}\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{\left( \tfrac{{T}&#039;-{{\alpha }_{0}}-\underset{j=1}{\overset{n}{\mathop{\sum }}}\,{{\alpha }_{j}}{{X}_{j}}}{{{\sigma }_{{{T}&#039;}}}} \right)}^{2}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The total number of unknowns to solve for in this model is &amp;lt;math&amp;gt;n+2\,\!&amp;lt;/math&amp;gt; (i.e., &amp;lt;math&amp;gt;{{\sigma }_{{{T}&#039;}}},{{\alpha }_{0}},{{\alpha }_{1}},...,{{\alpha }_{n}}).\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
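The GLL-lognormal pdf can likewise be sketched numerically (Python; sigma and the alpha coefficients below are made up for illustration). The mean of the log-times is the linear combination of the covariates:

```python
import math

# Illustrative values; sigma and the alpha coefficients are made up.
sigma = 0.5
alphas = [2.0, -0.1]
x = [4.0]

def gll_lognormal_pdf(t, sigma, alphas, x):
    """GLL-lognormal pdf: the mean of ln(T) is a0 + sum(aj * Xj)."""
    mu = alphas[0] + sum(a * xi for a, xi in zip(alphas[1:], x))
    z = (math.log(t) - mu) / sigma
    return math.exp(-0.5 * z * z) / (t * sigma * math.sqrt(2.0 * math.pi))

print(gll_lognormal_pdf(5.0, sigma, alphas, x))
```

At the median time t = exp(mu) the squared term vanishes, which provides a simple sanity check.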
&lt;br /&gt;
===GLL Likelihood Function===&lt;br /&gt;
The maximum likelihood estimation method can be used to determine the parameters for the GLL relationship and the selected life distribution. For each distribution, the likelihood function can be derived, and the parameters of the model (the distribution parameters and the GLL parameters) can be obtained by maximizing the log-likelihood function. For example, the log-likelihood function for the Weibull distribution is given by: &lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \ln (L) = \Lambda = &amp;amp; \underset{i=1}{\overset{{{F}_{e}}}{\mathop \sum }}\,{{N}_{i}}\ln \left[ \beta \cdot T_{i}^{\beta -1}{{e}^{-T_{i}^{\beta }\cdot {{e}^{-\beta \left( {{\alpha }_{0}}+\mathop{\sum}_{j=1}^{n}{{\alpha }_{j}}{{x}_{i,j}} \right)}}}}{{e}^{-\beta \left( {{\alpha }_{0}}+\mathop{\sum}_{j=1}^{n}{{\alpha }_{j}}{{x}_{i,j}} \right)}} \right] \\&lt;br /&gt;
 &amp;amp; -\underset{i=1}{\overset{S}{\mathop \sum }}\,N_{i}^{\prime }{{\left( T_{i}^{\prime } \right)}^{\beta }}{{e}^{-\beta \left( {{\alpha }_{0}}+\mathop{\sum}_{j=1}^{n}{{\alpha }_{j}}{{x}_{i,j}} \right)}}+\underset{i=1}{\overset{FI}{\mathop \sum }}\,N_{i}^{\prime \prime }\ln [R_{Li}^{\prime \prime }-R_{Ri}^{\prime \prime }]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; R_{Li}^{\prime \prime }= &amp;amp; {{e}^{-{{\left( T_{Li}^{\prime \prime }{{e}^{-\left( {{\alpha }_{0}}+\underset{j=1}{\overset{n}{\mathop{\sum }}}\,{{\alpha }_{j}}{{x}_{i,j}} \right)}} \right)}^{\beta }}}} \\ &lt;br /&gt;
 &amp;amp; R_{Ri}^{\prime \prime }= &amp;amp; {{e}^{-{{\left( T_{Ri}^{\prime \prime }{{e}^{-\left( {{\alpha }_{0}}+\underset{j=1}{\overset{n}{\mathop{\sum }}}\,{{\alpha }_{j}}{{x}_{i,j}} \right)}} \right)}^{\beta }}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{F}_{e}}\,\!&amp;lt;/math&amp;gt; is the number of groups of exact times-to-failure data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; is the number of times-to-failure in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; time-to-failure data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the Weibull shape parameter (unknown).&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is the exact failure time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is the number of groups of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the number of suspensions in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of suspension data points.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{i}^{\prime }\,\!&amp;lt;/math&amp;gt; is the running time of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; suspension data group.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;FI\,\!&amp;lt;/math&amp;gt; is the number of interval data groups.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;N_{i}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the number of intervals in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; group of data intervals.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Li}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the beginning of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;T_{Ri}^{\prime \prime }\,\!&amp;lt;/math&amp;gt; is the ending of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; interval.&lt;br /&gt;
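Using the definitions above, the exact-failure and suspension terms of the GLL-Weibull log-likelihood can be sketched as follows (Python; the interval-data term is omitted for brevity, and all inputs are hypothetical):

```python
import math

def gll_weibull_loglik(beta, alphas, failures, suspensions):
    """Log-likelihood sketch for the GLL-Weibull model.

    failures:    list of (t, x) pairs -- exact failure time and stress vector
    suspensions: list of (t, x) pairs -- running time and stress vector
    The interval-data term of the full expression is omitted here.
    """
    ll = 0.0
    for t, x in failures:
        lin = alphas[0] + sum(a * xi for a, xi in zip(alphas[1:], x))
        z = math.exp(-beta * lin)
        # ln of the GLL-Weibull pdf at an exact failure time
        ll += math.log(beta * t ** (beta - 1) * z) - t ** beta * z
    for t, x in suspensions:
        lin = alphas[0] + sum(a * xi for a, xi in zip(alphas[1:], x))
        # ln R(t) for a suspended unit
        ll -= t ** beta * math.exp(-beta * lin)
    return ll

print(gll_weibull_loglik(1.5, [2.0, 0.4], [(100.0, [1.0])], [(150.0, [1.0])]))
```

In practice the maximization over beta and the alpha coefficients would be done numerically, as ALTA does internally.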
&lt;br /&gt;
==GLL Example==&lt;br /&gt;
{{:General_Log-Linear_Relationship_Example}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;noinclude&amp;gt;=Indicator Variables=&lt;br /&gt;
Another advantage of the multivariable relationships included in ALTA is that they allow for the simultaneous analysis of continuous and categorical variables. Categorical variables take on discrete values, such as the lot designation for products from different manufacturing lots. A categorical variable like lot can be expressed in terms of indicator variables, which take only the values 1 or 0. For example, consider a sample of test units in which some units were obtained from Lot 1, others from Lot 2, and the rest from Lot 3. These three lots can be represented with indicator variables, as follows:&lt;br /&gt;
&lt;br /&gt;
*Define two indicator variables, &amp;lt;math&amp;gt;{{X}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*For the units from Lot 1, &amp;lt;math&amp;gt;{{X}_{1}}=1,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}=0.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*For the units from Lot 2, &amp;lt;math&amp;gt;{{X}_{1}}=0,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}=1.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*For the units from Lot 3, &amp;lt;math&amp;gt;{{X}_{1}}=0,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}=0.\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Assume that an accelerated test was performed with these units, and temperature was the accelerated stress. In this case, the GLL relationship can be used to analyze the data. From this relationship we get:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;L(\underline{X})={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}{{X}_{1}}+{{\alpha }_{2}}{{X}_{2}}+{{\alpha }_{3}}{{X}_{3}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{X}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{X}_{2}}\,\!&amp;lt;/math&amp;gt; are the indicator variables, as defined above.&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;{{X}_{3}}=\tfrac{1}{T},\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is the temperature.&lt;br /&gt;
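The encoding above can be sketched as follows (Python, with hypothetical coefficient values chosen only for illustration):

```python
import math

# Hypothetical coefficients for illustration.
a0, a1, a2, a3 = 5.0, 0.3, -0.2, 4000.0

def lot_indicators(lot):
    """Encode the three lots with two indicator variables (X1, X2)."""
    return {1: (1, 0), 2: (0, 1), 3: (0, 0)}[lot]

def life(lot, temperature):
    """L(X) = exp(a0 + a1*X1 + a2*X2 + a3*X3), with X3 = 1/T."""
    x1, x2 = lot_indicators(lot)
    return math.exp(a0 + a1 * x1 + a2 * x2 + a3 / temperature)

# Lot 3 is the baseline (both indicators zero); a1 and a2 measure the
# lot-to-lot shift in log-life relative to it.
print(life(1, 350.0), life(2, 350.0), life(3, 350.0))
```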
&lt;br /&gt;
The data can now be entered in ALTA and, with the assumption of an underlying life distribution and using MLE, the parameters of this model can be obtained.&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=General_Log-Linear_Relationship_Example&amp;diff=64899</id>
		<title>General Log-Linear Relationship Example</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=General_Log-Linear_Relationship_Example&amp;diff=64899"/>
		<updated>2017-02-01T22:59:42Z</updated>

		<summary type="html">&lt;p&gt;Sharon Honecker: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Banner_ALTA_Examples}}&lt;br /&gt;
&#039;&#039;This example appears in the [[Multivariable_Relationships:_General_Log-Linear_and_Proportional_Hazards|Accelerated Life Testing Data Analysis Reference]] book.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
Consider the data summarized in the following tables. These data illustrate a typical accelerated test with three stress types.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;Stress Profile Summary&amp;lt;/center&amp;gt; &lt;br /&gt;
[[Image:ALTA11t1.png|center|400px|]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;Failure Data&amp;lt;/center&amp;gt; &lt;br /&gt;
[[Image:ALTA11t2.png|center|550px|]]&lt;br /&gt;
&lt;br /&gt;
The data in the second table are analyzed assuming a Weibull distribution, an Arrhenius life-stress relationship for temperature and an inverse power life-stress relationship for voltage. No transformation is performed on the operation type. The operation type variable is treated as an indicator variable that takes a discrete value of 0 for an on/off operation and 1 for a continuous operation. The following figure shows the stress types and their transformations in ALTA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:StressTransformation.gif|center|450px|]]&lt;br /&gt;
&lt;br /&gt;
The GLL relationship then becomes:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\eta ={{e}^{{{\alpha }_{0}}+{{\alpha }_{1}}\tfrac{1}{{{V}_{1}}}+{{\alpha }_{2}}\ln ({{V}_{2}})+{{\alpha }_{3}}{{V}_{3}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The resulting relationship after performing these transformations is:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \eta = &amp;amp; {{e}^{{{\alpha }_{0}}}}{{e}^{{{\alpha }_{1}}\tfrac{1}{{{V}_{1}}}}}{{e}^{{{\alpha }_{2}}\ln ({{V}_{2}})}}{{e}^{{{\alpha }_{3}}{{V}_{3}}}} =\ &amp;amp; {{e}^{{{\alpha }_{0}}}}{{e}^{{{\alpha }_{1}}\tfrac{1}{{{V}_{1}}}}}V_{2}^{{{\alpha }_{2}}}{{e}^{{{\alpha }_{3}}{{V}_{3}}}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, the parameter &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; of the Arrhenius relationship is equal to the log-linear coefficient &amp;lt;math&amp;gt;{{\alpha }_{1}}\,\!&amp;lt;/math&amp;gt;, and the parameter &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; of the inverse power relationship is equal to (&amp;lt;math&amp;gt;-{{\alpha}_{2}}\,\!&amp;lt;/math&amp;gt;). Thus, &amp;lt;math&amp;gt;\eta \,\!&amp;lt;/math&amp;gt; can also be written as:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\eta ={{e}^{{{\alpha }_{0}}}}{{e}^{\tfrac{B}{{{V}_{1}}}}}V_{2}^{-n}{{e}^{{{\alpha }_{3}}{{V}_{3}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The activation energy of the Arrhenius relationship can be calculated by multiplying B by Boltzmann&#039;s constant.&lt;br /&gt;
&lt;br /&gt;
The best fit values for the parameters in this case are:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  \beta =  &amp;amp; 3.7483;\text{ }{{\alpha }_{0}} = -6.0220;\text{ }{{\alpha }_{1}} = 5776.9341; \\ &lt;br /&gt;
  {{\alpha }_{2}} = &amp;amp; -1.4340;\text{ }{{\alpha }_{3}} = 0.6242.  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
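Using the fitted parameters above, a short sketch (Python; the Boltzmann constant is taken as 8.617e-5 eV/K) evaluates eta at the use conditions stated later in the example, 328 K and 10 V, for both operation types:

```python
import math

# Parameter estimates quoted above (GLL-Weibull fit).
beta = 3.7483
a0, a1, a2, a3 = -6.0220, 5776.9341, -1.4340, 0.6242

def eta(temp_K, volts, continuous):
    """eta = exp(a0 + a1/T + a2*ln(V) + a3*X3), with X3 = 1 for continuous."""
    x3 = 1 if continuous else 0
    return math.exp(a0 + a1 / temp_K + a2 * math.log(volts) + a3 * x3)

# Use conditions from the example: 328 K and 10 V.
eta_onoff = eta(328.0, 10.0, continuous=False)
eta_cont = eta(328.0, 10.0, continuous=True)
print(eta_cont / eta_onoff)  # exp(a3): continuous operation extends life

# Activation energy: E_A = B * k, with B = a1 and k in eV/K.
E_A = a1 * 8.617e-5  # roughly 0.50 eV
print(E_A)
```

Because a3 is positive, the characteristic life under continuous operation exceeds that under on/off cycling by a factor of exp(a3), consistent with the conclusion drawn from the operation-type plot below.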
&lt;br /&gt;
Once the parameters are estimated, further analysis on the data can be performed. First, using ALTA, a Weibull probability plot of the data can be obtained, as shown next.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA11.1.gif|center|600px|Weibull probability plot for all covariates.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Several types of information about the model as well as the data can be obtained from a probability plot. For example, the choice of an underlying distribution and the assumption of a common slope (shape parameter) can be examined. In this example, the linearity of the data supports the use of the Weibull distribution. In addition, the data appear parallel on this plot, therefore reinforcing the assumption of a common beta. Further statistical analysis can and should be performed for these purposes as well.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Life vs. Stress plot is a very common plot for the analysis of accelerated data. Life vs. Stress plots can be very useful in assessing the effect of each stress on a product&#039;s failure. In this case, since the life is a function of three stresses, three different plots can be created. Such plots are created by holding two of the stresses constant at the desired use level, and varying the remaining one. The use stress levels for this example are 328K for temperature and 10V for voltage. For the operation type, a decision has to be made by the engineers as to whether they implement an on/off or continuous operation. The next two figures display the effects of temperature and voltage on the life of the product.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA11.2.gif|center|600px|Effects of temperature on life.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA11.3.gif|center|600px|Effects of voltage on life.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The effects of the two different operation types on life can be observed in the next figure. It can be seen that the on/off cycling has a greater effect on the life of the product in terms of accelerating failure than the continuous operation. In other words, a higher reliability can be achieved by running the product continuously.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ALTA11.4.gif|center|600px|Effect of operation type on life.]]&lt;/div&gt;</summary>
		<author><name>Sharon Honecker</name></author>
	</entry>
</feed>