<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.reliawiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sam+Eisenberg</id>
	<title>ReliaWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.reliawiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sam+Eisenberg"/>
	<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php/Special:Contributions/Sam_Eisenberg"/>
	<updated>2026-04-04T15:21:57Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.0</generator>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Crow-AMSAA_(NHPP)&amp;diff=65542</id>
		<title>Crow-AMSAA (NHPP)</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Crow-AMSAA_(NHPP)&amp;diff=65542"/>
		<updated>2020-04-13T16:14:38Z</updated>

		<summary type="html">&lt;p&gt;Sam Eisenberg: Updated failure terminated denominator per R-DE4648.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{template:RGA BOOK|3.2|Crow-AMSAA}}&lt;br /&gt;
Dr. Larry H. Crow [[RGA_References|[17]]] noted that the [[Duane Model]] could be stochastically represented as a Weibull process, allowing for statistical procedures to be used in the application of this model in reliability growth. This statistical extension became what is known as the Crow-AMSAA (NHPP) model. This method was first developed at the U.S. Army Materiel Systems Analysis Activity (AMSAA). It is frequently used on systems when usage is measured on a continuous scale. It can also be applied for the analysis of one shot items when there is high reliability and a large number of trials.&lt;br /&gt;
&lt;br /&gt;
Test programs are generally conducted on a phase by phase basis. The Crow-AMSAA model is designed for tracking the reliability within a test phase and not across test phases. A development testing program may consist of several separate test phases. If corrective actions are introduced during a particular test phase, then this type of testing and the associated data are appropriate for analysis by the Crow-AMSAA model. The model analyzes the reliability growth progress within each test phase and can aid in determining the following:&lt;br /&gt;
&lt;br /&gt;
*Reliability of the configuration currently on test&lt;br /&gt;
*Reliability of the configuration on test at the end of the test phase&lt;br /&gt;
*Expected reliability if the test time for the phase is extended&lt;br /&gt;
*Growth rate&lt;br /&gt;
*Confidence intervals&lt;br /&gt;
*Applicable goodness-of-fit tests&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
The reliability growth pattern for the Crow-AMSAA model is exactly the same pattern as for the [[Duane Model|Duane postulate]], that is, the cumulative number of failures is linear when plotted on ln-ln scale. Unlike the Duane postulate, the Crow-AMSAA model is statistically based. Under the Duane postulate, the failure rate is linear on ln-ln scale. However, for the Crow-AMSAA model statistical structure, the failure intensity of the underlying non-homogeneous Poisson process (NHPP) is linear when plotted on ln-ln scale.&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;N(t)\,\!&amp;lt;/math&amp;gt; be the cumulative number of failures observed in cumulative test time &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;, and let &amp;lt;math&amp;gt;\rho (t)\,\!&amp;lt;/math&amp;gt; be the failure intensity for the Crow-AMSAA model. Under the NHPP model, &amp;lt;math&amp;gt;\rho (t)\Delta t\,\!&amp;lt;/math&amp;gt; is approximately the probability of a failure occurring over the interval &amp;lt;math&amp;gt;[t,t+\Delta t]\,\!&amp;lt;/math&amp;gt; for small &amp;lt;math&amp;gt;\Delta t\,\!&amp;lt;/math&amp;gt;. In addition, the expected number of failures experienced over the test interval &amp;lt;math&amp;gt;[0,T]\,\!&amp;lt;/math&amp;gt; under the Crow-AMSAA model is given by:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;E[N(T)]=\int_{0}^{T}\rho (t)dt\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Crow-AMSAA model assumes that &amp;lt;math&amp;gt;\rho (T)\,\!&amp;lt;/math&amp;gt; may be approximated by the Weibull failure rate function: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\rho (T)=\frac{\beta }{{{\eta }^{\beta }}}{{T}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, if &amp;lt;math&amp;gt;\lambda =\tfrac{1}{{{\eta }^{\beta }}},\,\!&amp;lt;/math&amp;gt; the intensity function, &amp;lt;math&amp;gt;\rho (T),\,\!&amp;lt;/math&amp;gt; or the instantaneous failure intensity, &amp;lt;math&amp;gt;{{\lambda }_{i}}(T)\,\!&amp;lt;/math&amp;gt;, is defined as: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{\lambda }_{i}}(T)=\lambda \beta {{T}^{\beta -1}},\text{with }T&amp;gt;0,\text{ }\lambda &amp;gt;0\text{ and }\beta &amp;gt;0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
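As a minimal numerical sketch (the parameter values below are arbitrary and chosen only for illustration), the intensity function can be evaluated directly:&lt;br /&gt;

```python
def failure_intensity(lam, beta, T):
    """Crow-AMSAA instantaneous failure intensity: lambda_i(T) = lam * beta * T**(beta - 1)."""
    if T <= 0 or lam <= 0 or beta <= 0:
        raise ValueError("T, lam and beta must all be positive")
    return lam * beta * T ** (beta - 1)

# beta < 1: the intensity decreases with T, indicating reliability growth
print(failure_intensity(0.5, 0.6, 100.0))

# beta = 1: the intensity is constant and equal to lam (the exponential, no-growth case)
print(failure_intensity(0.5, 1.0, 100.0))  # 0.5
```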
In the special case of exponential failure times, there is no growth and the failure intensity, &amp;lt;math&amp;gt;\rho (t)\,\!&amp;lt;/math&amp;gt;, is equal to &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt;. In this case, the expected number of failures is given by:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   E[N(T)]=  &amp;amp; \int_{0}^{T}\rho (t)dt \\ &lt;br /&gt;
  =  &amp;amp; \lambda T  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the plot to be linear when plotted on ln-ln scale in the general reliability growth case, the expected number of failures must be equal to:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   E[N(T)]=  &amp;amp; \int_{0}^{T}\rho (t)dt \\ &lt;br /&gt;
  =  &amp;amp; \lambda {{T}^{\beta }}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To put a statistical structure on the reliability growth process, consider again the special case of no growth. In this case, the number of failures, &amp;lt;math&amp;gt;N(T),\,\!&amp;lt;/math&amp;gt; experienced during the testing over &amp;lt;math&amp;gt;[0,T]\,\!&amp;lt;/math&amp;gt; is random. The number of failures, &amp;lt;math&amp;gt;N(T),\,\!&amp;lt;/math&amp;gt; is said to follow a homogeneous (constant intensity) Poisson process with mean &amp;lt;math&amp;gt;\lambda T\,\!&amp;lt;/math&amp;gt;, whose distribution is given by:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\underset{}{\overset{}{\mathop{\Pr }}}\,[N(T)=n]=\frac{{{(\lambda T)}^{n}}{{e}^{-\lambda T}}}{n!};\text{ }n=0,1,2,\ldots \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Crow-AMSAA model generalizes this no growth case to allow for reliability growth due to corrective actions. This generalization keeps the Poisson distribution for the number of failures but allows for the expected number of failures, &amp;lt;math&amp;gt;E[N(T)],\,\!&amp;lt;/math&amp;gt; to be linear when plotted on ln-ln scale. The Crow-AMSAA model lets &amp;lt;math&amp;gt;E[N(T)]=\lambda {{T}^{\beta }}\,\!&amp;lt;/math&amp;gt;. The probability that the number of failures, &amp;lt;math&amp;gt;N(T),\,\!&amp;lt;/math&amp;gt; will be equal to &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; under growth is then given by the Poisson distribution:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\underset{}{\overset{}{\mathop{\Pr }}}\,[N(T)=n]=\frac{{{(\lambda {{T}^{\beta }})}^{n}}{{e}^{-\lambda {{T}^{\beta }}}}}{n!};\text{ }n=0,1,2,\ldots \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
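The Poisson probability above is straightforward to compute. The sketch below (with arbitrary illustrative parameters) checks that the probabilities over &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; sum to 1:&lt;br /&gt;

```python
from math import exp, factorial

def prob_n_failures(lam, beta, T, n):
    """Pr[N(T) = n] for the Crow-AMSAA NHPP, where E[N(T)] = lam * T**beta."""
    mean = lam * T ** beta
    return mean ** n * exp(-mean) / factorial(n)

# Expected number of failures by T = 500 with illustrative parameters
print(0.02 * 500.0 ** 0.8)

# The probabilities over all n sum to 1 (truncated at n = 100 here)
print(sum(prob_n_failures(0.02, 0.8, 500.0, n) for n in range(101)))
```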
This is the general growth situation, and the number of failures, &amp;lt;math&amp;gt;N(T)\,\!&amp;lt;/math&amp;gt;, follows a non-homogeneous Poisson process. The exponential, &amp;quot;no growth&amp;quot; homogeneous Poisson process is a special case of the non-homogeneous Crow-AMSAA model. This is reflected in the Crow-AMSAA model parameter where &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
The cumulative failure rate, &amp;lt;math&amp;gt;{{\lambda }_{c}}\,\!&amp;lt;/math&amp;gt;, is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{\lambda }_{c}}=\lambda {{T}^{\beta -1}}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The cumulative &amp;lt;math&amp;gt;MTB{{F}_{c}}\,\!&amp;lt;/math&amp;gt; is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;MTB{{F}_{c}}=\frac{1}{\lambda }{{T}^{1-\beta }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
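The cumulative failure rate and cumulative MTBF are reciprocals of one another, which a short sketch makes explicit (parameter values are again arbitrary):&lt;br /&gt;

```python
def cumulative_failure_rate(lam, beta, T):
    """Crow-AMSAA cumulative failure rate: lambda_c(T) = lam * T**(beta - 1)."""
    return lam * T ** (beta - 1)

def cumulative_mtbf(lam, beta, T):
    """Crow-AMSAA cumulative MTBF: MTBF_c(T) = (1 / lam) * T**(1 - beta)."""
    return (1.0 / lam) * T ** (1.0 - beta)

lam, beta, T = 0.4, 0.7, 250.0
# The two quantities are reciprocals, so their product is 1
print(cumulative_failure_rate(lam, beta, T) * cumulative_mtbf(lam, beta, T))
```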
As mentioned above, the local pattern for reliability growth within a test phase is the same as the growth pattern observed by [[Duane Model|Duane]]. The Duane &amp;lt;math&amp;gt;MTB{{F}_{c}}\,\!&amp;lt;/math&amp;gt; is equal to: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;MTB{{F}_{{{c}_{DUANE}}}}=b{{T}^{\alpha }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the Duane cumulative failure rate, &amp;lt;math&amp;gt;{{\lambda }_{c}}\,\!&amp;lt;/math&amp;gt;, is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{\lambda }_{{{c}_{DUANE}}}}=\frac{1}{b}{{T}^{-\alpha }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus a relationship between Crow-AMSAA parameters and Duane parameters can be developed, such that: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {{b}_{DUANE}}=  &amp;amp; \frac{1}{{{\lambda }_{AMSAA}}} \\ &lt;br /&gt;
  {{\alpha }_{DUANE}}=  &amp;amp; 1-{{\beta }_{AMSAA}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that these relationships are not absolute. They change according to how the parameters (slopes, intercepts, etc.) are defined when the analysis of the data is performed. For the exponential case, &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;{{\lambda }_{i}}(T)=\lambda \,\!&amp;lt;/math&amp;gt;, a constant. For &amp;lt;math&amp;gt;\beta &amp;gt;1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\lambda }_{i}}(T)\,\!&amp;lt;/math&amp;gt; is increasing. This indicates a deterioration in system reliability. For &amp;lt;math&amp;gt;\beta &amp;lt;1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{{\lambda }_{i}}(T)\,\!&amp;lt;/math&amp;gt; is decreasing. This is indicative of reliability growth. Note that the model assumes a Poisson process with the Weibull intensity function, not the Weibull distribution. Therefore, statistical procedures for the Weibull distribution do not apply for this model. The parameter &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; is called a scale parameter because it depends upon the unit of measurement chosen for &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt;, while &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the shape parameter that characterizes the shape of the graph of the intensity function.&lt;br /&gt;
&lt;br /&gt;
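Under the definitions used in this chapter, the conversion between the two parameterizations is a pair of one-line formulas. A sketch (values are illustrative, and, as noted above, the exact mapping depends on how the slopes and intercepts are defined):&lt;br /&gt;

```python
def duane_from_amsaa(lam_amsaa, beta_amsaa):
    """Convert Crow-AMSAA (lambda, beta) to Duane (b, alpha) under the relationships above."""
    b_duane = 1.0 / lam_amsaa
    alpha_duane = 1.0 - beta_amsaa
    return b_duane, alpha_duane

# beta < 1 (reliability growth) maps to a positive Duane growth rate alpha
print(duane_from_amsaa(0.5, 0.5))  # (2.0, 0.5)
```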
The total number of failures, &amp;lt;math&amp;gt;N(T)\,\!&amp;lt;/math&amp;gt;, is a random variable with Poisson distribution. Therefore, the probability that exactly &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; failures occur by time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;P[N(T)=n]=\frac{{{[\theta (T)]}^{n}}{{e}^{-\theta (T)}}}{n!}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The number of failures occurring in the interval from &amp;lt;math&amp;gt;{{T}_{1}}\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;{{T}_{2}}\,\!&amp;lt;/math&amp;gt; is a random variable having a Poisson distribution with mean: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\theta ({{T}_{2}})-\theta ({{T}_{1}})=\lambda (T_{2}^{\beta }-T_{1}^{\beta })\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The number of failures in any interval is statistically independent of the number of failures in any interval that does not overlap the first interval. At time &amp;lt;math&amp;gt;{{T}_{0}}\,\!&amp;lt;/math&amp;gt;, the failure intensity is &amp;lt;math&amp;gt;{{\lambda }_{i}}({{T}_{0}})=\lambda \beta T_{0}^{\beta -1}\,\!&amp;lt;/math&amp;gt;. If improvements are not made to the system after time &amp;lt;math&amp;gt;{{T}_{0}}\,\!&amp;lt;/math&amp;gt;, it is assumed that failures would continue to occur at the constant rate &amp;lt;math&amp;gt;{{\lambda }_{i}}({{T}_{0}})=\lambda \beta T_{0}^{\beta -1}\,\!&amp;lt;/math&amp;gt;. Future failures would then follow an exponential distribution with mean &amp;lt;math&amp;gt;m({{T}_{0}})=\tfrac{1}{\lambda \beta T_{0}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;. The instantaneous MTBF of the system at time &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;m(T)=\frac{1}{\lambda \beta {{T}^{\beta -1}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;m(T)\,\!&amp;lt;/math&amp;gt; is also called the demonstrated (or achieved) MTBF.&lt;br /&gt;
&lt;br /&gt;
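The demonstrated MTBF is simply the reciprocal of the instantaneous failure intensity. A sketch with illustrative values:&lt;br /&gt;

```python
def instantaneous_mtbf(lam, beta, T):
    """Demonstrated (achieved) MTBF: m(T) = 1 / (lam * beta * T**(beta - 1))."""
    return 1.0 / (lam * beta * T ** (beta - 1))

# With beta < 1, the demonstrated MTBF grows as test time accumulates
print(instantaneous_mtbf(0.4, 0.7, 100.0))
print(instantaneous_mtbf(0.4, 0.7, 400.0))
```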
===Note About Applicability===&lt;br /&gt;
The [[Duane Model|Duane]] and Crow-AMSAA models are the most frequently used reliability growth models. Their relationship comes from the fact that both make use of the underlying observed linear relationship between the logarithm of cumulative MTBF and cumulative test time. However, the Duane model does not provide a way to test whether the change in MTBF observed over time is significantly different from what might be seen due to random error between phases; the Crow-AMSAA model allows for such assessments. The Crow-AMSAA model also allows for the development of hypothesis testing procedures to determine whether growth is present in the data (where &amp;lt;math&amp;gt;\beta &amp;lt;1\,\!&amp;lt;/math&amp;gt; indicates that there is growth in MTBF, &amp;lt;math&amp;gt;\beta =1\,\!&amp;lt;/math&amp;gt; indicates a constant MTBF and &amp;lt;math&amp;gt;\beta &amp;gt;1\,\!&amp;lt;/math&amp;gt; indicates a decreasing MTBF). Additionally, the Crow-AMSAA model views the process of reliability growth as probabilistic, while the Duane model views it as deterministic.&lt;br /&gt;
&lt;br /&gt;
==Failure Times Data==&lt;br /&gt;
A description of Failure Times Data is presented on the [[RGA Data Types#Failure_Times_Data|RGA Data Types]] page.&lt;br /&gt;
===Parameter Estimation for Failure Times Data=== &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM OTHER LOCATIONS IN THIS DOCUMENT AND ALSO FROM Crow Extended - Continuous Evaluation. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
The parameters for the Crow-AMSAA (NHPP) model are estimated using maximum likelihood estimation (MLE). The probability density function (&#039;&#039;pdf&#039;&#039;) of the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; event given that the &amp;lt;math&amp;gt;{{(i-1)}^{th}}\,\!&amp;lt;/math&amp;gt; event occurred at &amp;lt;math&amp;gt;{{T}_{i-1}}\,\!&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;f({{T}_{i}}|{{T}_{i-1}})=\frac{\beta }{\eta }{{\left( \frac{{{T}_{i}}}{\eta } \right)}^{\beta -1}}\cdot {{e}^{-\tfrac{1}{{{\eta }^{\beta }}}\left( T_{i}^{\beta }-T_{i-1}^{\beta } \right)}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Letting &amp;lt;math&amp;gt;\lambda =\tfrac{1}{{{\eta }^{\beta }}}\,\!&amp;lt;/math&amp;gt;, the likelihood function is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;L={{\lambda }^{n}}{{\beta }^{n}}{{e}^{-\lambda {{T}^{*\beta }}}}\underset{i=1}{\overset{n}{\mathop \prod }}\,T_{i}^{\beta -1}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;{{T}^{*}}\,\!&amp;lt;/math&amp;gt; is the termination time and is given by: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{T}^{*}}=\left\{ \begin{matrix}&lt;br /&gt;
   {{T}_{n}}\text{ if the test is failure terminated}  \\&lt;br /&gt;
   T&amp;gt;{{T}_{n}}\text{ if the test is time terminated}  \\&lt;br /&gt;
\end{matrix} \right\}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Taking the natural log on both sides: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\Lambda =n\ln \lambda +n\ln \beta -\lambda {{T}^{*\beta }}+(\beta -1)\underset{i=1}{\overset{n}{\mathop \sum }}\,\ln {{T}_{i}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And differentiating with respect to &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; yields: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\frac{\partial \Lambda }{\partial \lambda }=\frac{n}{\lambda }-{{T}^{*\beta }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set equal to zero and solve for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; : &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\hat{\lambda }=\frac{n}{{{T}^{*\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now differentiate with respect to &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; : &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\frac{\partial \Lambda }{\partial \beta }=\frac{n}{\beta }-\lambda {{T}^{*\beta }}\ln {{T}^{*}}+\underset{i=1}{\overset{n}{\mathop \sum }}\,\ln {{T}_{i}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set equal to zero and solve for &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; : &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\hat{\beta }=\frac{n}{n\ln {{T}^{*}}-\underset{i=1}{\overset{n}{\mathop{\sum }}}\,\ln {{T}_{i}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This equation is used for both failure terminated and time terminated test data.&lt;br /&gt;
&lt;br /&gt;
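The closed-form estimators above translate directly into code. The sketch below assumes a time terminated test; the failure times and termination time are invented for illustration:&lt;br /&gt;

```python
from math import log

def crow_amsaa_mle(failure_times, T_star):
    """MLEs for the Crow-AMSAA model from exact failure times.

    failure_times -- ordered cumulative failure times T_1 < ... < T_n
    T_star        -- termination time: T_n if failure terminated, or the
                     later test end time if time terminated
    """
    n = len(failure_times)
    beta_hat = n / (n * log(T_star) - sum(log(t) for t in failure_times))
    lam_hat = n / T_star ** beta_hat
    return beta_hat, lam_hat

times = [2.7, 10.3, 12.5, 30.6, 57.0, 61.3, 80.0, 109.5, 125.0, 128.6]
beta_hat, lam_hat = crow_amsaa_mle(times, 150.0)  # time terminated at 150 hr
print(beta_hat, lam_hat)
```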
====Biasing and Unbiasing of Beta==== &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM: Crow Extended - Continuous Evaluation. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
The equation above returns the biased estimate, &amp;lt;math&amp;gt;\hat{\beta }\,\!&amp;lt;/math&amp;gt;. The unbiased estimate, &amp;lt;math&amp;gt;\bar{\beta }\,\!&amp;lt;/math&amp;gt;, can be calculated by using the following relationships. For time terminated data (the test ends after a specified test time):&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\bar{\beta }=\frac{N-1}{N}\hat{\beta }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For failure terminated data (the test ends after a specified number of failures):&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\bar{\beta }=\frac{N-2}{N-1}\hat{\beta }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default &amp;lt;math&amp;gt;\hat{\beta }\,\!&amp;lt;/math&amp;gt; is returned. &amp;lt;math&amp;gt;\bar{\beta }\,\!&amp;lt;/math&amp;gt; can be returned by selecting the &#039;&#039;&#039;Calculate unbiased beta&#039;&#039;&#039; option on the Calculations tab of the Application Setup.&lt;br /&gt;
&lt;br /&gt;
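The bias corrections are simple multiplicative factors; a sketch (the beta value is illustrative):&lt;br /&gt;

```python
def unbiased_beta(beta_hat, N, failure_terminated):
    """Apply the Crow-AMSAA bias correction to the MLE beta_hat.

    N -- number of observed failures
    """
    if failure_terminated:
        return (N - 2) / (N - 1) * beta_hat
    return (N - 1) / N * beta_hat

# Time terminated test with 10 failures
print(unbiased_beta(0.72, 10, failure_terminated=False))
# Failure terminated test with 10 failures
print(unbiased_beta(0.72, 10, failure_terminated=True))
```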
===Cramér-von Mises Test===&lt;br /&gt;
The Cramér-von Mises (CVM) goodness-of-fit test validates the hypothesis that the data follows a non-homogeneous Poisson process with a failure intensity equal to &amp;lt;math&amp;gt;u(t)=\lambda \beta {{t}^{\beta -1}}\,\!&amp;lt;/math&amp;gt;. This test can be applied when the failure data is complete over the continuous interval &amp;lt;math&amp;gt;[0,{{T}_{q}}]\,\!&amp;lt;/math&amp;gt; with no gaps in the data. The CVM test applies to all data types for which the failure times are known, except for Fleet data.&lt;br /&gt;
&lt;br /&gt;
If the individual failure times are known, a Cramér-von Mises statistic is used to test the null hypothesis that a non-homogeneous Poisson process with the failure intensity function &amp;lt;math&amp;gt;\rho \left( t \right)=\lambda \,\beta \,{{t}^{\beta -1}}\left( \lambda &amp;gt;0,\beta &amp;gt;0,t&amp;gt;0 \right)\,\!&amp;lt;/math&amp;gt; properly describes the reliability growth of a system. The Cramér-von Mises goodness-of-fit statistic is then given by the following expression:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;C_{M}^{2}=\frac{1}{12M}+\underset{i=1}{\overset{M}{\mathop \sum }}\,{{\left[ {{\left( \frac{{{T}_{i}}}{T} \right)}^{{\bar{\beta }}}}-\frac{2i-1}{2M} \right]}^{2}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;M=\left\{ \begin{matrix}&lt;br /&gt;
   N\text{ if the test is time terminated}  \\&lt;br /&gt;
   N-1\text{ if the test is failure terminated}  \\&lt;br /&gt;
\end{matrix} \right\}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
:&amp;lt;math&amp;gt;{\bar{\beta }}\,\!&amp;lt;/math&amp;gt; is the unbiased estimate of beta.&lt;br /&gt;
&lt;br /&gt;
The failure times, &amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt;, must be ordered so that &amp;lt;math&amp;gt;{{T}_{1}}&amp;lt;{{T}_{2}}&amp;lt;\ldots &amp;lt;{{T}_{M}}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
If the statistic &amp;lt;math&amp;gt;C_{M}^{2}\,\!&amp;lt;/math&amp;gt; is less than the critical value corresponding to &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; for a chosen significance level, then the null hypothesis that the Crow-AMSAA model adequately fits the data cannot be rejected.&lt;br /&gt;
&lt;br /&gt;
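A sketch of the statistic (the failure times and unbiased beta below are invented for illustration; the resulting value would be compared against the critical value for &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; from the table that follows):&lt;br /&gt;

```python
def cramer_von_mises(failure_times, T, beta_unbiased, failure_terminated):
    """Cramer-von Mises statistic C_M^2 for the Crow-AMSAA model.

    failure_times must be ordered: T_1 < T_2 < ... < T_N.
    For a failure terminated test, T is the last failure time and the
    last failure is excluded from the sum (M = N - 1).
    """
    N = len(failure_times)
    M = N - 1 if failure_terminated else N
    stat = 1.0 / (12.0 * M)
    for i, t in enumerate(failure_times[:M], start=1):
        stat += ((t / T) ** beta_unbiased - (2 * i - 1) / (2.0 * M)) ** 2
    return stat, M

stat, M = cramer_von_mises([2.7, 10.3, 12.5, 30.6, 57.0], 60.0, 0.65,
                           failure_terminated=False)
print(stat, M)
```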
====Critical Values====&lt;br /&gt;
The following table displays the critical values for the Cramér-von Mises goodness-of-fit test given the sample size, &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt;, and the significance level, &amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;6&amp;quot; style=&amp;quot;text-align:center&amp;quot;|&#039;&#039;&#039;Critical values for Cramér-von Mises test&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| ||colspan=&amp;quot;5&amp;quot; style=&amp;quot;text-align:center;&amp;quot;|&amp;lt;math&amp;gt;\alpha \,\!&amp;lt;/math&amp;gt; 				&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt;|| 0.20||	0.15||	0.10||	0.05||	0.01&lt;br /&gt;
|-&lt;br /&gt;
|2||	0.138||	0.149||	0.162||	0.175||	0.186&lt;br /&gt;
|-&lt;br /&gt;
|3||	0.121||	0.135||	0.154||	0.184||0.23&lt;br /&gt;
|-&lt;br /&gt;
|4||	0.121||	0.134||	0.155||	0.191||0.28&lt;br /&gt;
|-&lt;br /&gt;
|5||	0.121||	0.137||	0.160||	0.199||0.30&lt;br /&gt;
|-&lt;br /&gt;
|6||	0.123||	0.139||	0.162||	0.204||0.31&lt;br /&gt;
|-&lt;br /&gt;
|7||	0.124||	0.140||	0.165||	0.208||0.32&lt;br /&gt;
|-&lt;br /&gt;
|8||	0.124||	0.141||	0.165||	0.210||0.32&lt;br /&gt;
|-&lt;br /&gt;
|9||	0.125||	0.142||	0.167||	0.212||0.32&lt;br /&gt;
|-&lt;br /&gt;
|10||	0.125||	0.142||	0.167||	0.212||0.32&lt;br /&gt;
|-&lt;br /&gt;
|11||	0.126||	0.143||	0.169||	0.214||0.32&lt;br /&gt;
|-&lt;br /&gt;
|12||	0.126||	0.144||	0.169||	0.214||0.32&lt;br /&gt;
|-&lt;br /&gt;
|13||	0.126||	0.144||	0.169||	0.214||0.33&lt;br /&gt;
|-&lt;br /&gt;
|14||	0.126||	0.144||	0.169||	0.214||0.33&lt;br /&gt;
|-&lt;br /&gt;
|15||	0.126||	0.144||	0.169||	0.215||0.33&lt;br /&gt;
|-&lt;br /&gt;
|16||	0.127||	0.145||	0.171||	0.216|| 0.33&lt;br /&gt;
|-&lt;br /&gt;
|17||	0.127||	0.145||	0.171||	0.217||	0.33&lt;br /&gt;
|-&lt;br /&gt;
|18||	0.127||	0.146||	0.171||	0.217||	0.33&lt;br /&gt;
|-&lt;br /&gt;
|19||	0.127||	0.146||	0.171||	0.217||	0.33&lt;br /&gt;
|-&lt;br /&gt;
|20||	0.128||	0.146||	0.172||	0.217||	0.33&lt;br /&gt;
|-&lt;br /&gt;
|30||	0.128||	0.146||	0.172||	0.218||	0.33&lt;br /&gt;
|-&lt;br /&gt;
|60||	0.128||	0.147||	0.173||	0.220||	0.33&lt;br /&gt;
|-&lt;br /&gt;
|100||	0.129||	0.147||	0.173||	0.220||	0.34&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The significance level represents the probability of rejecting the hypothesis even though it is true. So, there is a risk associated with applying the goodness-of-fit test (i.e., there is a chance that the CVM test will indicate that the model does not fit when in fact it does). As the significance level increases, the critical value decreases and the CVM test becomes more stringent. Keep in mind that the CVM test passes when the test statistic is less than the critical value; therefore, the larger the critical value, the easier it is for the data to pass the test (e.g., a CVM test with a significance level of 0.1 is stricter than a test with a significance level of 0.01).&lt;br /&gt;
&lt;br /&gt;
===Confidence Bounds===&lt;br /&gt;
The RGA software provides two methods to estimate the confidence bounds for the Crow-AMSAA model when applied to developmental testing data. The Fisher Matrix approach is based on the Fisher information matrix and is commonly employed in the reliability field. The Crow bounds were developed by Dr. Larry Crow. See the [[Crow-AMSAA Confidence Bounds]] chapter for details on how the confidence bounds are calculated. &lt;br /&gt;
&lt;br /&gt;
===Failure Times Data Examples===&lt;br /&gt;
====Example - Parameter Estimation====&lt;br /&gt;
&lt;br /&gt;
{{:Crow-AMSAA Parameter Estimation Example}}&lt;br /&gt;
&lt;br /&gt;
{{:Crow-AMSAA_Confidence_Bounds_Example}}&lt;br /&gt;
&lt;br /&gt;
==Multiple Systems==&lt;br /&gt;
When more than one system is placed on test during developmental testing, there are multiple data types which are available depending on the testing strategy and the format of the data. The data types that allow for the analysis of multiple systems using the Crow-AMSAA (NHPP) model are given below:&lt;br /&gt;
&lt;br /&gt;
*[[Crow-AMSAA_(NHPP)#Multiple Systems (Known Operating Times)|Multiple Systems (Known Operating Times)]]&lt;br /&gt;
*[[Crow-AMSAA_(NHPP)#Multiple Systems (Concurrent Operating Times)|Multiple Systems (Concurrent Operating Times)]]&lt;br /&gt;
*[[Crow-AMSAA_(NHPP)#Multiple Systems with Dates|Multiple Systems with Dates]]&lt;br /&gt;
&lt;br /&gt;
===Goodness-of-fit Tests===&lt;br /&gt;
For all multiple systems data types, the [[Crow-AMSAA (NHPP)#Cram.C3.A9r-von_Mises_Test|Cramér-von Mises (CVM) Test]] is available. For Multiple Systems (Concurrent Operating Times) and Multiple Systems with Dates, two additional tests are also available: [[Hypothesis Tests#Laplace_Trend_Test|Laplace Trend Test]] and [[Hypothesis Tests#Common_Beta_Hypothesis_Test|Common Beta Hypothesis]].&lt;br /&gt;
&lt;br /&gt;
===Multiple Systems (Known Operating Times)===&lt;br /&gt;
&lt;br /&gt;
A description of Multiple Systems (Known Operating Times) is presented on the [[RGA Data Types#Multiple_Systems_.28Known_Operating_Times.29|RGA Data Types]] page.&lt;br /&gt;
&lt;br /&gt;
Consider the data in the table below for two prototypes that were placed in a reliability growth test.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&#039;&#039;&#039;Developmental Test Data for Two Identical Systems&#039;&#039;&#039;	&amp;lt;/center&amp;gt;	&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot; style=&amp;quot;border-collapse: collapse;&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;5&amp;quot;&lt;br /&gt;
!Failure Number&lt;br /&gt;
!Failed Unit&lt;br /&gt;
!Test Time Unit 1 (hr)&lt;br /&gt;
!Test Time Unit 2 (hr)&lt;br /&gt;
!Total Test Time (hr)&lt;br /&gt;
!&amp;lt;math&amp;gt;ln{(T)}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|1||	1||	1.0||	1.7||	2.7||	0.99325&lt;br /&gt;
|-&lt;br /&gt;
|2||	1||	7.3||	3.0||	10.3||	2.33214&lt;br /&gt;
|-&lt;br /&gt;
|3||	2||	8.7||	3.8||	12.5||	2.52573&lt;br /&gt;
|-&lt;br /&gt;
|4||	2||	23.3||	7.3||	30.6||	3.42100&lt;br /&gt;
|-&lt;br /&gt;
|5||	2||	46.4||	10.6||	57.0||	4.04305&lt;br /&gt;
|-&lt;br /&gt;
|6||	1||	50.1||	11.2||	61.3||	4.11578&lt;br /&gt;
|-&lt;br /&gt;
|7||	1||	57.8||	22.2||	80.0||	4.38203&lt;br /&gt;
|-&lt;br /&gt;
|8||	2||	82.1||	27.4||	109.5||	4.69592&lt;br /&gt;
|-&lt;br /&gt;
|9||	2||	86.6||	38.4||	125.0||4.82831&lt;br /&gt;
|-&lt;br /&gt;
|10||	1||	87.0||	41.6||	128.6||	4.85671&lt;br /&gt;
|-&lt;br /&gt;
|11||	2||	98.7||	45.1||	143.8||	4.96842&lt;br /&gt;
|-&lt;br /&gt;
|12||	1||	102.2||	65.7||	167.9||	5.12337&lt;br /&gt;
|-&lt;br /&gt;
|13||	1||	139.2	||90.0||229.2||	5.43459&lt;br /&gt;
|-&lt;br /&gt;
|14||	1||	166.6||	130.1||	296.7||	5.69272&lt;br /&gt;
|-&lt;br /&gt;
|15||	2||	180.8||	139.8	||320.6||5.77019&lt;br /&gt;
|-&lt;br /&gt;
|16||	1||	181.3||	146.9||	328.2||	5.79362&lt;br /&gt;
|-&lt;br /&gt;
|17||	2||	207.9||	158.3	||366.2||5.90318&lt;br /&gt;
|-&lt;br /&gt;
|18||	2||	209.8||	186.9||	396.7||	5.98318&lt;br /&gt;
|-&lt;br /&gt;
|19||	2||	226.9||	194.2||	421.1||	6.04287&lt;br /&gt;
|-&lt;br /&gt;
|20||	1||	232.2||	206.0||	438.2||	6.08268&lt;br /&gt;
|-&lt;br /&gt;
|21||	2||	267.5||	233.7||	501.2||	6.21701&lt;br /&gt;
|-&lt;br /&gt;
|22||	2||	330.1||	289.9||	620.0||	6.42972&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The Failed Unit column indicates the system that failed and is meant to be informative, but it does not affect the calculations. To combine the data from both systems, the system ages are added together at the times when a failure occurred. This is seen in the Total Test Time column above. Once the single timeline is generated, then the calculations for the parameters Beta and Lambda are the same as the process presented for [[Crow-AMSAA (NHPP)#Parameter_Estimation_for_Failure_Times_Data|Failure Times Data]]. The results of this analysis would match the results of [[Crow-AMSAA (NHPP)#Failure_Times_-_Example_1|Failure Times - Example 1]].&lt;br /&gt;
&lt;br /&gt;
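The combination step is just an element-wise sum of the system ages at each failure. A short sketch using the first four rows of the table above:&lt;br /&gt;

```python
from math import log

# (failed unit, age of unit 1, age of unit 2) -- first four rows of the table above
events = [(1, 1.0, 1.7), (1, 7.3, 3.0), (2, 8.7, 3.8), (2, 23.3, 7.3)]

# The failed-unit column is informative only; the combined timeline is the sum of ages
totals = [round(age1 + age2, 1) for _, age1, age2 in events]
print(totals)                              # [2.7, 10.3, 12.5, 30.6]
print([round(log(t), 5) for t in totals])  # matches the ln(T) column above
```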
===Multiple Systems (Concurrent Operating Times)===&lt;br /&gt;
A description of Multiple Systems (Concurrent Operating Times) is presented on the [[RGA Data Types#Multiple_Systems_.28Concurrent_Operating_Times.29|RGA Data Types]] page.&lt;br /&gt;
&lt;br /&gt;
====Parameter Estimation for Multiple Systems (Concurrent Operating Times)====&lt;br /&gt;
To estimate the parameters, the equivalent system must first be determined. The equivalent single system (ESS) is calculated by summing the usage across all systems when a failure occurs. Keep in mind that Multiple Systems (Concurrent Operating Times) assumes that the systems are running simultaneously and accumulate the same usage. If the systems have different end times, then the equivalent system accounts only for the systems that are operating when a failure occurred. Systems with a start time greater than zero are shifted back to t = 0; this is the same as having a start time equal to zero with a converted end time equal to the end time minus the start time. In addition, all failure times are adjusted by subtracting the start time from each value to ensure that all values occur between t = 0 and the adjusted end time. A start time greater than zero indicates that it is not known what events occurred before the start time; this may happen when the events during that period were not tracked or recorded properly. &lt;br /&gt;
&lt;br /&gt;
As an example, consider two systems that entered a reliability growth test. Both systems have a start time equal to zero and both begin the test with the same configuration. System 1 operated for 100 hours and System 2 operated for 125 hours. The failure times for each system are given below:&lt;br /&gt;
&lt;br /&gt;
*System 1: 25, 47, 80&lt;br /&gt;
*System 2: 15, 62, 89, 110&lt;br /&gt;
&lt;br /&gt;
To build the ESS, the total hours accumulated across all operating systems are summed at each failure time. For failures that occur while both systems are running, the ESS time is therefore twice the failure time; for the failure at 110 hours, which occurs after System 1 completed its 100 hours of testing, the ESS time is 100 + 110 = 210. Therefore, given the data for Systems 1 and 2, the ESS is comprised of the following events: 30, 50, 94, 124, 160, 178, 210.&lt;br /&gt;
&lt;br /&gt;
The ESS combines the data from both systems into a single timeline. The termination time for the ESS is (100 + 125) = 225 hours. The parameter estimates for &amp;lt;math&amp;gt;\hat{\beta }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat{\lambda}\,\!&amp;lt;/math&amp;gt; are then calculated using the ESS. This process is the same as the method for [[Crow-AMSAA (NHPP)#Parameter_Estimation_for_Failure_Times_Data|Failure Times data]].&lt;br /&gt;
&lt;br /&gt;
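The ESS construction can be sketched directly. The code below reproduces the worked numbers above for the two-system case; the function is a simplified illustration that assumes all systems start at t = 0:&lt;br /&gt;

```python
def equivalent_single_system(failure_times_per_system, end_times):
    """Build the ESS event times for concurrently operating systems.

    At each failure time t, every system contributes min(t, its end time)
    of accumulated usage. Simplified sketch: all systems start at t = 0.
    """
    all_failures = sorted(t for times in failure_times_per_system for t in times)
    ess = [sum(min(t, end) for end in end_times) for t in all_failures]
    termination = sum(end_times)
    return ess, termination

ess, term = equivalent_single_system([[25, 47, 80], [15, 62, 89, 110]], [100, 125])
print(ess)   # [30, 50, 94, 124, 160, 178, 210]
print(term)  # 225
```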
====Example - Concurrent Operating Times====&lt;br /&gt;
{{:Concurrent Operating Times - Crow-AMSAA (NHPP) Example}}&lt;br /&gt;
&lt;br /&gt;
===Multiple Systems with Dates===&lt;br /&gt;
An overview of the Multiple Systems with Dates data type is presented on the [[RGA Data Types#Multiple_Systems_with_Dates|RGA Data Types]] page. While Multiple Systems with Dates requires a date for each event, including the start and end times for each system, once the equivalent single system is determined, the parameter estimation is the same as it is for Multiple Systems (Concurrent Operating Times). See [[Crow-AMSAA_(NHPP)#Parameter_Estimation_for_Multiple_Systems_.28Concurrent_Operating_Times.29|Parameter Estimation for Multiple Systems (Concurrent Operating Times)]] for details.&lt;br /&gt;
&lt;br /&gt;
==Grouped Data== &amp;lt;!-- THIS SECTION HEADER IS LINKED FROM: Operational Mission Profile Testing, Crow Extended, and Fleet Data Analysis. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). --&amp;gt;&lt;br /&gt;
A description of Grouped Data is presented on the [[RGA Data Types#Grouped_Failure_Times|RGA Data Types]] page.&lt;br /&gt;
===Parameter Estimation for Grouped Data===&lt;br /&gt;
For analyzing grouped data, we follow the same logic described previously for the [[Duane Model|Duane]] model. If the &amp;lt;math&amp;gt;E[N(T)]\,\!&amp;lt;/math&amp;gt; equation from the [[Crow-AMSAA_(NHPP)#Background|Background]] section above is linearized: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\ln [E(N(T))]=\ln \lambda +\beta \ln T&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
According to Crow [[RGA_References|[9]]], the likelihood function for the grouped data case (where &amp;lt;math&amp;gt;{{n}_{1}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{n}_{2}},\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{n}_{3}},\ldots ,\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;{{n}_{k}}\,\!&amp;lt;/math&amp;gt; failures are observed and &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; is the number of groups) is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\underset{i=1}{\overset{k}{\mathop \prod }}\,\underset{}{\overset{}{\mathop{\Pr }}}\,({{N}_{i}}={{n}_{i}})=\underset{i=1}{\overset{k}{\mathop \prod }}\,\frac{{{(\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta })}^{{{n}_{i}}}}\cdot {{e}^{-(\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta })}}}{{{n}_{i}}!}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the MLE of &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; based on this relationship is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\hat{\lambda }=\frac{n}{T_{k}^{\hat{\beta }}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;n \,\!&amp;lt;/math&amp;gt; is the total number of failures from all the groups.&lt;br /&gt;
&lt;br /&gt;
The estimate of &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; is the value &amp;lt;math&amp;gt;\hat{\beta }\,\!&amp;lt;/math&amp;gt; that satisfies: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\underset{i=1}{\overset{k}{\mathop \sum }}\,{{n}_{i}}\left[ \frac{T_{i}^{\hat{\beta }}\ln {{T}_{i}}-T_{i-1}^{\hat{\beta }}\ln {{T}_{i-1}}}{T_{i}^{\hat{\beta }}-T_{i-1}^{\hat{\beta }}}-\ln {{T}_{k}} \right]=0\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See [[Crow-AMSAA Confidence Bounds#Grouped_Data|Crow-AMSAA Confidence Bounds]] for details on how confidence bounds for grouped data are calculated.&lt;br /&gt;
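To make the estimation procedure concrete, the score equation for beta above can be solved numerically, after which lambda follows in closed form. The Python sketch below uses simple bisection; the helper names and the sample interval data are illustrative assumptions, not values from the text.

```python
import math

def beta_score(intervals, beta):
    """Left-hand side of the MLE equation for beta-hat.
    intervals: list of (T_i, n_i) with increasing interval end times;
    the first interval starts at T_0 = 0, so its T_{i-1} log term
    drops out by convention."""
    ln_Tk = math.log(intervals[-1][0])
    total, prev = 0.0, 0.0
    for Ti, ni in intervals:
        num = Ti ** beta * math.log(Ti)
        if prev > 0.0:
            num -= prev ** beta * math.log(prev)
        den = Ti ** beta - prev ** beta
        total += ni * (num / den - ln_Tk)
        prev = Ti
    return total

def grouped_mle(intervals, lo=1e-4, hi=20.0, tol=1e-10):
    """Bisection for beta-hat, then lambda-hat = n / T_k**beta-hat."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # The score is positive for small beta and negative for large
        # beta, so the root is bracketed.
        if beta_score(intervals, mid) > 0.0:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    n = sum(ni for _, ni in intervals)
    lam = n / intervals[-1][0] ** beta
    return beta, lam

# Hypothetical grouped data: (interval end time, failures observed)
data = [(62, 12), (100, 6), (187, 15), (210, 3), (350, 18), (500, 16)]
beta_hat, lam_hat = grouped_mle(data)
```

A production implementation would use a safeguarded Newton method, but bisection is sufficient here because the score function changes sign exactly once on the bracket.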
&lt;br /&gt;
===Chi-Squared Test===&lt;br /&gt;
A chi-squared goodness-of-fit test is used to test the null hypothesis that the Crow-AMSAA reliability model adequately represents a set of grouped data. This test is applied only when the data is grouped. The expected number of failures in the interval from &amp;lt;math&amp;gt;{{T}_{i-1}}\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; is approximated by: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{\hat{\theta }}_{i}}=\hat{\lambda }\left( T_{i}^{{\hat{\beta }}}-T_{i-1}^{{\hat{\beta }}} \right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For each interval, &amp;lt;math&amp;gt;{{\hat{\theta }}_{i}}\,\!&amp;lt;/math&amp;gt; shall not be less than 5 and, if necessary, adjacent intervals may have to be combined so that the expected number of failures in any combined interval is at least 5. Let the number of intervals after this recombination be &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt;, and let the observed number of failures in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; new interval be &amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt;. Finally, let the expected number of failures in the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; new interval be &amp;lt;math&amp;gt;{{\hat{\theta }}_{i}}\,\!&amp;lt;/math&amp;gt;. Then the following statistic is approximately distributed as a chi-squared random variable with degrees of freedom &amp;lt;math&amp;gt;d-2\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{\chi }^{2}}=\underset{i=1}{\overset{d}{\mathop \sum }}\,\frac{{{({{N}_{i}}-{{\hat{\theta }}_{i}})}^{2}}}{{{\hat{\theta }}_{i}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The null hypothesis that the Crow-AMSAA model adequately fits the grouped data is rejected if the &amp;lt;math&amp;gt;{{\chi }^{2}}\,\!&amp;lt;/math&amp;gt; statistic exceeds the critical value for the chosen significance level. Critical values for this statistic can be found in chi-squared distribution tables.&lt;br /&gt;
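The pooling-then-summing procedure can be sketched as follows. This is a minimal illustration with assumed (not fitted) values for lambda and beta; in practice they would come from the MLE step described earlier, and the resulting statistic would be compared against a chi-squared table at d - 2 degrees of freedom.

```python
def chi_squared_statistic(interval_ends, observed, lam, beta):
    """Chi-squared GOF statistic for grouped Crow-AMSAA data.
    Pools adjacent intervals until every expected count is at least 5."""
    # Expected failures per interval: lambda * (T_i^beta - T_{i-1}^beta)
    expected, prev = [], 0.0
    for T in interval_ends:
        expected.append(lam * (T ** beta - prev ** beta))
        prev = T
    # Pool adjacent intervals until the accumulated expected count
    # reaches 5, then start a new pooled interval.
    pooled_obs, pooled_exp = [], []
    acc_o = acc_e = 0.0
    for o, e in zip(observed, expected):
        acc_o += o
        acc_e += e
        if acc_e >= 5.0:
            pooled_obs.append(acc_o)
            pooled_exp.append(acc_e)
            acc_o = acc_e = 0.0
    if acc_e > 0.0:
        # Fold any undersized remainder into the last pooled interval.
        pooled_obs[-1] += acc_o
        pooled_exp[-1] += acc_e
    chi2 = sum((o - e) ** 2 / e for o, e in zip(pooled_obs, pooled_exp))
    return chi2, len(pooled_exp)  # compare chi2 at d - 2 dof

# Assumed example values (not from the text):
chi2, d = chi_squared_statistic([62, 100, 187, 210, 350, 500],
                                [12, 6, 15, 3, 18, 16],
                                lam=0.5, beta=0.8)
```

With these assumed parameters, the fourth interval has an expected count below 5 and is pooled with the fifth before the statistic is computed.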
&lt;br /&gt;
===Grouped Data Examples===&lt;br /&gt;
====Example - Simple Grouped====&lt;br /&gt;
{{:Crow-AMSAA_Model_-_Grouped_Data_Example}}&lt;br /&gt;
&lt;br /&gt;
====Example - Helicopter System====&lt;br /&gt;
{{:Crow-AMSAA_Model_-_Helicopter_System_Example}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;noprint&amp;quot;&amp;gt;&lt;br /&gt;
{{Examples Box|RGA Examples|&amp;lt;p&amp;gt;More grouped data examples are available! See also:&amp;lt;/p&amp;gt; &lt;br /&gt;
{{Examples Link External|http://www.reliasoft.com/rga/examples/rgex1/index.htm|Simple MTBF Determination}}&amp;lt;nowiki/&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- ==Goodness-of-Fit Tests== This section is no longer necessary--&amp;gt;&lt;br /&gt;
&amp;lt;!-- {{:Goodness-of-Fit Tests}} This section is no longer necessary--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Missing Data==&lt;br /&gt;
{{:Gap Analysis}}&lt;br /&gt;
&lt;br /&gt;
==Discrete Data==&lt;br /&gt;
&lt;br /&gt;
The Crow-AMSAA model can be adapted for the analysis of &#039;&#039;success/failure&#039;&#039; data (also called &#039;&#039;discrete&#039;&#039; or &#039;&#039;attribute&#039;&#039; data). The following discrete data types are available: &lt;br /&gt;
&lt;br /&gt;
*Sequential &lt;br /&gt;
*Grouped per Configuration &lt;br /&gt;
*Mixed&lt;br /&gt;
&lt;br /&gt;
Sequential data and Grouped per Configuration are very similar as the parameter estimation methodology is the same for both data types. Mixed data is a combination of Sequential Data and Grouped per Configuration and is presented in [[Crow-AMSAA (NHPP)#Mixed_Data|Mixed Data]]. &lt;br /&gt;
&lt;br /&gt;
===Grouped per Configuration===&lt;br /&gt;
Suppose system development is represented by &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; configurations. This corresponds to &amp;lt;math&amp;gt;i-1\,\!&amp;lt;/math&amp;gt; configuration changes, unless fixes are applied at the end of the test phase, in which case there would be &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; configuration changes. Let &amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt; be the number of trials during configuration &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;{{M}_{i}}\,\!&amp;lt;/math&amp;gt; be the number of failures during configuration &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;. Then the cumulative number of trials through configuration &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, namely &amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt;, is the sum of the &amp;lt;math&amp;gt;{{N}_{j}}\,\!&amp;lt;/math&amp;gt; through configuration &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, or: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{T}_{i}}=\underset{j=1}{\overset{i}{\mathop \sum }}\,{{N}_{j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the cumulative number of failures through configuration &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, namely &amp;lt;math&amp;gt;{{K}_{i}}\,\!&amp;lt;/math&amp;gt;, is the sum of the &amp;lt;math&amp;gt;{{M}_{j}}\,\!&amp;lt;/math&amp;gt; through configuration &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, or: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{K}_{i}}=\underset{j=1}{\overset{i}{\mathop \sum }}\,{{M}_{j}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The expected value of &amp;lt;math&amp;gt;{{K}_{i}}\,\!&amp;lt;/math&amp;gt;, denoted &amp;lt;math&amp;gt;E[{{K}_{i}}]\,\!&amp;lt;/math&amp;gt;, is defined as the expected number of failures by the end of configuration &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;. Applying the learning curve property to &amp;lt;math&amp;gt;E[{{K}_{i}}]\,\!&amp;lt;/math&amp;gt; implies: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;E\left[ {{K}_{i}} \right]=\lambda T_{i}^{\beta }\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Denote &amp;lt;math&amp;gt;{{f}_{1}}\,\!&amp;lt;/math&amp;gt; as the probability of failure for configuration 1 and use it to develop a generalized equation for &amp;lt;math&amp;gt;{{f}_{i}}\,\!&amp;lt;/math&amp;gt; in terms of the &amp;lt;math&amp;gt;{{T}_{i}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{N}_{i}}\,\!&amp;lt;/math&amp;gt;. From the equation above, the expected number of failures by the end of configuration 1 is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;E\left[ {{K}_{1}} \right]=\lambda T_{1}^{\beta }={{f}_{1}}{{N}_{1}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\therefore {{f}_{1}}=\frac{\lambda T_{1}^{\beta }}{{{N}_{1}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Applying the &amp;lt;math&amp;gt;E\left[ {{K}_{i}}\right]\,\!&amp;lt;/math&amp;gt; equation again and noting that the expected number of failures by the end of configuration 2 is the sum of the expected number of failures in configuration 1 and the expected number of failures in configuration 2: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   E\left[ {{K}_{2}} \right]  = &amp;amp; \lambda T_{2}^{\beta } \\ &lt;br /&gt;
  = &amp;amp; {{f}_{1}}{{N}_{1}}+{{f}_{2}}{{N}_{2}} \\ &lt;br /&gt;
  = &amp;amp; \lambda T_{1}^{\beta }+{{f}_{2}}{{N}_{2}}  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\therefore {{f}_{2}}=\frac{\lambda T_{2}^{\beta }-\lambda T_{1}^{\beta }}{{{N}_{2}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By this method of inductive reasoning, a generalized equation for the failure probability on a configuration basis, &amp;lt;math&amp;gt;{{f}_{i}}\,\!&amp;lt;/math&amp;gt;, is obtained, such that: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{f}_{i}}=\frac{\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta }}{{{N}_{i}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this equation, &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; represents the configuration number. Thus, an equation for the reliability (probability of success) for the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; configuration is obtained: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{i}}=1-{{f}_{i}}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
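As a quick numerical illustration, the generalized equations for &amp;lt;math&amp;gt;{{f}_{i}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{R}_{i}}\,\!&amp;lt;/math&amp;gt; can be evaluated directly once lambda and beta are known. All values below (trials per configuration, lambda, beta) are assumed purely for illustration.

```python
def config_probabilities(trials_per_config, lam, beta):
    """Return [(f_i, R_i)] for each configuration, where f_i is the
    failure probability and R_i = 1 - f_i is the reliability."""
    out, T_prev, T = [], 0.0, 0.0
    for N in trials_per_config:
        T_prev, T = T, T + N  # cumulative trials T_{i-1} and T_i
        f = (lam * T ** beta - lam * T_prev ** beta) / N
        out.append((f, 1.0 - f))
    return out

# Hypothetical growth test: 3 configurations with 10, 8 and 9 trials
probs = config_probabilities([10, 8, 9], lam=0.5, beta=0.7)
```

With one trial per configuration (every N_i equal to 1), the same function evaluates the trial-by-trial curve g_i described in the Sequential Data section.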
&lt;br /&gt;
===Sequential Data===&lt;br /&gt;
From the [[Crow-AMSAA (NHPP)#Grouped_per_Configuration|Grouped per Configuration]] section, the following equation is given: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{f}_{i}}=\frac{\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta }}{{{N}_{i}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the special case where &amp;lt;math&amp;gt;{{N}_{i}}=1\,\!&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;, the equation above becomes a smooth curve, &amp;lt;math&amp;gt;{{g}_{i}}\,\!&amp;lt;/math&amp;gt;, that represents the probability of failure for trial by trial data, or: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{g}_{i}}=\lambda \cdot {{i}^{\beta }}-\lambda \cdot {{\left( i-1 \right)}^{\beta }}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;{{N}_{i}}=1\,\!&amp;lt;/math&amp;gt;, this is the same as Sequential Data where systems are tested on a trial-by-trial basis. The equation for the reliability for the &amp;lt;math&amp;gt;{{i}^{th}}\,\!&amp;lt;/math&amp;gt; trial is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
{{R}_{i}}=1-{{g}_{i}}&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Parameter Estimation for Discrete Data===&amp;lt;!-- THIS SECTION HEADER IS LINKED FROM ANOTHER SECTION IN THIS PAGE. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK. --&amp;gt;&lt;br /&gt;
This section describes procedures for estimating the parameters of the Crow-AMSAA model for success/failure data which includes Sequential data and Grouped per Configuration. An example is presented illustrating these concepts. The estimation procedures provide maximum likelihood estimates (MLEs) for the model&#039;s two parameters, &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt;. The MLEs for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; allow for point estimates for the probability of failure, given by: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{\hat{f}}_{i}}=\frac{\hat{\lambda }T_{i}^{{\hat{\beta }}}-\hat{\lambda }T_{i-1}^{{\hat{\beta }}}}{{{N}_{i}}}=\frac{\hat{\lambda }\left( T_{i}^{{\hat{\beta }}}-T_{i-1}^{{\hat{\beta }}} \right)}{{{N}_{i}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the probability of success (reliability) for each configuration &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; is equal to: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;{{\hat{R}}_{i}}=1-{{\hat{f}}_{i}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The likelihood function is: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\underset{i=1}{\overset{k}{\mathop \prod }}\,\left( \begin{matrix}&lt;br /&gt;
   {{N}_{i}}  \\&lt;br /&gt;
   {{M}_{i}}  \\&lt;br /&gt;
\end{matrix} \right){{\left( \frac{\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta }}{{{N}_{i}}} \right)}^{{{M}_{i}}}}{{\left( \frac{{{N}_{i}}-\lambda T_{i}^{\beta }+\lambda T_{i-1}^{\beta }}{{{N}_{i}}} \right)}^{{{N}_{i}}-{{M}_{i}}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Taking the natural log on both sides yields: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \Lambda = &amp;amp; \underset{i=1}{\overset{k}{\mathop \sum }}\,\left[ \ln \left( \begin{matrix}&lt;br /&gt;
   {{N}_{i}}  \\&lt;br /&gt;
   {{M}_{i}}  \\&lt;br /&gt;
\end{matrix} \right)+{{M}_{i}}\left[ \ln (\lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta })-\ln {{N}_{i}} \right] \right] \\ &lt;br /&gt;
 &amp;amp;  &amp;amp; +\underset{i=1}{\overset{k}{\mathop \sum }}\,\left[ ({{N}_{i}}-{{M}_{i}})\left[ \ln ({{N}_{i}}-\lambda T_{i}^{\beta }+\lambda T_{i-1}^{\beta })-\ln {{N}_{i}} \right] \right]  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Taking the derivative with respect to &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; respectively, exact MLEs for &amp;lt;math&amp;gt;\lambda \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta \,\!&amp;lt;/math&amp;gt; are values satisfying the following two equations: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  &amp;amp; \underset{i=1}{\overset{k}{\mathop \sum }}\,{{H}_{i}}\times {{S}_{i}}= &amp;amp; 0 \\ &lt;br /&gt;
 &amp;amp; \underset{i=1}{\overset{k}{\mathop \sum }}\,{{U}_{i}}\times {{S}_{i}}= &amp;amp; 0  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where: &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
   {{H}_{i}}= &amp;amp; \left[ T_{i}^{\beta }\ln {{T}_{i}}-T_{i-1}^{\beta }\ln {{T}_{i-1}} \right] \\ &lt;br /&gt;
  {{S}_{i}}= &amp;amp; \frac{{{M}_{i}}}{\left[ \lambda T_{i}^{\beta }-\lambda T_{i-1}^{\beta } \right]}-\frac{{{N}_{i}}-{{M}_{i}}}{\left[ {{N}_{i}}-\lambda T_{i}^{\beta }+\lambda T_{i-1}^{\beta } \right]} \\ &lt;br /&gt;
  {{U}_{i}}= &amp;amp; T_{i}^{\beta }-T_{i-1}^{\beta }\,  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
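Solving these two score equations generally requires a numerical routine. As a rough, self-contained sketch, the snippet below instead maximizes the log-likelihood Lambda directly over a coarse grid; the data and grid ranges are assumptions for illustration, and a production implementation would solve the H_i, S_i, U_i equations with a Newton-type iteration.

```python
import math

def discrete_log_likelihood(N, M, lam, beta):
    """Log-likelihood for Grouped per Configuration data.
    N, M: trials and failures per configuration."""
    T_prev, T, LL = 0.0, 0.0, 0.0
    for Ni, Mi in zip(N, M):
        T_prev, T = T, T + Ni
        ef = lam * T ** beta - lam * T_prev ** beta  # expected failures
        if 0.0 >= ef or ef >= Ni:
            return float("-inf")  # f_i must be a valid probability
        LL += (math.log(math.comb(Ni, Mi))
               + Mi * (math.log(ef) - math.log(Ni))
               + (Ni - Mi) * (math.log(Ni - ef) - math.log(Ni)))
    return LL

def crude_fit(N, M):
    """Coarse grid search; returns (best LL, lam-hat, beta-hat)."""
    best = (float("-inf"), None, None)
    for i in range(1, 100):          # lambda grid: 0.01 .. 0.99
        for j in range(10, 150):     # beta grid: 0.10 .. 1.49
            lam, beta = i / 100.0, j / 100.0
            LL = discrete_log_likelihood(N, M, lam, beta)
            if LL > best[0]:
                best = (LL, lam, beta)
    return best

N = [10, 8, 9]   # hypothetical trials per configuration
M = [5, 3, 2]    # hypothetical failures per configuration
LL, lam_hat, beta_hat = crude_fit(N, M)
```

The infeasibility guard mirrors the model constraint that each expected failure count must lie strictly between 0 and N_i, so every f_i is a proper probability.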
&lt;br /&gt;
===Example - Grouped per Configuration===&lt;br /&gt;
{{:Crow-AMSAA Discrete Model Example}}&lt;br /&gt;
&lt;br /&gt;
===Mixed Data===&lt;br /&gt;
The Mixed data type provides additional flexibility in how it can handle different testing strategies. Systems can be tested in configuration groups, individually on a trial-by-trial basis, or in a mixed combination of individual trials and configurations of more than one trial. The Mixed data type allows you to enter the data so that it represents how the systems were tested within the total number of trials. For example, if you launched five (5) missiles for a given configuration and none of them failed during testing, then there would be a row within the data sheet indicating that this configuration operated successfully for these five trials. If the very next trial, the sixth, failed, then this would be a separate row within the data. This flexibility in data entry allows for a greater understanding of how the systems were tested simply by examining the data. The methodology for estimating the parameters &amp;lt;math&amp;gt;\hat{\beta }\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\hat{\lambda}\,\!&amp;lt;/math&amp;gt; is the same as that presented in the [[Crow-AMSAA (NHPP)#Grouped_Data|Grouped Data]] section. With Mixed data, the average reliability and average unreliability within a given interval can also be calculated.&lt;br /&gt;
&lt;br /&gt;
The average unreliability is calculated as:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\text{Average Unreliability }({{t}_{1,}}{{t}_{2}})=\frac{\lambda t_{2}^{\beta }-\lambda t_{1}^{\beta }}{{{t}_{2}}-{{t}_{1}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the average reliability is calculated as:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\text{Average Reliability }({{t}_{1,}}{{t}_{2}})=1-\frac{\lambda t_{2}^{\beta }-\lambda t_{1}^{\beta }}{{{t}_{2}}-{{t}_{1}}}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
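These two averages are direct to compute once lambda and beta are available. A minimal sketch, with parameter values assumed for illustration:

```python
def average_unreliability(t1, t2, lam, beta):
    """Average failure probability over the interval (t1, t2)."""
    return (lam * t2 ** beta - lam * t1 ** beta) / (t2 - t1)

def average_reliability(t1, t2, lam, beta):
    """Complement of the average unreliability over (t1, t2)."""
    return 1.0 - average_unreliability(t1, t2, lam, beta)

# Assumed parameters for illustration:
q = average_unreliability(10.0, 20.0, lam=0.3, beta=0.6)
```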
&lt;br /&gt;
====Mixed Data Confidence Bounds====&lt;br /&gt;
&#039;&#039;&#039;Bounds on Average Failure Probability&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
The process to calculate the average unreliability confidence bounds for Mixed data is as follows: &lt;br /&gt;
&lt;br /&gt;
#Calculate the average failure probability &amp;lt;math&amp;gt;({{t}_{1}},{{t}_{2}})\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
#There will exist a &amp;lt;math&amp;gt;{{t}^{*}}\,\!&amp;lt;/math&amp;gt; between &amp;lt;math&amp;gt;{{t}_{1}}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;{{t}_{2}}\,\!&amp;lt;/math&amp;gt; such that the instantaneous unreliability at &amp;lt;math&amp;gt;{{t}^{*}}\,\!&amp;lt;/math&amp;gt; equals the average unreliability &amp;lt;math&amp;gt;({{t}_{1}},{{t}_{2}})\,\!&amp;lt;/math&amp;gt;. The confidence intervals for the instantaneous unreliability at &amp;lt;math&amp;gt;{{t}^{*}}\,\!&amp;lt;/math&amp;gt; are the confidence intervals for the average unreliability &amp;lt;math&amp;gt;({{t}_{1}},{{t}_{2}})\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Bounds on Average Reliability&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
The process to calculate the average reliability confidence bounds for Mixed data is as follows:&lt;br /&gt;
&lt;br /&gt;
#Calculate confidence bounds for average unreliability &amp;lt;math&amp;gt;({{t}_{1}},{{t}_{2}})\,\!&amp;lt;/math&amp;gt; as described above.&lt;br /&gt;
#The confidence bounds for reliability are 1 minus these confidence bounds for average unreliability.&lt;br /&gt;
&lt;br /&gt;
====Example - Mixed Data====&lt;br /&gt;
{{:Crow-AMSAA Discrete Model Grouped Data Example}}&lt;br /&gt;
&lt;br /&gt;
==Change of Slope==&lt;br /&gt;
{{:Change of Slope Analysis}}&lt;br /&gt;
&lt;br /&gt;
==More Examples==&lt;br /&gt;
===Determining Whether a Design Meets the MTBF Goal===&lt;br /&gt;
{{:Failure_Times_Crow-AMSAA_Example}}&lt;br /&gt;
&lt;br /&gt;
===Analyzing Mixed Data for a One-Shot System===&lt;br /&gt;
{{:Mixed_Data_-_Crow-AMSAA_Example}}&lt;/div&gt;</summary>
		<author><name>Sam Eisenberg</name></author>
	</entry>
	<entry>
		<id>https://www.reliawiki.com/index.php?title=Fault_Tree_Diagrams_and_System_Analysis&amp;diff=65540</id>
		<title>Fault Tree Diagrams and System Analysis</title>
		<link rel="alternate" type="text/html" href="https://www.reliawiki.com/index.php?title=Fault_Tree_Diagrams_and_System_Analysis&amp;diff=65540"/>
		<updated>2019-11-27T21:25:00Z</updated>

		<summary type="html">&lt;p&gt;Sam Eisenberg: /* Trigger Event */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:bsbook|9}}&lt;br /&gt;
BlockSim allows system modeling using both reliability block diagrams (RBDs) and fault trees. This chapter introduces basic fault tree analysis and points out the similarities (and differences) between RBDs and fault tree diagrams. Principles, methods and concepts discussed in previous chapters are used. &lt;br /&gt;
&lt;br /&gt;
Fault trees and reliability block diagrams are both symbolic analytical logic techniques that can be applied to analyze system reliability and related characteristics.  Although the symbols and structures of the two diagram types differ, most of the logical constructs in a fault tree diagram (FTD) can also be modeled with a reliability block diagram (RBD). This chapter presents a brief introduction to fault tree analysis concepts and illustrates the similarities between fault tree diagrams and reliability block diagrams.&lt;br /&gt;
&lt;br /&gt;
=Fault Tree Analysis: Brief Introduction=&lt;br /&gt;
Bell Telephone Laboratories developed the concept of fault tree analysis in 1962 for the U.S. Air Force for use with the Minuteman system.  It was later adopted and extensively applied by the Boeing Company.  A fault tree diagram follows a top-down structure and represents a graphical model of the pathways within a system that can lead to a foreseeable, undesirable loss event (or a failure).  The pathways interconnect contributory events and conditions using standard logic symbols (AND, OR, etc.).&lt;br /&gt;
&lt;br /&gt;
Fault tree diagrams consist of gates and events connected with lines.  The AND and OR gates are the two most commonly used gates in a fault tree.  To illustrate the use of these gates, consider two events (called &amp;quot;input events&amp;quot;) that can lead to another event (called the &amp;quot;output event&amp;quot;). If the occurrence of either input event causes the output event to occur, then these input events are connected using an OR gate.  Alternatively, if both input events must occur in order for the output event to occur, then they are connected by an AND gate.  The following figure shows a simple fault tree diagram in which either &#039;&#039;A&#039;&#039; or &#039;&#039;B&#039;&#039; must occur in order for the output event to occur.  In this diagram, the two events are connected to an OR gate.  If the output event is system failure and the two input events are component failures, then this fault tree indicates that the failure of &#039;&#039;A&#039;&#039; or &#039;&#039;B&#039;&#039;  causes the system to fail.  &lt;br /&gt;
&lt;br /&gt;
[[Image:1.png|center|200px|Fault tree where the occurrence of either &#039;&#039;A&#039;&#039; or &#039;&#039;B&#039;&#039; can cause system failure.|link=]]&lt;br /&gt;
&lt;br /&gt;
The RBD equivalent for this configuration is a simple series system with two blocks, &#039;&#039;A&#039;&#039; and &#039;&#039;B&#039;&#039;, as shown next.&lt;br /&gt;
&lt;br /&gt;
[[Image:10.2.png|center|200px|The RBD representation of the fault tree.|link=]]&lt;br /&gt;
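The equivalence between the OR gate and the series RBD can be checked numerically: for independent input events, the output-event probability is one minus the product of the input survival probabilities. The helper below is an illustrative sketch, not BlockSim functionality.

```python
def or_gate_probability(event_probs):
    """Probability of the output event of an OR gate (equivalently,
    the unreliability of a series RBD) for independent input events."""
    survive = 1.0
    for q in event_probs:
        survive *= (1.0 - q)  # all inputs must NOT occur to avoid the output
    return 1.0 - survive

# If A and B fail with probabilities 0.1 and 0.2, the system fails
# with probability 1 - 0.9 * 0.8 = 0.28.
p_sys = or_gate_probability([0.1, 0.2])
```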
&lt;br /&gt;
=Basic Gates=&lt;br /&gt;
Gates are the logic symbols that interconnect contributory events and conditions in a fault tree diagram.  The AND and OR gates described above, as well as a Voting OR gate in which the output event occurs if a certain number of the input events occur (i.e., &#039;&#039;k&#039;&#039;-out-of-&#039;&#039;n&#039;&#039; redundancy), are the most basic types of gates in classical fault tree analysis.   These gates are explicitly provided for in BlockSim and are described in this section along with their BlockSim implementations.  Additional gates are introduced in the following sections.  &lt;br /&gt;
&lt;br /&gt;
A fault tree diagram is always drawn in a top-down manner, with the lowest items being basic event blocks.  Classical fault tree gates have no properties (i.e., they cannot fail).&lt;br /&gt;
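For identical, independent input events, the three basic gates reduce to simple probability formulas: OR is at-least-1-out-of-n, AND is n-out-of-n, and the Voting OR gate is the binomial tail for at-least-k-out-of-n. A small sketch (an illustrative helper, not BlockSim's API):

```python
from math import comb

def voting_or_probability(k, n, q):
    """P(output event) for a k-out-of-n Voting OR gate whose n
    independent input events each occur with probability q."""
    # Binomial tail: probability that at least k of n events occur.
    return sum(comb(n, i) * q ** i * (1.0 - q) ** (n - i)
               for i in range(k, n + 1))
```

Setting k = 1 recovers the OR gate, and k = n recovers the AND gate.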
&lt;br /&gt;
{{:AND Gate Example}}&lt;br /&gt;
&lt;br /&gt;
{{:OR Gate Example}}&lt;br /&gt;
&lt;br /&gt;
{{:Voting OR Gate Example}}&lt;br /&gt;
&lt;br /&gt;
===Combining Basic Gates===&lt;br /&gt;
As in reliability block diagrams where different configuration types can be combined in the same diagram, fault tree analysis gates can also be combined to create more complex representations.  As an example, consider the fault tree diagram shown in the figures below.  &lt;br /&gt;
&lt;br /&gt;
[[Image:10.5.png|center|600px|A sample FTD using different gates.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:10.6.png|center|250px|RBD representation of the FTD shown in figure above.|link=]]&lt;br /&gt;
&lt;br /&gt;
=New BlockSim Gates=&lt;br /&gt;
&lt;br /&gt;
In addition to the gates defined above, other gates exist in classical FTA.  These additional gates (e.g., Sequence Enforcing, Priority AND, etc.) are usually used to describe more complex redundancy configurations and are described in later sections.  First, we will introduce two new advanced gates that can supplement and/or replace classical fault tree gates.  &lt;br /&gt;
These two new gates are the Load Sharing and Standby gates.  Classical fault trees (or any other fault tree standard to our knowledge) do not allow for load sharing redundancy (or event dependency).  To overcome this limitation, and to provide fault trees with the same flexibility as BlockSim&#039;s RBDs, we will define a Load Sharing gate in this section.  Additionally, traditional fault trees do not provide the full capability to model standby redundancy configurations (including the quiescent failure distribution), although basic standby can be represented in traditional fault tree diagrams using a Priority AND gate or a Sequence Enforcing gate, discussed in later sections.  &lt;br /&gt;
&lt;br /&gt;
===Load Sharing Gate===&lt;br /&gt;
&lt;br /&gt;
[[Image:I10.10.png|center|100px|link=|]]&lt;br /&gt;
&lt;br /&gt;
A Load Sharing gate behaves just like BlockSim&#039;s Load Sharing containers for RBDs.  Load Sharing containers were discussed in [[Time-Dependent_System_Reliability_(Analytical)|Time-Dependent System Reliability (Analytical)]] and [[RBDs_and_Analytical_System_Reliability|RBDs and Analytical System Reliability]].  Events leading into a Load Sharing gate have distributions and life-stress relationships, just like contained blocks.  Furthermore, the gate defines the load and the number required to cause the output event (i.e., the Load Sharing gate is defined with a &#039;&#039;k&#039;&#039;-out-of-&#039;&#039;n&#039;&#039; vote).  In BlockSim, no additional gates are allowed below a Load Sharing gate.&lt;br /&gt;
&lt;br /&gt;
====Example====&lt;br /&gt;
{{:Example_Using_Load_Sharing_Gates_in_Fault_Trees}}&lt;br /&gt;
&lt;br /&gt;
{{:Standby Gate Example}}&lt;br /&gt;
&lt;br /&gt;
=Additional Classical Gates and Their Equivalents in BlockSim=&lt;br /&gt;
===Sequence Enforcing Gate===&lt;br /&gt;
Various graphical symbols have been used to represent a Sequence Enforcing gate.  It is a variation of an AND gate in which each item must happen in sequence.  In other words, events are constrained to occur in a specific sequence and the output event occurs if all input events occur in that specified sequence.  This is identical to a cold standby redundant configuration (i.e., &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; units in standby with no quiescent failure distribution and no switch failure probability).  BlockSim does not explicitly provide a Sequence Enforcing gate; however, it can be easily modeled using the more advanced Standby gate, described previously.  &lt;br /&gt;
&lt;br /&gt;
===Inhibit Gate===&lt;br /&gt;
In an Inhibit gate, the output event occurs if all input events occur and an additional conditional event occurs.  It is an AND gate with an additional event.  In reality, an Inhibit gate provides no additional modeling capabilities but is used to illustrate the fact that an additional event must also occur.  As an example, consider the case where events &#039;&#039;A&#039;&#039; and &#039;&#039;B&#039;&#039; must occur as well as a third event &#039;&#039;C&#039;&#039; (the so-called conditional event) in order for the system to fail.  One can represent this in a fault tree by using an AND gate with three events, &#039;&#039;A&#039;&#039;, &#039;&#039;B&#039;&#039; and &#039;&#039;C&#039;&#039;, as shown next.  &lt;br /&gt;
&lt;br /&gt;
[[Image:inhibit_and.png|center|200px|Using an AND gate to represent an inhibit relationship.|link=]]&lt;br /&gt;
&lt;br /&gt;
Classical fault tree diagrams have the conditional event drawn to the side and the gate drawn as a hexagon, as shown next.  &lt;br /&gt;
&lt;br /&gt;
[[Image:11.png|center|200px|Traditional use of an Inhibit gate.|link=]]&lt;br /&gt;
&lt;br /&gt;
It should be noted that both representations are equivalent from an analysis standpoint.&lt;br /&gt;
&lt;br /&gt;
BlockSim explicitly provides an Inhibit gate.  This gate functions just like an AND gate with the exception that failure/repair characteristics can be assigned to the gate itself.  This allows the construction shown above (if the gate itself is set to not fail).  Additionally, one could encapsulate event &#039;&#039;C&#039;&#039; inside the gate (since the gate can have properties), as shown next.  Note that all three figures can be represented using a single RBD with events &#039;&#039;A&#039;&#039;, &#039;&#039;B&#039;&#039; and &#039;&#039;C&#039;&#039; in parallel.&lt;br /&gt;
&lt;br /&gt;
[[Image:10.12.png|center|200px|Including the conditional event inside the Inhibit gate.|link=]]&lt;br /&gt;
&lt;br /&gt;
===Priority AND Gate===&lt;br /&gt;
&lt;br /&gt;
[[Image:10_13.png|center|100px|link=|]]&lt;br /&gt;
&lt;br /&gt;
With a Priority AND gate, the output event occurs only if all input events occur in a specific sequence.  At first, this may seem identical to the Sequence Enforcing gate discussed earlier.  However, it differs in that the input events are not constrained to occur in a specific sequence; rather, the output event occurs only if the sequence happens to be followed.  To better illustrate this, consider the case of two motors in standby configuration with motor &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; being the primary motor and motor &#039;&#039;B&#039;&#039; in standby.  If motor &#039;&#039;A&#039;&#039; fails, then the switch (which can also fail) activates motor &#039;&#039;B&#039;&#039;.  Then the system will fail if motor &#039;&#039;A&#039;&#039; fails and the switch fails to switch, or if the switch succeeds but motor &#039;&#039;B&#039;&#039; fails subsequent to the switching action.  In this scenario, the events must occur in the order noted; however, it is possible for the switch or motor &#039;&#039;B&#039;&#039; to fail (in a quiescent mode) without causing a system failure, if &#039;&#039;A&#039;&#039; never fails.  BlockSim does not explicitly provide a Priority AND gate.  However, like the Sequence Enforcing gate, it can be easily modeled using the more advanced Standby gate.&lt;br /&gt;
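The sequence dependence described above is easiest to see in a small Monte Carlo sketch of the two-motor example. Everything here is an illustrative assumption (exponential lives, a fixed switch success probability on demand, the mission length), not BlockSim's simulation engine.

```python
import random

def priority_and_failure_prob(mission, rate_a, rate_b, p_switch_ok,
                              trials=20000, seed=1):
    """Fraction of simulated missions that fail: motor A must fail
    first, and then either the switch fails on demand or motor B
    fails before the mission ends."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        t_a = rng.expovariate(rate_a)     # life of primary motor A
        if t_a >= mission:
            continue                      # A never fails: system survives
        if rng.random() > p_switch_ok:
            failures += 1                 # switch fails to transfer
            continue
        t_b = rng.expovariate(rate_b)     # life of B after switch-over
        if mission > t_a + t_b:
            failures += 1                 # B fails before mission end
    return failures / trials

p_fail = priority_and_failure_prob(mission=100.0, rate_a=0.01,
                                   rate_b=0.01, p_switch_ok=0.95)
```

Note that the simulation never counts a mission as failed when only the switch or motor B would have failed while A was still running, which is exactly the Priority AND behavior.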
&lt;br /&gt;
===Transfer Gate===&lt;br /&gt;
[[Image:10_14.png|center|100px|link=|]]&lt;br /&gt;
&lt;br /&gt;
Transfer in/out gates are used to indicate a transfer/continuation of one fault tree to another.  In classical fault trees, the Transfer gate is generally used to signify the continuation of a tree on a separate sheet.  This is the same as a subdiagram block in an RBD.  BlockSim does not explicitly provide a Transfer gate.  However, it does allow for subdiagrams (or sub-trees), which provide for greater flexibility.  Additionally, a subdiagram in a BlockSim fault tree can be an RBD and vice versa.  BlockSim uses the more intuitive folder symbol to represent subdiagrams.&lt;br /&gt;
&lt;br /&gt;
[[Image:10_15.png|center|100px|link=|]]&lt;br /&gt;
&lt;br /&gt;
As an example, consider the fault tree of the robot manipulator shown in the first figure (&amp;quot;A&amp;quot;) below. The second figure (&amp;quot;B&amp;quot;) illustrates the same fault tree with the use of subdiagrams (Transfer gates).  The referenced subdiagrams are shown in subsequent figures.  Note that this example uses multiple levels of indenture (i.e., the subdiagrams themselves contain subdiagrams, and so forth).&lt;br /&gt;
&lt;br /&gt;
[[Image:BS10.13.png|center|600px|thumb|A: A sample fault tree for a robot manipulator, showing all items in a single tree.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:14.png|center|450px|thumb|B: The fault tree of Figure A using subdiagrams (Transfer gates). The subdiagrams are shown in Figures &amp;quot;C&amp;quot; and &amp;quot;D&amp;quot;.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:15.png|center|250px|thumb|C: The fault tree of the robot arm mechanism. This subdiagram is referenced in Figure &amp;quot;B&amp;quot;.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:10_16.png|center|400px|thumb|D: The fault tree for the arm jams/collides event. This subdiagram is referenced in Figure &amp;quot;B&amp;quot;. It also includes a subdiagram continuation to Figure &amp;quot;E&amp;quot;.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:10_17.png|center|400px|thumb|E: The brake shutdown event referenced from Figure &amp;quot;D&amp;quot;. It also includes a subdiagram continuation to Figure &amp;quot;F&amp;quot;.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:10_18.png|center|350px|thumb|F: The watchdog ESD fails event referenced from Figure &amp;quot;E&amp;quot;. It also includes a subdiagram continuation to Figure &amp;quot;G&amp;quot;.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:10_19.png|center|400px|thumb|G: The communication fails event referenced from Figure &amp;quot;F&amp;quot;.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The RBD representation of the fault tree shown in the first figure is given in Figure &amp;quot;H&amp;quot;.  This same RBD could have been represented using subdiagrams, as shown in Figure &amp;quot;I&amp;quot;.  In this figure, which is the RBD representation of Figure &amp;quot;B&amp;quot;, the subdiagrams in the RBD link to the fault trees of Figures &amp;quot;D&amp;quot; and &amp;quot;C&amp;quot; and their sub-trees.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:10_20.png|center|500px|thumb|H: This is the RBD equivalent of the complete fault tree of Figure &amp;quot;A&amp;quot;.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:10_21.png|center|350px|thumb|I: The RBD representation of Figure &amp;quot;B&amp;quot; with the subdiagrams in the RBD linked to the fault trees of Figures &amp;quot;C&amp;quot; and &amp;quot;D&amp;quot; and their sub-trees.|link=]]&lt;br /&gt;
&lt;br /&gt;
===XOR Gate===&lt;br /&gt;
&lt;br /&gt;
[[Image:16.png|center|100px|link=]]&lt;br /&gt;
&lt;br /&gt;
In an XOR gate, the output event occurs if exactly one input event occurs.  This is similar to an OR gate, with the exception that the output event does not occur if more than one input event occurs.  For example, with two input events, the XOR gate&#039;s output event occurs if exactly one of the input events occurs, but not if neither or both occur.  From a system reliability perspective, this would imply that a two-component system would function even if both components had failed.  Furthermore, when dealing with time-varying failure distributions, and if system components do not operate through failure, the failure of both components at the exact same instant (within &amp;lt;math&amp;gt;dt\,\!&amp;lt;/math&amp;gt;) is an unreachable state; thus, an OR gate would suffice.  For these reasons, an RBD equivalent of an XOR gate is not presented here and BlockSim does not explicitly provide an XOR gate.&lt;br /&gt;
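For two statistically independent input events, the output probability of the XOR gate follows directly from the description above.  A minimal sketch (the event probabilities are arbitrary, illustrative values):&lt;br /&gt;

```python
def xor_gate(p1, p2):
    """Probability that exactly one of two independent input events occurs."""
    return p1 * (1 - p2) + (1 - p1) * p2

def or_gate(p1, p2):
    """For contrast: an OR gate also counts the case where both events occur."""
    return 1 - (1 - p1) * (1 - p2)

q_xor = xor_gate(0.1, 0.2)  # 0.1*0.8 + 0.9*0.2 = 0.26
q_or = or_gate(0.1, 0.2)    # 0.28; larger by P(both occur) = 0.02
```

The difference between the two gates is exactly the probability that both events occur, which is why the XOR output is always the smaller of the two.&lt;br /&gt;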
&lt;br /&gt;
=Event Classifications=&lt;br /&gt;
Traditional fault trees use different shapes to represent different events.  Unlike gates, however, different events in a fault tree are not treated differently from an analytical perspective.  Rather, the event shapes are used to convey additional information visually.  BlockSim includes some of the main event symbols from classical fault tree analysis and provides utilities for changing the graphical look of a block to illustrate a different type of event.  Some of these event classifications are given next.  From a properties perspective, all events defined in BlockSim can have fixed probabilities, failure distributions, repair distributions, crews, spares, etc.  In other words, fault tree event blocks can have all the properties that an RBD block can have.  This is a significant expansion over traditional fault trees, which generally allow just a fixed probability of occurrence and/or a constant failure rate.&lt;br /&gt;
&lt;br /&gt;
===Basic Event===&lt;br /&gt;
&lt;br /&gt;
[[Image:5.png|center|100px|link=]]&lt;br /&gt;
&lt;br /&gt;
A basic event (or failure event) is identical to an RBD block and has been traditionally represented by a circle.&lt;br /&gt;
&lt;br /&gt;
===Undeveloped Event===&lt;br /&gt;
&lt;br /&gt;
[[Image:diamond.png|center|100px|link=]]&lt;br /&gt;
&lt;br /&gt;
An undeveloped event has the same properties as a basic event with the exception that it is graphically rendered as a diamond.  The diamond representation graphically illustrates that this event could have been expanded into a separate fault tree but was not.  In other words, the analyst uses a different symbol to convey that the event could have been developed (broken down) further but he/she has chosen not to do so for the analysis.&lt;br /&gt;
&lt;br /&gt;
===Trigger Event===&lt;br /&gt;
[[Image:pentagon.png|center|100px|link=]]&lt;br /&gt;
&lt;br /&gt;
A trigger event is an event that can be set to occur or not occur (i.e., it usually has a fixed probability of 0 or 1).  It is usually used to turn paths on or off, or to make paths of a tree functional or non-functional.  Traditionally, the terms failed house and working house have been used to signify probabilities of 0 and 1, respectively.  In BlockSim, a house shape is available for an event; a house-shaped event has the same properties as a basic event, keeping in mind that an event can be set to Cannot Fail or Failed from the block properties window.&lt;br /&gt;
&lt;br /&gt;
===Conditional Event===&lt;br /&gt;
[[Image:oval.png|center|100px|link=]]&lt;br /&gt;
&lt;br /&gt;
A conditional event is represented by an ellipse and specifies a condition.  Again, it has all the properties of a basic event.  It can be applied to any gate.  As an example, event &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; in the first figure below would be the conditional event and it would be represented more applicably by an ellipse than a circle, as shown in the second figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:10.10.png|center|200px|Using an AND gate to represent an inhibit relationship.|link=]]&lt;br /&gt;
&lt;br /&gt;
[[Image:10.22.png|center|200px|Using an ellipse attached to an inhibit gate (with no gate properties) to show the conditional event. This is mathematically equivalent to the figure above.|link=]]&lt;br /&gt;
&lt;br /&gt;
=Comparing Fault Trees and RBDs=&lt;br /&gt;
The most fundamental difference between fault tree diagrams and reliability block diagrams is that you work in the success space in an RBD while you work in the failure space in a fault tree.  In other words, the RBD considers success combinations while the fault tree considers failure combinations.  In addition, fault trees have traditionally been used to analyze fixed probabilities (i.e., each event that comprises the tree has a fixed probability of occurring) while RBDs may include time-varying distributions for the success (reliability equation) and other properties, such as repair/restoration distributions.  In general (and with some specific exceptions), a fault tree can be easily converted to an RBD.  However, it is generally more difficult to convert an RBD into a fault tree, especially if one allows for highly complex configurations.&lt;br /&gt;
&lt;br /&gt;
As you can see from the discussion to this point, an RBD equivalent exists for most of the constructs that are supported by classical fault tree analysis.  With these constructs, you can perform the same powerful system analysis, including simulation, regardless of how you choose to represent the system, thus erasing the distinction between fault trees and reliability block diagrams.&lt;br /&gt;
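The success-space/failure-space duality can be checked numerically: a series RBD corresponds to an OR gate on the failure events, and a parallel RBD corresponds to an AND gate.  A minimal sketch, assuming independent components with arbitrary illustrative reliabilities:&lt;br /&gt;

```python
def series_reliability(rel):
    """RBD success space: a series system works only if every block works."""
    r = 1.0
    for ri in rel:
        r *= ri
    return r

def parallel_reliability(rel):
    """RBD success space: a parallel system works if any block works."""
    q = 1.0
    for ri in rel:
        q *= 1 - ri
    return 1 - q

def or_gate_prob(q):
    """Failure space: an OR gate output occurs if any input event occurs."""
    miss = 1.0
    for qi in q:
        miss *= 1 - qi
    return 1 - miss

def and_gate_prob(q):
    """Failure space: an AND gate output occurs only if all events occur."""
    p = 1.0
    for qi in q:
        p *= qi
    return p

rel = [0.95, 0.90, 0.85]   # illustrative block reliabilities
q = [1 - r for r in rel]   # event probabilities = block unreliabilities
# series RBD <-> OR gate on failures; parallel RBD <-> AND gate on failures
```

This is simply De Morgan&#039;s law applied to the structure function: &amp;quot;all blocks work&amp;quot; is the complement of &amp;quot;any failure event occurs.&amp;quot;&lt;br /&gt;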
&lt;br /&gt;
{{:Same Example Modeled with RBDs or Fault Trees}}&lt;br /&gt;
&lt;br /&gt;
=Using Mirrored Blocks to Represent Complex RBDs as FTDs=&lt;br /&gt;
A fault tree cannot normally represent a complex RBD.  As an example, consider the RBD shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:10.41.png|center|400px|A complex RBD that cannot be represented by a fault tree unless duplicate events are utilized.|link=]]&lt;br /&gt;
&lt;br /&gt;
A fault tree representation for this RBD is:&lt;br /&gt;
&lt;br /&gt;
[[Image:10.42.png|center|350px|A fault tree representation using mirrored blocks (events) of the complex RBD.|link=]]&lt;br /&gt;
&lt;br /&gt;
Note that the same event is used more than once in the fault tree diagram.  To correctly analyze this, the duplicate events need to be set up as &amp;quot;mirrored&amp;quot; events to the parent event.  In other words, the same event is represented in two locations in the fault tree diagram.  It should be pointed out that the RBD in the following figure is also equivalent to the RBD shown earlier and the fault tree of the figure shown above.&lt;br /&gt;
&lt;br /&gt;
[[Image:10.43.png|center|550px|An RBD using mirrored blocks that is equivalent to both the RBD and FTD.|link=]]&lt;br /&gt;
&lt;br /&gt;
=Fault Trees and Simulation=&lt;br /&gt;
&lt;br /&gt;
The slightly modified constructs in BlockSim erase the distinction between RBDs and fault trees.  Given this, any analysis that is possible in a BlockSim RBD (including [[Additional_Analyses#Throughput_Analysis|throughput analysis]]) is also available when using fault trees.&lt;br /&gt;
&lt;br /&gt;
As an example, consider the RBD shown in the first figure below and its equivalent fault tree representation, as shown in the second figure.  &lt;br /&gt;
&lt;br /&gt;
[[Image:10.44.png|center|450px|RBD for a repairable system.|link=]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:10.45.png|center|450px|Fault tree equivalent of the repairable system shown in figure above.|link=]]&lt;br /&gt;
&lt;br /&gt;
Furthermore, assume the following basic failure and repair properties for each block and event:&lt;br /&gt;
&lt;br /&gt;
:*Block &#039;&#039;A&#039;&#039;:&lt;br /&gt;
::*Failure Distribution: Weibull; &amp;lt;math&amp;gt;\beta = 1/5\,\!&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\eta = 1,000\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
::*Corrective Distribution: Weibull; &amp;lt;math&amp;gt;\beta = 1.5\,\!&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\eta = 100\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:*Block &#039;&#039;B&#039;&#039;:&lt;br /&gt;
::*Failure Distribution: Exponential; &amp;lt;math&amp;gt;\mu = 10,000\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
::*Corrective Distribution: Weibull; &amp;lt;math&amp;gt;\beta = 1.5\,\!&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\eta = 20\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:*Block &#039;&#039;C&#039;&#039;:&lt;br /&gt;
::*Failure Distribution: Normal; &amp;lt;math&amp;gt;\mu = 1,000\,\!&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\sigma = 200\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
::*Corrective Distribution: Normal; &amp;lt;math&amp;gt;\mu = 6\,\!&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\sigma = 2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:*Block &#039;&#039;D&#039;&#039;:&lt;br /&gt;
::*Failure Distribution: Weibull; &amp;lt;math&amp;gt;\beta = 1.5\,\!&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\eta = 10,000\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
::*Corrective Distribution: Exponential; &amp;lt;math&amp;gt;\mu = 10\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:*Block &#039;&#039;E&#039;&#039;:&lt;br /&gt;
::*Failure Distribution: Weibull; &amp;lt;math&amp;gt;\beta = 3\,\!&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\eta = 1,000\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
::*Corrective Distribution: Weibull; &amp;lt;math&amp;gt;\beta = 1.5\,\!&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\eta = 20\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:*Block &#039;&#039;F&#039;&#039;:&lt;br /&gt;
::*Failure Distribution: Weibull; &amp;lt;math&amp;gt;\beta = 1.5\,\!&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\eta = 5,000\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
::*Corrective Distribution: Weibull; &amp;lt;math&amp;gt;\beta = 1.5\,\!&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\eta = 100\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:*Block &#039;&#039;G&#039;&#039;:&lt;br /&gt;
::*Failure Distribution: Exponential; &amp;lt;math&amp;gt;\mu = 100,000\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
::*Corrective Distribution: Weibull; &amp;lt;math&amp;gt;\beta = 1.5\,\!&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\eta = 10\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
:*Block &#039;&#039;H&#039;&#039;:&lt;br /&gt;
::*Failure Distribution: Normal; &amp;lt;math&amp;gt;\mu = 5,000\,\!&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\sigma = 50\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
::*Corrective Distribution: Normal; &amp;lt;math&amp;gt;\mu = 10\,\!&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\sigma = 2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
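The failure distributions listed above can be sampled directly with Python&#039;s standard library, as a rough sketch of what one simulation pass draws (random.weibullvariate takes the scale eta first and then the shape beta; expovariate takes the rate 1/mu); block &#039;&#039;A&#039;&#039;&#039;s beta = 1/5 is used as listed:&lt;br /&gt;

```python
import random

rng = random.Random(7)  # fixed seed, analogous to the identical-seed runs

# one first-failure-time draw per block, per the distributions listed above
samplers = {
    "A": lambda: rng.weibullvariate(1000, 1 / 5),
    "B": lambda: rng.expovariate(1 / 10000),
    "C": lambda: rng.normalvariate(1000, 200),
    "D": lambda: rng.weibullvariate(10000, 1.5),
    "E": lambda: rng.weibullvariate(1000, 3),
    "F": lambda: rng.weibullvariate(5000, 1.5),
    "G": lambda: rng.expovariate(1 / 100000),
    "H": lambda: rng.normalvariate(5000, 50),
}
times = {name: draw() for name, draw in samplers.items()}
```

A full availability simulation would also draw the corrective (repair) times and alternate failure/repair cycles for each block; this sketch shows only the first failure draw.&lt;br /&gt;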
&lt;br /&gt;
A sample table of simulation results is given next for up to &amp;lt;math&amp;gt;t=1,000\,\!&amp;lt;/math&amp;gt;, using &amp;lt;math&amp;gt;2,000\,\!&amp;lt;/math&amp;gt; simulations for each diagram and an identical seed.&lt;br /&gt;
&lt;br /&gt;
[[Image:FT and RBD example1.png|center|400px|link=|]]&lt;br /&gt;
&lt;br /&gt;
As expected, the results are equivalent (within the expected difference due to simulation) regardless of the diagram type used.  It should be pointed out that even though the same seed was used for both diagrams, the results are not always expected to be identical: the order in which the blocks are read from a fault tree diagram during the simulation may differ from the order in which they are read from the RBD, so each block may draw from a different random number stream (e.g., block &#039;&#039;G&#039;&#039; in the RBD may receive a different sequence of random numbers than event block &#039;&#039;G&#039;&#039; in the fault tree).&lt;br /&gt;
&lt;br /&gt;
=Additional Fault Tree Topics=&lt;br /&gt;
&lt;br /&gt;
===Minimal Cut Sets===&lt;br /&gt;
&lt;br /&gt;
Traditional solution of fault trees involves the determination of so-called &#039;&#039;minimal cut sets&#039;&#039;.  Minimal cut sets are all the unique combinations of component failures that can cause system failure.  Specifically, a cut set is said to be a minimal cut set if, when any basic event is removed from the set, the remaining events collectively are no longer a cut set, as discussed in Kececioglu [[Appendix_B:_References | [10]]].  As an example, consider the fault tree shown in the figure below.  The system will fail if {1, 2, 3 and 4 fail} or {1, 2 and 3 fail} or {1, 2 and 4 fail}.&lt;br /&gt;
&lt;br /&gt;
[[Image:9.png|center|250px|Minimal cut set example.|link=]]&lt;br /&gt;
&lt;br /&gt;
All of these are cut sets.  However, the one including all components is not a minimal cut set because if either 3 or 4 is removed, the remaining events still form a cut set.  Therefore, the minimal cut sets for this configuration are {1, 2, 3} and {1, 2, 4}.  This may be more evident by examining the RBD equivalent of the figure above, as shown in the figure below.&lt;br /&gt;
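For a tree this small, the minimal cut sets can be found by brute-force enumeration of the structure function.  A minimal sketch, assuming the tree above reduces to top = 1 AND 2 AND (3 OR 4):&lt;br /&gt;

```python
from itertools import combinations

def system_fails(failed):
    # structure function of the example tree: 1 AND 2 AND (3 OR 4)
    return 1 in failed and 2 in failed and (3 in failed or 4 in failed)

events = [1, 2, 3, 4]
cut_sets = [set(c) for r in range(1, len(events) + 1)
            for c in combinations(events, r) if system_fails(set(c))]
# a cut set is minimal if no proper subset of it is also a cut set
minimal = [c for c in cut_sets if not any(o < c for o in cut_sets)]
# minimal -> [{1, 2, 3}, {1, 2, 4}]
```

Enumeration is exponential in the number of events, which is why dedicated cut-set algorithms are used for large trees.&lt;br /&gt;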
&lt;br /&gt;
[[Image:BS10.png|center|350px|RBD of the fault tree shown in figure above.|link=]]&lt;br /&gt;
&lt;br /&gt;
BlockSim does not use the cut sets methodology when analyzing fault trees. However, interested users can obtain these cut sets for both fault trees and block diagrams with the command available in the Analysis Ribbon.&lt;/div&gt;</summary>
		<author><name>Sam Eisenberg</name></author>
	</entry>
</feed>