Template:Eyring-weib bounds on parameters: Difference between revisions


Revision as of 23:59, 27 February 2012

Bounds on the Parameters


From the asymptotically normal property of the maximum likelihood estimators, and since [math]\displaystyle{ \widehat{\beta } }[/math] is a positive parameter, [math]\displaystyle{ \ln (\widehat{\beta }) }[/math] can then be treated as normally distributed. After performing this transformation, the bounds on the parameters are estimated from:


[math]\displaystyle{ \begin{align} {{\beta }_{U}}&=\widehat{\beta }\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}} \\ {{\beta }_{L}}&=\widehat{\beta }\cdot {{e}^{-\tfrac{{{K}_{\alpha }}\sqrt{Var(\widehat{\beta })}}{\widehat{\beta }}}} \end{align} }[/math]
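The log-transformed bounds above can be sketched numerically. This is a minimal illustration only: the MLE [math]\displaystyle{ \widehat{\beta } }[/math], its variance, and the confidence level are all hypothetical placeholder values, not results from any actual data set.

```python
import math

# Hypothetical inputs (illustrative only, not from real data):
beta_hat = 1.8    # assumed MLE of the Weibull shape parameter beta
var_beta = 0.04   # assumed Var(beta-hat), taken from the Fisher matrix
K_alpha = 1.96    # standard normal quantile for a two-sided 95% level

# Because ln(beta-hat) is treated as normal, the bounds are
# multiplicative: beta-hat times exp(+/- K_alpha*sqrt(Var)/beta-hat).
half_width = K_alpha * math.sqrt(var_beta) / beta_hat
beta_U = beta_hat * math.exp(half_width)
beta_L = beta_hat * math.exp(-half_width)
```

Note that these bounds are symmetric on the log scale, so the product of the two bounds equals [math]\displaystyle{ {{\widehat{\beta }}^{2}} }[/math], and the lower bound can never go negative.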


also:


[math]\displaystyle{ \begin{align} {{A}_{U}}&=\widehat{A}+{{K}_{\alpha }}\sqrt{Var(\widehat{A})} \\ {{A}_{L}}&=\widehat{A}-{{K}_{\alpha }}\sqrt{Var(\widehat{A})} \end{align} }[/math]


and:


[math]\displaystyle{ \begin{align} {{B}_{U}}&=\widehat{B}+{{K}_{\alpha }}\sqrt{Var(\widehat{B})} \\ {{B}_{L}}&=\widehat{B}-{{K}_{\alpha }}\sqrt{Var(\widehat{B})} \end{align} }[/math]
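Unlike [math]\displaystyle{ \beta }[/math], the parameters [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] are not restricted to positive values, so their bounds are the usual additive normal intervals. A brief sketch with hypothetical MLEs and variances (placeholder numbers, not fitted results):

```python
import math

# Hypothetical inputs (illustrative only):
K_alpha = 1.96              # two-sided 95% standard normal quantile
A_hat, var_A = -10.5, 0.25  # assumed MLE of A and Var(A-hat)
B_hat, var_B = 1200.0, 400.0  # assumed MLE of B and Var(B-hat)

# Additive bounds: estimate +/- K_alpha * standard error.
A_U = A_hat + K_alpha * math.sqrt(var_A)
A_L = A_hat - K_alpha * math.sqrt(var_A)
B_U = B_hat + K_alpha * math.sqrt(var_B)
B_L = B_hat - K_alpha * math.sqrt(var_B)
```

These intervals are symmetric about the point estimates, which is appropriate here because no positivity constraint forces a log transformation.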



The variances and covariances of [math]\displaystyle{ \beta , }[/math] [math]\displaystyle{ A, }[/math] and [math]\displaystyle{ B }[/math] are estimated from the Fisher matrix (evaluated at [math]\displaystyle{ \widehat{\beta }, }[/math] [math]\displaystyle{ \widehat{A}, }[/math] [math]\displaystyle{ \widehat{B}) }[/math] as follows:


[math]\displaystyle{ \left[ \begin{matrix} Var(\widehat{\beta }) & Cov(\widehat{\beta },\widehat{A}) & Cov(\widehat{\beta },\widehat{B}) \\ Cov(\widehat{A},\widehat{\beta }) & Var(\widehat{A}) & Cov(\widehat{A},\widehat{B}) \\ Cov(\widehat{B},\widehat{\beta }) & Cov(\widehat{B},\widehat{A}) & Var(\widehat{B}) \\ \end{matrix} \right]={{\left[ \begin{matrix} -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\beta }^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial A} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \beta \partial B} \\ -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial \beta } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{A}^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial A\partial B} \\ -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial \beta } & -\tfrac{{{\partial }^{2}}\Lambda }{\partial B\partial A} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{B}^{2}}} \\ \end{matrix} \right]}^{-1}} }[/math]
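The inversion of the Fisher matrix can be sketched as follows. The matrix entries below are hypothetical stand-ins for the negative second partial derivatives of the log-likelihood [math]\displaystyle{ \Lambda }[/math] evaluated at the MLEs; in practice they would come from the fitted model.

```python
import numpy as np

# Hypothetical local Fisher information matrix: the negative Hessian of
# the log-likelihood Lambda with respect to (beta, A, B), evaluated at
# the MLEs. The numbers are illustrative placeholders only.
fisher = np.array([
    [20.0, -1.5,  0.8],
    [-1.5, 12.0, -2.0],
    [ 0.8, -2.0,  9.0],
])

# The variance-covariance matrix is the inverse of the Fisher matrix.
cov = np.linalg.inv(fisher)

# The diagonal gives the variances used in the bound formulas above;
# the off-diagonal entries are the covariances.
var_beta, var_A, var_B = np.diag(cov)
```

For a valid (positive definite) Fisher matrix, the resulting variances on the diagonal are strictly positive, and the covariance matrix inherits the symmetry of the Hessian.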