Operational Mission Profile Testing


New format available! This reference is now available in a new format that offers faster page load, improved display for calculations and images, more targeted search and the latest content available as a PDF. As of September 2023, this Reliawiki page will not continue to be updated. Please update all links and bookmarks to the latest reference at help.reliasoft.com/reference/reliability_growth_and_repairable_system_analysis

Chapter 5: Operational Mission Profile Testing


It is common practice for systems to be subjected to operational testing during a development program. The objective of this testing is to evaluate the performance of the system, including reliability, under conditions that represent actual use. Because of budget, resources, schedule and other considerations, these operational tests rarely match the actual use conditions exactly. Usually, stated mission profile conditions are used for operational testing. These mission profile conditions are typically general statements that guide testing on an average basis. For example, a copier might be required to print 3,000 pages by time T=10 days and 5,000 pages by time T=15 days. In addition, the copier might be required to scan 200 documents by time T=10 days, 250 documents by time T=15 days, and so on.
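As a small illustration, the stated copier conditions can be written down as cumulative targets and compared against the as-tested totals at each checkpoint. The short Python sketch below does this bookkeeping; only the targets come from the statement above, and the as-tested figures are hypothetical.

    # Copier mission profile from the text, expressed as cumulative targets by day.
    mission_profile = {
        10: {"pages_printed": 3000, "documents_scanned": 200},
        15: {"pages_printed": 5000, "documents_scanned": 250},
    }

    # Hypothetical as-tested totals recorded at the same checkpoints.
    as_tested = {
        10: {"pages_printed": 3150, "documents_scanned": 170},
        15: {"pages_printed": 5020, "documents_scanned": 255},
    }

    for day, targets in mission_profile.items():
        for element, required in targets.items():
            achieved = as_tested[day][element]
            status = "meets target" if achieved >= required else "BELOW target"
            print(f"T={day} days: {element} {achieved}/{required} -> {status}")

With these hypothetical totals, printing is ahead of schedule at T=10 days while scanning is behind, which is exactly the kind of imbalance discussed below.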

Because of practical constraints, these full mission profile conditions are typically not repeated one after the other during testing. Instead, the elements that make up the mission profile conditions are tested under varying schedules with the intent that, on average, the mission profile conditions are met. In practice, reliability corrective actions are generally incorporated into the system as a result of this type of testing.

Because of a lack of structure for managing the elements that make up the mission profile, it is difficult to have an agreed-upon methodology for estimating the system's reliability. Many systems fail operational testing because key assessments, such as growth potential and projections, cannot be made in a straightforward manner that allows management to take appropriate action. The RGA software addresses this issue by incorporating a systematic mission profile methodology for operational reliability testing and reliability growth assessments.

Introduction

Operational testing is an attempt to subject the system to conditions close to the actual environment that is expected under customer use. Often this is an extension of reliability growth testing, where operation-induced failure modes and corrective actions are of prime interest. Sometimes the stated intent is a demonstration test where corrective actions are not the prime objective. However, it is not unusual for a system to fail the demonstration test, and the management issue is then what to do next. In both cases, important and valid key parameters are needed to properly assess the situation and make cost-effective and timely decisions. This is often difficult in practice.

For example, a system may be required to:

  • Conduct a specific task a fixed number of times for each hour of operation (task 1).
  • Move a fixed number of miles under a specific operating condition for each hour of operation (task 2).
  • Move a fixed number of miles under another operating condition for each hour of operation (task 3).

During operational testing, these guidelines are met individually as averages. For example, the actual as-tested profile for task 1 may not be uniform relative to the stated mission guidelines during the testing. Often, some of the tasks (for example, task 1) are operated below the stated guidelines, which can mask a major reliability problem. In other cases, tasks 1, 2 and 3 might never meet their stated averages during the testing, except perhaps at the end of the test. This becomes an issue because an important aspect of effective reliability risk management is not to wait until the end of the test for an assessment of the reliability performance.

Because the elements of the mission profile during the testing will rarely, if ever, balance continuously to the stated averages, a common analysis method is to piece the reliability assessments together by evaluating each element of the profile separately. This is not a well-defined methodology and does not account for improvement during the testing. It is therefore not unusual for two separate organizations (e.g., the customer and the developer) to analyze the same data and obtain different MTBF numbers. In addition, this method does not address the delayed corrective actions that are to be incorporated at the end of the test nor does it estimate growth potential or interaction effects. Therefore, to reduce this risk there is a need for a rigorous methodology for reliability during operational testing that does not rely on piecewise analysis and avoids the issues noted above.

The RGA software incorporates a new methodology to manage system reliability during operational mission profile testing. This methodology draws information from particular plots of the operational test data and inserts key information into a growth model. The improved methodology does not piece the analysis together, but gives a direct mission profile MTBF estimate of the system's reliability that is directly compared to the MTBF requirement. The methodology reflects any reliability growth improvement during the test, and also gives management a higher projected MTBF for the system mission profile reliability after delayed corrective actions are incorporated at the end of the test. In addition, the methodology gives an estimate of the system's growth potential, and provides management metrics to evaluate whether changes in the program need to be made. A key advantage is that the methodology is well-defined, so all organizations will arrive at the same reliability assessment with the same data.
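To make these management metrics concrete, the sketch below shows how a demonstrated MTBF, a projected MTBF (after the delayed corrective actions are incorporated) and a growth potential MTBF might be computed and compared against the requirement, assuming the standard Crow extended model projection relationships; the Crow extended model itself is introduced in the Testing Methodology section below. Every number in the sketch (test time, fitted parameters, BD-mode counts, effectiveness factors, requirement) is hypothetical.

    T = 700.0                      # total test time, hours (hypothetical)
    lam, beta = 0.35, 0.78         # hypothetical Crow-AMSAA parameters fitted to all failures
    mtbf_requirement = 18.0        # hypothetical mission profile MTBF requirement, hours

    demonstrated_intensity = lam * beta * T ** (beta - 1.0)

    # Distinct delayed-fix (BD) modes seen in test: (occurrences, effectiveness factor).
    bd_modes = [(3, 0.70), (2, 0.80), (1, 0.60), (1, 0.75)]

    # Hypothetical Crow-AMSAA fit to the first-occurrence times of distinct BD modes; h_T is
    # the rate at which new BD modes are still being discovered at the end of the test.
    lam_bd, beta_bd = 0.15, 0.62
    h_T = lam_bd * beta_bd * T ** (beta_bd - 1.0)

    removed = sum(d * n / T for n, d in bd_modes)        # intensity removed by the delayed fixes
    d_bar = sum(d for _, d in bd_modes) / len(bd_modes)  # average effectiveness factor

    projected_intensity = demonstrated_intensity - removed + d_bar * h_T
    growth_potential_intensity = demonstrated_intensity - removed

    print(f"Demonstrated MTBF     : {1 / demonstrated_intensity:6.1f} hours")
    print(f"Projected MTBF        : {1 / projected_intensity:6.1f} hours")
    print(f"Growth potential MTBF : {1 / growth_potential_intensity:6.1f} hours")
    print("Projection meets requirement:", 1 / projected_intensity >= mtbf_requirement)

The point of the comparison is the one made above: the same well-defined calculation, applied to the same data, gives every organization the same demonstrated, projected and growth potential values to weigh against the requirement.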

Testing Methodology

The methodology described here uses the Crow extended model for data analysis. In order to have valid Crow extended model assessments, the operational mission profile must be conducted in a structured manner. Therefore, this testing methodology involves convergence and stopping points during the testing. A stopping point is a time when the testing is stopped for the express purpose of incorporating the type BD delayed corrective actions. There may be more than one stopping point during a particular testing phase. For simplicity, the methodology is described with only one stopping point; however, it can be extended to the case of more than one stopping point. A convergence point is a time during the test when all the operational mission profile tasks meet their expected averages or fall within an acceptable range. At least three convergence points are required for a well-balanced test. The end of the test, time [math]\displaystyle{ T\,\! }[/math], must be a convergence point. The test times between the convergence points do not have to be the same.
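For illustration, the sketch below shows one way a convergence point might be screened from a test log: a checkpoint qualifies when every mission profile task is within an acceptable range of its stated per-hour average. The task names, stated averages, tolerance and checkpoint log are all hypothetical and are not part of the methodology itself; failures recorded between consecutive convergence points then become the grouped-data intervals used in the analysis described next.

    # Hypothetical stated mission profile averages (per hour of operation) and tolerance.
    REQUIRED_PER_HOUR = {"task1": 4.0, "task2": 2.5, "task3": 1.5}
    TOLERANCE = 0.10  # +/-10% acceptable range around each stated average

    # Hypothetical checkpoint log: (cumulative operating hours, cumulative usage per task).
    checkpoints = [
        (100.0, {"task1": 350.0, "task2": 260.0, "task3": 145.0}),
        (200.0, {"task1": 790.0, "task2": 515.0, "task3": 305.0}),
        (450.0, {"task1": 1810.0, "task2": 1120.0, "task3": 690.0}),
        (700.0, {"task1": 2805.0, "task2": 1740.0, "task3": 1050.0}),
    ]

    def is_convergence_point(hours, usage):
        """True when every task's achieved per-hour rate is within tolerance of its average."""
        for task, rate in REQUIRED_PER_HOUR.items():
            if abs(usage[task] / hours - rate) / rate > TOLERANCE:
                return False
        return True

    convergence_times = [h for h, u in checkpoints if is_convergence_point(h, u)]
    print("Convergence points (hours):", convergence_times)  # [200.0, 450.0, 700.0]

With this hypothetical log, the 100-hour checkpoint fails (task 1 is under-exercised) while the 200-, 450- and 700-hour checkpoints qualify, giving the three convergence points, including the end of test, that a well-balanced test requires.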

The objective of having the convergence points is to be able to apply the Crow extended model directly, in such a way that the projection and other key reliability growth parameters can be estimated in a valid fashion. To do this, the grouped data methodology is applied. Note that the methodology can also be used with the Crow-AMSAA (NHPP) model for a simpler analysis, without the ability to estimate projected and growth potential reliability. See the grouped data analysis for the Crow-AMSAA (NHPP) model or for the Crow extended model.
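As an example of the simpler Crow-AMSAA (NHPP) route mentioned above, the grouped-data maximum likelihood estimates can be computed once failures have been tallied between consecutive convergence points. The sketch below is a minimal, self-contained illustration with hypothetical interval end times and failure counts; it solves the standard grouped-data likelihood equation for beta by bisection and then reports lambda and the instantaneous (demonstrated) MTBF at the end of the test.

    import math

    def crow_amsaa_grouped_mle(interval_ends, counts, tol=1e-10):
        """MLEs (beta, lam) for Crow-AMSAA grouped data: counts[i] failures in (T_{i-1}, T_i]."""
        T = [0.0] + list(interval_ends)
        n = list(counts)
        N = sum(n)
        Tk = T[-1]

        def score(beta):
            # Derivative of the profile log-likelihood in beta; its root is the MLE of beta.
            s = 0.0
            for i in range(1, len(T)):
                lo, hi = T[i - 1], T[i]
                num = hi**beta * math.log(hi) - (lo**beta * math.log(lo) if lo > 0 else 0.0)
                s += n[i - 1] * num / (hi**beta - lo**beta)
            return s - N * math.log(Tk)

        # Simple bisection over a bracket wide enough for growth or deterioration.
        a, b = 1e-3, 10.0
        fa = score(a)
        while b - a > tol:
            mid = 0.5 * (a + b)
            fm = score(mid)
            if fa * fm <= 0:
                b = mid
            else:
                a, fa = mid, fm
        beta = 0.5 * (a + b)
        return beta, N / Tk**beta

    # Hypothetical grouped data: failures between convergence points at 200, 450 and 700 hours.
    beta, lam = crow_amsaa_grouped_mle([200.0, 450.0, 700.0], [12, 9, 7])
    mtbf_T = 1.0 / (lam * beta * 700.0 ** (beta - 1.0))  # instantaneous MTBF at end of test
    print(f"beta = {beta:.3f}, lambda = {lam:.4f}, MTBF(T) = {mtbf_T:.1f} hours")

A value of beta below 1 indicates that the failure intensity is decreasing, that is, reliability growth is occurring during the test.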

Example


More mission profile examples are available! See also: Mission Profile Testing