eXpress provides the user with an advanced approach to development diagnostics that meets the requirements of diagnostic engineering.
Assessing the Testability of your system means more than simply stating Detection and Isolation statistics. It is also about the verification of the design concept itself. Through the most extensive Testability reports in the industry, eXpress can provide the data that supports nearly any analysis.
DSI’s unique integration of diagnostics, FMECA, FTA and simulation provides a comprehensive look at how different diagnostic strategies and methods impact system safety, availability and cost of ownership.
The “Fault Group Mitigation” report provides a clear method of analyzing fault groups by displaying them in descending order based on their impact on testability, system safety, cost to repair and time to repair.
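The ranking idea behind such a report can be sketched in a few lines. This is a minimal, hypothetical illustration of ordering fault groups by combined impact; the field names, weighting scheme and data structures are assumptions for the example, not eXpress’s actual report schema.

```python
# Illustrative sketch: rank fault groups in descending order of combined
# impact (testability, safety, repair cost, repair time). Field names and
# the equal-weight scoring are assumptions, not the eXpress implementation.
from dataclasses import dataclass

@dataclass
class FaultGroup:
    name: str
    testability_impact: float   # e.g., contribution to non-detection (0..1)
    safety_impact: float        # normalized severity (0..1)
    repair_cost: float          # currency units, normalized below
    repair_time: float          # hours, normalized below

def impact_score(fg, max_cost, max_time):
    # Equal weighting of the four report criteria; a real analysis would
    # tune these weights to program priorities.
    return (fg.testability_impact + fg.safety_impact
            + fg.repair_cost / max_cost + fg.repair_time / max_time)

def rank_fault_groups(groups):
    max_cost = max(g.repair_cost for g in groups)
    max_time = max(g.repair_time for g in groups)
    return sorted(groups, key=lambda g: impact_score(g, max_cost, max_time),
                  reverse=True)
```

Any weighted-sum ranking like this is only as good as its weights; the point of the report is that the ordering is derived from the design data rather than assigned by hand.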
“False Alarms” need to be identified more precisely as failure effects at the level of analysis (or lowest level of replacement) and then mapped to the fully-fielded, integrated system level as a precursor to providing more reliable detection, isolation and corrective action. Describing a failure effect as alarm-generating, “intended to be” detected by the sensor(s) within a subsystem, is only a first step toward making the alarm equally discernible at the system level – that is, in and around any diagnostic or hardware design interference that may occur at the time of the alarm.
So, precision in an algorithm that allows for non-linear failure progression is nice, but that narrow focus still dismisses the larger issues of concern for the fielded system.
In a high-end diagnostic design development process, the algorithms used to represent failure progression, or to estimate RUL, can be made useful beyond the laboratory and the controlled environment – but only if one is truly concerned about the effects at the system level, over time, considering the impact of maintenance on system failure characteristics and the design changes that occur during a fielded system’s lifecycle.
When we consider such factors along with our failure algorithm computation, we will soon be introduced to new areas such as “diagnostic false alarms” – that is, determining which of the false alarms are the result of insufficient or erroneous diagnostics or sensing mechanisms (which often carry a higher failure rate than the items they are sensing).
Next, we may wish to consider the ability to discern “True Alarms” from “False Alarms” – and discover the rate of occurrences over time, given a maintained fielded system. To understand this distinction, we must be able to discern whether our fault detection and fault isolation capability can “uniquely isolate” to the failure(s) within the fully-fielded, integrated complex system(s). eXpress discovers this in the design through a powerful utility called “FMECA Plus” (http://www.dsiintl.com/Products/fmeca.aspx). As the eXpress diagnostic capability is leveraged along with the design’s FMECA data, the FMECA Plus output presents the root cause of the failure in terms of whether the Failure Effect or lost function is grouped with any other functions, based upon the constraints of the detection or the diagnostic method employed (sensor, BIT, ATE, etc.). If the diagnostic design is such that the root cause of the failure can be indicted “uniquely” from any other lost function(s), then this Failure Effect is deemed to be “uniquely isolated”.
Conversely, if the lost function is only detected in ambiguity with other lost function(s), then, due to these constraints of the design’s diagnostic capability, the failure is NOT able to be “uniquely isolated” from any other possible failures at that level of test. Diagnostics burdened by impurities such as this inability to “uniquely isolate” failures can have a dramatic impact on lifecycle sustainment costs in virtually every diagnostic category: Ownership Cost, Safety, Availability and Operational or Mission Success. The diagnostic impact can be simulated over any desired sustainment lifecycle, and the impact in these areas can be presented in many graphs. These results can be easily observed in the companion Operational Simulation and Health Management tool, “STAGE”.
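The notion of “unique isolation” can be made concrete with a small sketch. Here each failure mode has a diagnostic signature – the set of tests or sensors that detect it – and failure modes sharing an identical signature fall into the same ambiguity (fault) group. The data and function names are illustrative assumptions, not an eXpress or FMECA Plus interface.

```python
# Minimal sketch of "unique isolation" from a detection (D) matrix:
# a failure mode is uniquely isolated only when no other failure mode
# shares its exact detection signature. Illustrative only.
from collections import defaultdict

def ambiguity_groups(d_matrix):
    """d_matrix: {failure_mode: frozenset of tests that detect it}."""
    groups = defaultdict(list)
    for fm, signature in d_matrix.items():
        groups[signature].append(fm)   # identical signatures -> same group
    return list(groups.values())

def uniquely_isolated(d_matrix):
    # A mode alone in its group, with a non-empty signature, is uniquely
    # isolated; undetected modes (empty signature) are excluded.
    return {fms[0] for fms in ambiguity_groups(d_matrix)
            if len(fms) == 1 and d_matrix[fms[0]]}
```

In this framing, adding a test that splits an ambiguity group is exactly what moves a failure from “detected in ambiguity” to “uniquely isolated”.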
As we are presented with an open canvas of new diagnostic assessment possibilities, we will become fluent with terms such as “True Mission Aborts” and “False Mission Aborts”, as the use of these terms depends upon truly understanding the impact of failure effects at the system level, over time, on a dynamic and maintained system – also known as reality.
Yet this is just the very beginning of the many data analytics that can present the impact of leveraging the diagnostic integrity of the design with the other Reliability and Maintenance Engineering design disciplines.
Discovering Powerful Simulation-Based Analytics for the Product Sustainment Lifecycle
What we have learned by augmenting RCM (Reliability Engineering) with Diagnostic Engineering (the concept of fault groups) and Maintenance Engineering (replacement/remediation activities) in a simulation that combines all of these disciplines is that some maintenance (replacements) can and does prevent other future failures. For components that fail more frequently and are detectable, combining the replacement of a failed component with the replacement of non-failed component(s) in the same fault group can actually increase availability.
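The trade described above – fewer downtime events in exchange for extra replacements – can be shown with a toy simulation. This is a deterministic sketch with fixed component lifetimes and a hypothetical two-component fault group; it is not STAGE, and real operational simulation would of course use stochastic failure distributions.

```python
# Toy sketch: two maintenance policies for components in one fault group.
#   group_replace=False -> replace only the failed component
#   group_replace=True  -> replace every component in the fault group
# Fixed (deterministic) lifetimes are an illustrative assumption.
def simulate(lifetimes, horizon, group_replace):
    """Returns (downtime_events, total_replacements) over the horizon."""
    ages = {c: 0.0 for c in lifetimes}
    t, events, replacements = 0.0, 0, 0
    while True:
        # Advance to the next failure of any component in the group.
        dt = min(lifetimes[c] - ages[c] for c in ages)
        t += dt
        if t > horizon:
            break
        events += 1   # each failure occasion is one downtime event
        failed = [c for c in ages
                  if abs(lifetimes[c] - (ages[c] + dt)) < 1e-9]
        for c in ages:
            ages[c] += dt
        to_replace = list(ages) if group_replace else failed
        for c in to_replace:
            ages[c] = 0.0
            replacements += 1
    return events, replacements
```

With lifetimes of 5 and 7 time units over a horizon of 20, replacing the whole fault group on each failure reduces the downtime events from six to four, at the cost of more (“false”) removals – exactly the benefit-versus-cost tension the simulation is meant to expose.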
A simulation that considers maintenance practices based upon the constraints of isolation capability at the system level will expose the benefits, as well as the costs, of presumed-negative characteristics such as “false removals” vs. “true removals” or “cost of replacement” vs. “extra cost(s) of replacement(s)”, etc.
Of course, the Risk Priority Number (RPN) analysis charts are generated from the design as a turnkey output, just like a host of other turnkey chart outputs that allow anyone to readily discern whether any function (and/or failure effect) is observed and “uniquely isolated” at the System level, regardless of any prior maintenance activities or design modifications incurred along the way. So, if we wish to use an RPN chart because we are familiar with its importance, then we can do that as well. Other programs may require a host of additional diagnostic (sustainment) outputs, which can be pulled from this diagnostic design database contained and managed within the System model.
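For readers less familiar with RPN charts, the underlying number is the conventional FMEA product of Severity, Occurrence and Detection ratings, each typically scored 1–10. The failure-mode names and ratings below are made up for illustration; only the formula is standard.

```python
# Conventional FMEA Risk Priority Number:
#   RPN = Severity x Occurrence x Detection, each rated 1-10.
# The failure modes and ratings below are illustrative data only.
def rpn(severity, occurrence, detection):
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings are conventionally 1-10")
    return severity * occurrence * detection

failure_modes = [
    # (name, severity, occurrence, detection)
    ("Pump seal leak", 7, 4, 3),
    ("Sensor drift",   4, 6, 8),
    ("Relay weld",     9, 2, 5),
]
# An RPN chart simply presents these, highest risk first.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
```

Note that a poorly detectable, moderate-severity mode (“Sensor drift”, RPN 192) can outrank a more severe but well-detected one – one reason RPN ranking alone, without diagnostic mapping, can mislead.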
Any data that can be submitted at any point to characterize the system’s design performance or support characteristics should be fully captured in a form that allows it to be fully extensible to any other diagnostic sustainment activities going forward – for this system, for any variant of this system, and to allow a major head start on any future system that can take advantage of any existing subsystem design contained therein.
Any failure modes at one level can become the failure effects at the next level of design hierarchy within a system. We can allow a high-end advanced diagnostic modeling tool to associate and integrate the failure effects and functions of any of the subsystems within the system architecture. Then, we can use this identical database in an Operational Support Simulation to observe how any of the failures will be detected (if the system design contains that capability) and how any maintenance actions will impact any future sustainment activities from an Operational and cost standpoint.
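The hierarchical relationship described above – a failure mode at one level surfacing as a failure effect at the next level – can be sketched as a simple upward walk. The hierarchy, mode and effect names here are hypothetical examples, not an eXpress model format.

```python
# Sketch: a failure mode at one level becomes a failure effect at the next
# level of the design hierarchy. All names below are hypothetical.
hierarchy = {                      # child assembly -> parent assembly
    "valve": "hydraulic subsystem",
    "hydraulic subsystem": "aircraft system",
}
local_effects = {                  # (assembly, failure mode) -> local effect
    ("valve", "stuck closed"): "loss of flow regulation",
    ("hydraulic subsystem", "loss of flow regulation"): "degraded actuation",
}

def propagate(assembly, mode):
    """Walk upward: each level's effect is the next level's failure mode."""
    chain = [(assembly, mode)]
    while assembly in hierarchy and (assembly, mode) in local_effects:
        mode = local_effects[(assembly, mode)]
        assembly = hierarchy[assembly]
        chain.append((assembly, mode))
    return chain
```

A modeling tool automates exactly this association, so that the same database serving the FMECA also drives the operational support simulation.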
Finally, if we desire to know the impact of using additional sensors or prognostics as part of our support plan, we can simply assess the effectiveness of any prognostic enrichment alternatives in advance, to discover just how well our fielded system design can exploit any prognostic enhancements under consideration. And of course, we can simulate the impact of many scenarios, including scheduled maintenance vs. condition-based maintenance, or any mixture, to best balance any relevant support requirements, priorities or costs.
The probability of a Failure Mode (FM), by itself, may influence how some folks select PM tasks, but that is a simplification that dilutes the relevance of system FM “priority ranking” criteria. Certainly, folks may speak to this, but it is a very “loosely” worked endeavor without a method to truly map the FM from any subsystem or level within the design to the fielded, integrated system level.
Once one is able to accomplish this task, using the exact diagnostic capability to capture the functional and FM characteristics within the design, then the knowledge can be used and reused to share and leverage value throughout the fielded system’s life cycle – in development and sustainment.
Many times the probability of a particular FM may be high without necessarily being considered “unacceptable”. The problem arises, for example, when a high-probability FM is contained within the same Fault Group (FG) as a critical FM on another function of the same or a different component, or when a more critical or non-detectable FM is grouped in the same FG with the component carrying the higher FM probability.
The inability of the fielded system to discern between FMs within an FG, or even on a single replaceable component, can also cause the system to take the most conservative corrective action based upon the system sensing the FM. Here’s how: a component may have a function with lower criticality and severity that is grouped with another component, or with another function on the same component, that has higher criticality and severity. If the FM is observed at the system level and the corrective actions are dissimilar (proceed in degraded mode vs. abort mission), then the most severe corrective action will be taken, regardless of whether the FM originated from the function with the lower severity and corrective action.
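This worst-case decision rule is simple to state in code: when the system cannot tell which member of an ambiguity group actually failed, it must assume the worst. The action names and severity ordering below are assumptions for the sketch.

```python
# Sketch of conservative corrective action in an ambiguous fault group:
# the system cannot tell which member's FM fired, so it takes the most
# severe action in the group. Action names/ordering are illustrative.
SEVERITY = {"continue": 0, "proceed degraded": 1, "abort mission": 2}

def corrective_action(fault_group_actions):
    """fault_group_actions: corrective action per FM in one ambiguity group."""
    return max(fault_group_actions, key=SEVERITY.__getitem__)
```

So a benign FM grouped with a critical one still triggers the critical response – which is precisely why reducing ambiguity group size pays off in mission aborts avoided.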
FMECAs lack the “diagnostic engineering” accountability that would enable FMs to be mapped to FGs and to the constituency of each FG. Once the constituency of the FG is identified, the system design can be more transparently mapped to the system-level support philosophy and sustainment requirements. At that point, if the design is captured elegantly enough to accommodate a myriad of design alternatives, then we should be able to select the components along with their corresponding attributes (including FR and FM, etc.) and evaluate those alternatives. Once we attain this insight and capability, new fielded system design characteristics can be derived directly from simulating the system design over a lifecycle that considers reconfiguration due to maintenance.
The simulation of the design will show just how much the importance of an FM may impact the way PM tasks are selected or performed.
We should demand to be able to evaluate the selection of PM tasks as impacted by knowing “True Mission Aborts vs. False Mission Aborts”, “True Diagnostic Alarms vs. False Diagnostic Alarms”, “True vs. False Removals”, etc., and have the associated costing simulations charted over the lifecycle as well. Regardless, the alternative designs must be easily represented, tracked and captured, not only to be simulated, but also to be “export”-ready for the diagnostics to be fully realized within any health management and/or troubleshooting paradigm that shares and leverages all diagnostic, reliability and maintenance engineering activities.
Extended FMECA (FMECA Plus): http://www.dsiintl.com/Products/fmeca.aspx
Simulation of design: http://www.dsiintl.com/Resources/Brochures/StageBrochurev4.pdf
Share and Leverage for test: http://m.youtube.com/#/user/DSIInternational