The definition of a False Alarm, unfortunately, depends upon your specific perspective. In the most general sense, it is the improper reporting of a failure to the operator of the equipment or system. Of the universe of possibilities that could compromise the proper reporting of a failure, DSI has conquered one specific cause, and the primary contributor to the experience of False Alarms: the "Diagnostic-Induced" False Alarm, or more simply the "Diagnostic False Alarm".
The textbook example of a "Diagnostic False Alarm" occurs when the diagnostic equipment misreports the functional status (test results) of whatever the sensor is designed to detect within its Test Coverage; in other words, a faulty sensor. This example is often used to describe a generic False Alarm, but it is most precisely an example of a "Diagnostic False Alarm".
"Diagnostic False Alarms" are very easily depicted from the eXpress model, using either the "Critical Failure Diagnosis Chart" from the eXpress FMECA Plus module or STAGE (DSI's Health Management and Operational Support Simulation application). Traditional military CDRLs (Contract Data Requirements Lists) have been unable to distinguish the diagnostic contribution from the overly generic and ambiguous design data metric used to describe the likelihood of False Alarms occurring on a fielded system. Because this FA metric is so widely misunderstood, many contractors are able to play along with their own independent set of rules for generating whatever serves as the metric algorithm for that program, or simply exclude the FA requirement by providing a variance with a justification or explanation. Regardless, this is typical of how this ambiguous FA metric has traditionally "not seriously worked" in the military industry.
As a solution going forward, the calculation of the diagnostic contribution to the False Alarms expected to occur over a chosen sustainment lifecycle is a metric that is fully supportable with STAGE, using the eXpress expert Diagnostic Design Knowledgebase. In this manner, "diagnostic-induced" False Alarms are observable in the simulation of the diagnostic design in STAGE. A False Alarm can only be generated during the operation of the system, and to that point, the diagnostic constraints per operational mode (BIT Test Coverage and Test Coverage Interference) are two important variables in the calculation of Diagnostic False Alarms. Other contributing criteria are the expected failure rates (reliability) of the components that are interrelated in the operational mode of the system, and the impact of any prior (predictive and/or corrective) maintenance performed on that system.
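To make the interplay of these variables concrete, the following is a minimal sketch, not the eXpress/STAGE algorithm, assuming a toy model in which each monitored component contributes diagnostic-induced False Alarms in proportion to its failure rate and the fraction of its BIT Test Coverage subject to interference in a given operational mode. All names, rates, and the model itself are hypothetical:

```python
def diagnostic_fa_rate(components, mode):
    """Expected diagnostic-induced false alarms per operating hour.

    components: list of dicts, each with a failure rate ('lambda',
        failures/hour) and per-mode diagnostic constraints:
        'coverage'     -- fraction of failure modes observed by BIT (0..1)
        'interference' -- fraction of that coverage subject to
                          cross-function interference (0..1)
    mode: name of the operational mode to evaluate.
    """
    rate = 0.0
    for c in components:
        constraints = c["modes"][mode]
        # In this toy model, only the interfered portion of the BIT
        # coverage can misreport a healthy function as failed.
        rate += c["lambda"] * constraints["coverage"] * constraints["interference"]
    return rate

# Two hypothetical components evaluated in an assumed "cruise" mode.
components = [
    {"lambda": 2e-4, "modes": {"cruise": {"coverage": 0.95, "interference": 0.10}}},
    {"lambda": 5e-5, "modes": {"cruise": {"coverage": 0.80, "interference": 0.25}}},
]
print(diagnostic_fa_rate(components, "cruise"))  # expected FAs per hour
```

A real assessment would also fold in the maintenance history mentioned above; this sketch only shows why coverage and interference must be evaluated per operational mode.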
But a requirement for a Diagnostic False Alarm Rate based upon the diagnostic integrity of the system, over a predetermined sustainment lifecycle, is a straightforward metric that should be expected as a deliverable for complex systems. The deliverable should not be a single discrete value; rather, the value should be represented in a chart or graph that minimally associates the probability of diagnostic-induced FAs with time over a reasonable sustainment lifecycle. Specific detailed data regarding the triggering "root cause(s)" of the failures should also be outlined.
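The kind of probability-versus-time deliverable described above can be sketched as follows, under the simplifying (and purely illustrative) assumption that diagnostic FAs arrive as a Poisson process with a constant rate; a real program would take the rate from simulation rather than the assumed constant used here:

```python
import math

FA_RATE = 2.9e-5             # assumed diagnostic FA rate, events/hour
LIFECYCLE_HOURS = 20 * 8760  # assumed 20-year sustainment lifecycle

def p_false_alarm_by(t_hours, rate=FA_RATE):
    """P(at least one diagnostic FA by time t) for a Poisson FA process."""
    return 1.0 - math.exp(-rate * t_hours)

# Tabulate the curve year by year -- the data behind the chart.
for year in range(0, 21, 5):
    t = year * 8760
    print(f"year {year:2d}: P(FA) = {p_false_alarm_by(t):.3f}")
```

The tabulated values are exactly the sort of time-associated representation the text calls for, as opposed to a single discrete FA number.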
Although a traditional deliverable of a Testability or Reliability assessment product may paint a rosy picture of a design's diagnostic, reliability, or maintainability acumen, the actual experience over the sustainment lifecycle could be "alarmingly" the opposite. Example cases have produced highly acceptable numbers from assessment products in such areas as FD/FI, FA, FSA, MTBF, MTBUM, etc., only to discover in simulation that component grouping, and the inability to isolate between critical failure modes at lower levels of the design, tell a much more adverse and daunting story of the fielded system.
When components are replaced during any maintenance activity, the grouping of components in the design may excel at solving one design objective, yet become a major cost driver in sustainment by causing replacement of non-failed components along with the "presumed-to-be-failed" components whenever the Diagnostic Integrity of the design (the "net" Test Coverage) is not well-defined. In eXpress, Test Coverage can be exhaustively validated early in design development, or at any time during the design development or sustainment lifecycle(s).
If a failure occurs and the diagnostic design does not provide the ability to isolate between multiple functions, one of which may be a benign failure while another carries a "severity" category of "Catastrophic", then the only corrective action at the system level is to "abort" the operation or mission. If the (operation, mission, or system) abort is later discovered to have been caused by the benign failure, then we have a blatant example of a "False System Abort", or FSA. These can occur with excessive uncertainty when the diagnostic integrity of the design, at the system level, is not fully known. This, again, is greatly avoidable from a diagnostic perspective.
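The arithmetic behind this scenario is simple enough to sketch. Assuming a toy model (not the STAGE simulation itself) where a benign and a Catastrophic failure mode cannot be isolated from each other, every detection forces an abort, and the fraction of those aborts that are False System Aborts follows from the relative failure rates:

```python
def false_abort_fraction(lambda_benign, lambda_catastrophic):
    """Fraction of forced aborts attributable to the benign failure mode.

    Both arguments are constant failure rates (failures/hour) of two modes
    sharing one ambiguity group, so every detection triggers an abort.
    """
    return lambda_benign / (lambda_benign + lambda_catastrophic)

# Assumed rates: the benign mode fails 9x as often as the catastrophic one,
# so 90% of the aborts driven by this ambiguity are False System Aborts.
print(false_abort_fraction(9e-5, 1e-5))  # -> 0.9
```

Even a modest rate imbalance makes the benign mode the dominant cause of aborts, which is why resolving such ambiguity groups at the system level pays off so heavily.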
After the system has been fielded and maintained, the opportunity for increased or decreased False Alarms or False System Aborts cannot be determined with traditional approaches to design development for complex systems, regardless of the expertise of the advanced engineers assigned to the development effort.
A true review of FA or FSA, from a diagnostic design capability perspective, can be performed during design development with STAGE. Many other diagnostic performance measures can be retrieved from the same simulation, providing more detailed insight into the heaviest contributors to costly or troublesome characteristics that are not otherwise apparent in traditional design assessment products.
Please scan other related “Solutions” in the suggested “Read More” links for continued explanation: