Shown to the left is a picture of Ralph demonstrating his new Diagnostic Modeling Approach, prior to his founding of DSI, based on his innovative application of Dependency Modeling to the world of Diagnostics.
As one might say, “the rest is history…”
A Short History of Diagnostic Modeling Part I: The Early Years
In the beginning, there was the dependency model: a way of representing the causal relationships between events and the various agents that enable those events to occur. Dependency modeling was initially developed by Ralph A. De Paul, Jr. in the 1950s as a method of developing more “responsible” diagnostics after several of De Paul’s friends were killed in the Korean War when their equipment malfunctioned in ways that had not been anticipated by diagnostic developers. Logic Modeling (as De Paul began calling the functional dependency modeling process in the 1960s) allowed all functions of a device or system under test to be mapped to the events, testable or not, that depend upon the proper operation of those functions.
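To make the idea concrete, a functional dependency model can be sketched as a mapping from each testable event to the set of functions whose proper operation it depends upon. The following is a minimal, hypothetical illustration (the function and test names are invented for this sketch and are not De Paul’s actual notation):

```python
# Minimal sketch of a functional dependency model.  Each testable event
# depends on the proper operation of a set of functions; the names here
# are purely illustrative.
dependencies = {
    "test_output_voltage": {"power_supply", "regulator"},
    "test_signal_out":     {"power_supply", "oscillator", "amplifier"},
    "test_display":        {"power_supply", "display_driver"},
}

def isolate(test_results):
    """Given {test_name: passed?}, return the functions still suspect.

    A passing test exonerates every function it depends upon; a failing
    test implicates its dependencies.  The remaining suspects are the
    implicated functions minus the exonerated ones.
    """
    exonerated, implicated = set(), set()
    for test, passed in test_results.items():
        deps = dependencies[test]
        (exonerated if passed else implicated).update(deps)
    return implicated - exonerated

# Example: the signal test fails while the voltage and display tests pass,
# so the fault must lie in the functions unique to the failing test.
suspects = isolate({
    "test_output_voltage": True,
    "test_signal_out": False,
    "test_display": True,
})
print(sorted(suspects))  # ['amplifier', 'oscillator']
```

Because the model is built from functions rather than enumerated failure modes, anything that disrupts a function is caught by the tests that depend on it, which is the source of the completeness claim discussed below.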
In 1974, De Paul’s dependency modeling concept was incorporated into MIL-M-24100B, a military manual that documented the construction of the maintenance dependency chart (the central task of the Logic Modeling process) as part of the development of Functionally Oriented Maintenance Manuals (FOMM). Also in the 1970s, De Paul founded DETEX Systems, Inc. (now DSI International) and offered his Logic Modeling process, newly hosted as the computer program LOGMOD, as a commercial product for the DoD diagnostics community. After a string of LOGMOD successes in the 1970s and 1980s, other companies began to offer their own dependency model-based approaches to diagnostic development and assessment, often tweaking the dependency modeling process to serve their own particular bias or niche. As more members of the diagnostic community jumped on the dependency modeling bandwagon, there was soon a general divergence in modeling approaches. One camp, whose proponents envisioned dependency modeling as the missing link between FMECA/Reliability Block Diagrams and run-time diagnostics, adapted De Paul’s original dependency modeling technique to map specific failure modes (rather than functions) to tests. Rather than model a system or device the way that it is supposed to operate, this approach advocated modeling the system or device as it is expected to fail. Although this “twist” on dependency modeling may have been more enticing to run-time diagnostic developers, it suffered from two major disadvantages:
1. Incompleteness: the effectiveness of the resulting diagnostics was restricted by the modeler’s ability (or inability) to enumerate all of the ways in which the unit under test was capable of failing, and
2. Limited usefulness: because failure-oriented dependency models require that the specific failure modes of a design or system be identified, models can’t be developed until implementation details become available (relatively late in the design process), thereby limiting their applicability as an engineering tool during product development.
Meanwhile, De Paul and others continued preaching (and demonstrating) the value of functional dependency modeling. Although, at the time, functional diagnostics were not always an “easy sell” to failure-oriented diagnostic developers, De Paul’s functional approach to diagnostic modeling nevertheless promised that rigorous analysis would result in 100% reliable diagnostics (a guarantee that could not be matched by modeling approaches that focused on failure modes), thereby remaining true to De Paul’s original vision of diagnostic “responsibility.” In the late 1970s, De Paul began offering LOGMOD not only as a tool for the development of fault isolation logic, but also as an aid for testability and maintainability design, a move that would greatly influence the direction of both his own company (DSI) and the diagnostic modeling community in general. Recognizing that the functional approach of his Logic Modeling process was uniquely suitable for providing feedback about a design’s diagnostic capability during early stages of the development cycle (when implementation specifics are not yet available), De Paul pioneered the field of model-based testability analysis, adding figures of merit to the LOGMOD analysis reports that would later become standard measures of design testability (in fact, in the early 1980s, De Paul worked closely with the writers of MIL-STD-2165, the first military standard on testability analysis).
A Short History of Diagnostic Modeling Part II: The Second Generation
In the mid-1980s, DSI was a member of the team that developed the Weapon System Testability Analyzer (WSTA) portion of the U.S. Navy’s Integrated Diagnostic Support System (IDSS). Although De Paul’s company was instrumental during the conceptual planning and pitching of WSTA to the Navy, DSI was allotted only a relatively small portion of the actual implementation of the tool. Because the resulting program was such a far cry from what it could have been (that is, from DSI’s original vision of the tool), DSI personnel to this day remember the WSTA years only with a certain amount of embarrassment. Sometimes, however, one must take a step backward before progressing forward. In this case, De Paul’s dissatisfaction with the results of the WSTA project led directly to the development of STAT, DSI’s second-generation diagnostic engineering tool.
STAT began its life as a more robust, full-featured implementation of the Logic Modeling process. Whereas LOGMOD had run on only a limited number of older platforms, STAT was designed to run on a wide variety of systems, running not only established operating systems such as DOS, UNIX, and the Macintosh OS, but also Microsoft’s new Windows software. The STAT user interface, which was more sophisticated than those in either the LOGMOD or WSTA programs, allowed traditional dependency models to be developed, reviewed, and modified with unprecedented ease.
Unlike LOGMOD (which had begun as a tool for developing “responsible” diagnostic strategies and then evolved into the first tool to offer model-based analysis of design testability) and WSTA (which was designed to be a testability analyzer and thus made no attempt to develop effective diagnostic strategies), STAT took both roles seriously from the start. As a diagnostic development tool, STAT offered several advances over previous dependency-model-based tools, including diagnostic optimization based upon topological half-split (with or without taking failure rates into consideration), strategy customization using a linear combination of up to thirteen normalized weighting factors, the ability to specify different test selection criteria for fault detection and fault isolation, manual manipulation of the fault detection order, and refinement techniques to improve multiple-fault isolation. As an analysis tool, STAT supplemented the standard testability FD/FI metrics with other valuable statistics, such as test point placement rankings, isolation effectiveness, false removal rate, and numerous assessments of the effects of diagnostic ambiguity upon maintenance cost/time.
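The half-split criterion mentioned above can be sketched in a few lines: at each step of building a diagnostic strategy, choose the test whose dependency coverage divides the remaining suspect set as evenly as possible, optionally weighting each suspect by its failure rate. The following is a simplified, hypothetical illustration of that selection rule only, not STAT’s actual algorithm:

```python
# Hypothetical sketch of half-split test selection: pick the test whose
# coverage splits the current suspect set most evenly, optionally
# weighting each suspect by its failure rate.  Names are illustrative.
def half_split_choice(suspects, tests, failure_rates=None):
    """suspects: set of suspect functions; tests: {name: covered_functions}."""
    weight = lambda fns: sum((failure_rates or {}).get(f, 1.0) for f in fns)
    total = weight(suspects)
    best, best_imbalance = None, float("inf")
    for name, covered in tests.items():
        inside = weight(suspects & covered)
        # A perfect half-split covers exactly half the suspect weight,
        # so a pass or fail outcome eliminates half the candidates.
        imbalance = abs(inside - (total - inside))
        if imbalance < best_imbalance:
            best, best_imbalance = name, imbalance
    return best

suspects = {"A", "B", "C", "D"}
tests = {"t1": {"A"}, "t2": {"A", "B"}, "t3": {"A", "B", "C"}}
print(half_split_choice(suspects, tests))  # t2: splits suspects 2 vs 2
```

With equal weights the even split minimizes the expected number of tests (the same intuition as binary search); supplying failure rates biases the split toward isolating the likeliest faults first.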
Although STAT was designed as a stand-alone tool, DSI quickly recognized the value of incorporating STAT into an overall diagnostic development process. Import tools were developed so that STAT models could be automatically derived from engineering (CAD/CAE) databases. Also, in the late 1980s, DSI developed the ability to export diagnostics from a STAT model to the TYX Corporation’s Personal Atlas Work Station (PAWS), a leading tool for developing the code that is run on Automatic Test Equipment (ATE), as well as all related documentation, including Test Requirements Documents (TRDs). The STAT-PAWS Integrated Functional Interface (SPIFI) was the first of many joint endeavors between the two companies, who often worked together to offer complete diagnostic engineering solutions.
Throughout the discipline of diagnostic engineering, STAT became synonymous with System Testability Analysis. Nevertheless, DSI was not the only player in the diagnostic modeling market. In addition to the Navy’s WSTA tool, there existed several other failure-based products that, due to the limitations mentioned earlier, were mostly useful for analyzing either existing or late-development designs. By the early 1990s, the debate between functional and failure mode modeling had changed somewhat, with advocates of each approach assuring potential users that their method was not only the correct one, but also encompassed the other approach. Functions were represented as absences of failure modes and failure modes modeled as negated functions, leading to the coining of strange oxymoronic terms such as “functional failure modes” and “failure mode functional modeling.” Some STAT users combined the two approaches, initially developing a STAT model based on functions and later converting the model into a failure mode model for diagnostic development.
The difference between the functional and failure mode diagnostic biases boiled down to a bracketing of responsibility. The tests required by failure-oriented diagnostics were relatively easy to implement, but the diagnostics themselves were prone to incompleteness; yet, because diagnostic and test developers were not held responsible for diagnosing failures outside of their well-defined fault universe, rarely were any diagnostic shortcomings uncovered during design time. Functional diagnostics, on the other hand, were complete and consistent, but placed a greater burden upon test developers to fully test functionality, rather than simply look for expected errors. Although a “harder sell” to test developers, the functional approach remained faithful to De Paul’s original vision of diagnostic “responsibility.”
In the near future:
Please visit this page again; it will be expanded to cover the eXpress genre (mid-1990s) and a tour through some of DSI’s most powerful and influential innovations in the diagnostics and prognostics worlds.