While data interoperability between heterogeneous reliability assessment tools and products remains a challenge within and between subsystem designs, the goals of Testability at the “integrated system” or fielded product level remain unchanged:
1. Increased Availability
2. Reduced Cost of Ownership
3. Mission or Operational Success (Reliability)
4. Improved Safety
These four goals may not be specifically identified in a Design or Sustainment Requirements document, but the end-user or customer will nonetheless expect them to be “traded” or “balanced”. The product manufacturer must consider to what extent it can develop a product that meets or exceeds these customer needs without increasing product development or sustainment costs.
The RAMS (Reliability, Availability, Maintainability, and Safety) activities are too often worked independently within their respective design disciplines. Each design discipline typically has its own specific design assessment product delivery requirements. This creates another challenge in effectively balancing the four goals of Testability during the Design Development process, a precursor for any effective PLM, since the balancing of RAMS activities will constrain sustainment effectiveness. This challenge can only be undertaken while influencing the design within the broader scope of System Testability and Integrated Systems Diagnostic Design.
Too often, the manufacturer (“Systems Integrator”, “Prime Contractor” or subcontractors) will attempt to minimize any development effort that isn’t scoped or referenced in its contract or Design Requirements document. In this manner, the design development parties inherently vie for this “trimming” of scope fringe costs, rather than investigating the viability of larger-scale enrichment and cost avoidance in their design development processes. Alternatively, the design development parties could be empowered to add funding to their SOW (Statement of Work) for new capabilities that can be created and delivered to their customer(s) as additional “turn-key” design assessment products or diagnostic sequencing outputs, whether to the publications department, the production lab, or the deployed environment(s).
As we move through the Design Development Lifecycle, which will ultimately establish the constraints of PLM during Sustainment, we have new opportunities to improve the design assessment products as they become truly “integrated” with the design assessment data from the contributing design disciplines. Integration often uncovers incomplete information and “out-of-date” data mixed in with current data. This discovery offers another benefit by improving “design data fitness”, which is an immediate development benefit and often prevents belated and costly discoveries later in the lifecycle.
But as this Design Development process is tightened up in the ISDD Process, you’ll begin to realize a new and powerful side-effect: the zero-cost, inadvertent improvement of sustainment effectiveness!
The attempt to universally minimize development costs may too often be a COST DRIVER!
The truth is that many design engineers haven’t investigated productive approaches to leveraging their work products, and their program managers typically haven’t afforded the time to truly discuss new and versatile approaches and tools that would increase the value of their efforts. We see this all the time, and we hear the complaints directly; so we actually practice “ACTIVE LISTENING” with engineers and managers alike.
As a World Class Solutions Provider, we need to facilitate the solutions to these challenges while reducing costs, in both Design Development and the Sustainment Lifecycle(s). This can only be accomplished if we can greatly enrich the opportunities to work together without increasing the workload or the costs. ISDD accomplishes this objective seamlessly, simply by finding a way to use the output from all of these design disciplines in a manner that bolsters the utility and accuracy of their work AND the work of other design disciplines.
The ultimate objective is to ensure that the investment in Design Development enriches sustainment effectiveness. As systems continue to increase in both size and complexity, the ability to drill down and find the root causes of failures continues to be a growing challenge. Many of the traditional methods of computing low-level reliability or maintainability statistics in complex designs, and of rolling this data up to the system level to address system requirements conformance, are becoming increasingly costly and challenging.
While contributing lower-level designs may be developed to satisfy their own specific reliability and maintainability requirements, the complex integrated system should also be able to observe and manage failures expeditiously, effectively, affordably and safely. This discussion explores areas within traditional design development processes that systemically contribute to the growing sustainment challenges resulting from pursuing independent, and often competing, interdisciplinary assessment objectives.
The extensively corroborative nature of Integrated Systems Diagnostics Design, or “ISDD”, embraces the interdisciplinary “trade space” by uncovering untapped ROI from balancing and reusing design assessment products and data artifacts more inclusively AND seamlessly within the implemented maintenance solution.
After representing each design discipline within its own circle of this Venn diagram, as is traditional, we will describe the general objectives of each discipline and how its independent assessment products are characterized to serve the sustainment goals of the fielded system.
Let’s begin by examining the circle that represents the Reliability design discipline. For large, complex Integrated Systems, reliability engineering is an essential analysis process that must be worked concurrently with the other design disciplines and should be started early in the product development phase. It can be performed extensively and consistently, or it can be performed to reflect the individual talents and preferences of the individual engineer.
For complex systems, it is performed to meet system requirements, which typically involves many tools and techniques that often vary from organization to organization. While interoperability between heterogeneous reliability assessment tools and products remains a challenge within and between subsystem designs, the goals at the “integrated system” or fielded product level (Availability, Cost of Ownership, Mission/Operational Success and Safety) remain unchanged.
Fundamentally, reliability engineering contributes by performing analyses that best predict the expected reliable service life of each component used in the system. The discipline must also account for how and when any system component can fail, how critical each failure is, and what the resulting impact of that failure is throughout the integrated system. While this is an over-simplification, it is the basis for computing many reliability products such as component or design failure rates, failure modes, failure effects, mean time between failures, and criticality or severity of failure, as well as a number of higher-level Reliability-based assessment products including the FMECA, RPN and the Fault Tree Analysis (FTA).
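To make the RPN product concrete, the following is a minimal sketch of the conventional Risk Priority Number calculation used in FMECA-style analyses, where RPN is the product of severity, occurrence and detection ratings (each commonly scored 1–10). The failure modes and ratings below are purely illustrative, not drawn from any real analysis.

```python
# Hypothetical FMECA rows: each failure mode carries 1-10 ratings for
# severity, occurrence, and detectability (10 = hardest to detect).
failure_modes = [
    {"mode": "pump seal leak",      "severity": 7, "occurrence": 4, "detection": 3},
    {"mode": "sensor drift",        "severity": 4, "occurrence": 6, "detection": 8},
    {"mode": "connector corrosion", "severity": 5, "occurrence": 3, "detection": 5},
]

# RPN = Severity x Occurrence x Detection (the common convention).
for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Rank failure modes so the highest-risk items are addressed first.
ranked = sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True)
for fm in ranked:
    print(f'{fm["mode"]}: RPN = {fm["rpn"]}')
```

Note how a hard-to-detect but moderate-severity mode (“sensor drift”) can outrank a more severe but easily detected one, which is exactly the kind of cross-discipline trade the surrounding text describes.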
Maintenance Engineering contributes information on how the design is to be maintained when any failure occurs, or before a failure is expected to occur, to ensure that the product or system is able to perform or function properly when it is needed for operation.
Additionally, Maintenance Engineering must provide knowledge of the expected duration of a corrective maintenance action or repair activity, which may include the time expected to isolate, remove, procure and replace the failed item. While this is an oversimplification, it is the basis for computing such products as mean time to isolate, mean time to repair, mean logistics delay time and Operational Availability, to name a few of the more recognizable metrics.
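As a sketch of how these maintainability metrics combine, one common formulation of operational availability is Ao = MTBF / (MTBF + MTTR + MLDT), where the downtime per failure is repair time plus logistics delay. The numbers below are illustrative only.

```python
def operational_availability(mtbf_hours: float,
                             mttr_hours: float,
                             mldt_hours: float) -> float:
    """Fraction of time the system is ready to operate, under one
    common formulation: Ao = MTBF / (MTBF + MTTR + MLDT)."""
    downtime_per_failure = mttr_hours + mldt_hours
    return mtbf_hours / (mtbf_hours + downtime_per_failure)

# Illustrative values: 500 h between failures, 4 h to repair,
# 21 h of logistics delay waiting on spares.
ao = operational_availability(mtbf_hours=500.0, mttr_hours=4.0, mldt_hours=21.0)
print(f"Ao = {ao:.3f}")
```

The sketch makes the later cost-of-ownership argument tangible: shaving logistics delay (MLDT) raises Ao just as surely as improving inherent reliability (MTBF).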
The discipline may also assess tools and methods to minimize similar future failures by tracking, reporting and learning from corrective actions performed to maintain fielded systems (FRACAS), by making corrections during run-time operations that mitigate failures as they occur (ISHM), or by acting on imminent failure occurrences (PHM) based upon an observable failing condition (CBM).
Diagnostics Engineering is a process that is best worked at the very earliest stages of design development in order to influence the design for sustainability. This is a tremendously valuable opportunity that has a significant impact on the effectiveness of any selected sustainment approach, including Design For Testability (DFT), On-Board Health Management, any Guided Troubleshooting paradigm(s), or an evolving maintenance philosophy.
When that opportunity isn’t available, diagnostic engineering can still greatly enrich the ongoing diagnostic capability of complex legacy systems by first establishing the design’s diagnostic capability (FD/FI) baseline. This immediately makes the design much more accommodating of future design modifications requiring integration with new and evolving technological advancements in testing or sustainment paradigms. Eventually, the captured design will allow the diagnostic assessment data to be reused on related future designs or repurposed for a wide variety of future cost-leveraging opportunities.
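One way to picture an FD baseline is as a failure-rate-weighted coverage figure: the share of the design’s total failure rate that is observable by at least one test. The sketch below assumes hypothetical failure modes and rates (failures per million hours); it is not any particular tool’s algorithm.

```python
# Hypothetical failure modes with rates (failures per 10^6 hours)
# and whether any existing test can observe each mode.
failure_modes = [
    {"mode": "regulator short",  "rate": 12.0, "detected": True},
    {"mode": "oscillator drift", "rate": 3.0,  "detected": False},
    {"mode": "relay stuck open", "rate": 8.0,  "detected": True},
]

# Weight detection by failure rate: frequent failures matter more
# to the fielded FD figure than rare ones.
total_rate = sum(fm["rate"] for fm in failure_modes)
detected_rate = sum(fm["rate"] for fm in failure_modes if fm["detected"])
fd_percent = 100.0 * detected_rate / total_rate

print(f"FD = {fd_percent:.1f}%")
```

A baseline computed this way gives the legacy program a number to defend and improve as tests or sensors are added in later modifications.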
Diagnostic Engineering is a discipline that provides proactive sustainment options for future designs paying dividends throughout the lifecycle of the fielded product or system. It has tremendous impact on the utility of the data products produced from the investment into the Reliability and Maintenance Engineering efforts.
This discipline determines whether failures can be detected or isolated, and how to optimally devise an approach for observing or discovering those failures, to ensure the correct failure is indicted and remedied in a timely fashion. It offers unmatched insight into the validation of BIT effectiveness and the integrity of any Health Management or PHM design at the Integrated Systems level.
While legacy designs can be greatly enriched at any time by capturing the design knowledge in a form that can be leveraged for continued system lifecycle diagnostic benefit, captured design knowledge can also provide a jump start in concurrent variant designs and also new designs.
Ideally, the ISDD activity needs to be performed iteratively within the development process so it can be used to assess and influence the design from a diagnostic perspective ensuring that the goals it shares with Reliability and Maintainability Engineering are balanced and maximized continuously throughout the life-cycle of the fielded system.
The early involvement of diagnostic engineering will ensure the design is influenced for sustainment approaches, because the assessments from all of these design disciplines are no longer “marooned” to solely address requirements. Instead, these assessment products form a “balanced” knowledgebase “asset” that can be carried forward directly into an active role in the implementation of the sustainment solution.
The timely performance of Diagnostics Engineering will provide improved Fault Detection, reduced False Removals / RTOK / CND, reduced False Alarms, reduced System/Mission Aborts, lower Maintenance Costs, effective isolation to the optimum repair level, improved Operational Availability and the ability to uniquely isolate Critical Failures; that is, to proactively discover where test points or sensors are unable to discern between the root causes of any low-level component failure due to inherent Integrated System design or Health Management constraints.
When ISDD is performed within a corroborative design environment, the systems, reliability, maintenance and diagnostics engineering activities engage in data sharing and leveraging in a synchronized manner, using identical, shared data artifacts that are repeatedly cross-checked for integrated design “fitness” in terms of consistency, completeness and omissions, and this is performed much earlier within the design development process.
Metrics owned by each independent discipline have traditionally served to score or assess how each discipline meets its own disciplinary requirements. Viewed in this traditional manner, it is difficult to see how these independently-owned requirements often work at cross-purposes, competing against the metrics and requirements serviced by the companion disciplines.
For example, if the objective is to reduce false alarms, we may increase mission success, but we may also increase false removals. If the design was not optimized for fault detection, this may in turn increase the cost of ownership by requiring excessive spares for non-failed components; that is, components with substantial remaining useful life. Since diagnostic engineering provides the knowledge of fault group constituency, we would know when we have reached the constraints of diagnostic capability and must remove an entire fault group, or set of suspected failed items, in order to remediate. However, if we tried to minimize the investment in diagnostic engineering, our replacement may be incorrect and fail to fix the failure, or even “mask” non-detectable failures and thereby further reduce system availability. Many of these valuable assessments are not possible without the inclusion of high-end diagnostic engineering.
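The notion of fault group constituency can be sketched simply: failure modes that produce identical test signatures cannot be told apart by the available tests, so they fall into the same ambiguity group and must be removed together. The coverage matrix, test names and failure modes below are hypothetical.

```python
from collections import defaultdict

# Hypothetical coverage matrix: which tests observe which failure modes.
coverage = {
    "T1": {"amp_fail", "filter_fail"},
    "T2": {"amp_fail", "filter_fail"},
    "T3": {"psu_fail"},
}

modes = {"amp_fail", "filter_fail", "psu_fail", "wiring_fail"}

# Modes with the same signature (set of tests that see them) are
# indistinguishable: they form one ambiguity (fault) group.
groups = defaultdict(set)
for mode in modes:
    signature = frozenset(t for t, covered in coverage.items() if mode in covered)
    groups[signature].add(mode)

for signature, members in groups.items():
    label = "undetected" if not signature else f"tests {sorted(signature)}"
    print(f"{label}: ambiguity group {sorted(members)}")
```

Here “amp_fail” and “filter_fail” share a signature and would be replaced as a pair, while “wiring_fail” has an empty signature, i.e. it is the kind of non-detectable failure the text warns can be masked.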
Another strength of ISDD is that it can fully accommodate Reliability and Safety Analyses that have been independently developed by partnering organizations or third-party subsystem design teams. This alone removes a layer of risk to the thoroughness of the Reliability and Safety Analyses that is systemic with any traditional approach.
ISDD takes advantage of working the way you work, or the way it seems to work out. Since the ISDD paradigm is able to include and cross-validate diagnostic design data retrieved from independently-developed FMECA and FTA analyses, many inconsistencies are trapped and discovered at a less costly point in design development.
What typically occurs is that partnering design teams or organizations use an assortment of home-grown “data-driven” techniques to assist in computing subsystem and end-product safety (or other system-based) predictions. Whether those techniques are adequate, given the available talent, resources, documentation, project requirements, design processes and constraints, may often be an undeterminable question.
When the data from these contributing designs is integrated, most of the inconsistencies or errors are not obvious and are easily overlooked, even during design validation. This is because no approach other than ISDD relies on the FTA and FMECA assessment products to integrate with any or all designs within the full integrated system.
Once the design is fielded and maintained, the failure characteristics of the design are forever changed. This is why the sustainment and its related maintenance philosophy need to be considered in terms of the constraints of the diagnostic integrity of the Integrated System’s design (the fielded product). Traditional design assessments don’t translate effectively to the real-world dynamics of the sustainment paradigm and must therefore enter a design assessment rework cycle once again, for each design iteration or update. This is a costly endeavor that is largely avoidable, provided the diagnostic interdependencies are captured and preserved in eXpress, where any design updates are easily considered, assessed and optimized as desired prior to being seamlessly transferred to the evolving sustainment paradigm.
As is evident with the use of ISDD, any change to the design will have an impact on the entire sustainment capability in terms of its prediction calculations or simulations (safety, availability, operational success, cost of ownership). Fortunately, ISDD uniquely enables the convenience of being totally prepared for such inevitable circumstances. ISDD permits design or technology updates to be merged, essentially seamlessly, into the exact same (shared) Integrated Systems Design “knowledgebase”. Since the sustainment and the related maintenance philosophy will ultimately depend upon knowledge of the diagnostic constraints of the fielded system, ISDD is able to transform the new design’s diagnostic uncertainty, and any costly design assessment rework cycles, into trivial tasks while retaining and extending the captured expert diagnostic design knowledgebase for any future challenges.
With ISDD, Design Assessments consider the impact of maintenance upon: