
A Design Environment for Legacy Test Strategies

by DSI Staff | Published 07/27/2004 | Concepts

The Design Environment

Legacy strategies are often updated as a result of technology improvements (including design changes, automatic test equipment (ATE) changes, new built-in test (BIT) capabilities, etc.). They are also sometimes updated because they failed to meet expectations or to handle conditions that only became known once fielded. In such cases, one quickly discovers how effectively the original design environment captured information, and whether the original data can still be re-used.

The most common discovery, unfortunately, is that the engineering data is not as useful as it once was, and that a fair amount of re-engineering is necessary. Significant contributors to this problem include:

  • Too little information was represented
  • Data representation is too abstract for analysis
  • Information or assumptions were poorly documented
  • Data is no longer accessible to current tools (data obsolescence)

When the captured information reduces large amounts of knowledge to a few key conclusions, documentation becomes essential for any future attempt to understand the logic behind those conclusions. Often, this reduction is not only poorly documented, but so severe that it obscures how the conclusions were derived. This is commonly known as “brain-drain”: the knowledge that was never documented is lost along with the engineers who originally developed it.

The other problem, which can be difficult to avoid and is why standards like XML were developed, is poor interoperability. Often, the tools that continue to hold the information do not support the import / export capabilities needed by the tools currently in use. While XML promises to solve much of this problem, most users have a hard time determining to what extent an XML representation is available or suitable.
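As a concrete illustration of the interoperability concern, the following Python sketch exports a test-to-failure-mode mapping as XML so another tool could import it. The element names, attribute names, and data are illustrative assumptions, not any published schema.

    # Sketch: export test coverage data as XML for exchange between tools.
    # Element/attribute names below are hypothetical, not a published schema.
    import xml.etree.ElementTree as ET

    # Hypothetical coverage data: each test and the failure modes it detects.
    coverage = {
        "T1_power_check": ["PSU_no_output", "fuse_open"],
        "T2_bus_loopback": ["bus_driver_stuck", "connector_open"],
    }

    root = ET.Element("testStrategy", name="legacy_unit_A")
    for test_id, failures in coverage.items():
        test_el = ET.SubElement(root, "test", id=test_id)
        for failure in failures:
            ET.SubElement(test_el, "detects", failureMode=failure)

    ET.indent(root)  # pretty-print; requires Python 3.9+
    print(ET.tostring(root, encoding="unicode"))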

Overcoming these problems requires attention to several aspects of the process. First, fully assessing each tool’s interoperability, in the context of the problem at hand, helps avoid data obsolescence. Second, ensuring good documentation practices helps avoid the “brain-drain” loss of knowledge. Lastly, creating multiple representations of the data helps overcome the limitations of any single representation. For example, supplementing a diagnostic strategy with a FMECA (Failure Mode, Effects, and Criticality Analysis) report provides another view of how tests detect particular failures. Of course, a FMECA alone is not a complete definition of testing capability.
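To make the multiple-representations point concrete, the sketch below inverts the same kind of hypothetical test-to-failure mapping into a FMECA-style view of which tests detect each failure mode. The data and names are illustrative assumptions.

    # Sketch: derive a second, FMECA-style view (failure -> detecting tests)
    # from the same hypothetical test -> failure-mode coverage data.
    from collections import defaultdict

    coverage = {
        "T1_power_check": ["PSU_no_output", "fuse_open"],
        "T2_bus_loopback": ["bus_driver_stuck", "connector_open"],
        "T3_self_test": ["PSU_no_output", "bus_driver_stuck"],
    }

    detection = defaultdict(list)  # failure mode -> tests that detect it
    for test_id, failures in coverage.items():
        for failure in failures:
            detection[failure].append(test_id)

    for failure, tests in sorted(detection.items()):
        print(f"{failure:20s} detected by: {', '.join(tests)}")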

Test Strategy Validation

With a design environment in place, the next issue becomes deploying a solution. The fielded strategy is, after all, the final product. The quality of the strategy is paramount, since logic errors or other shortcomings can have an enormous impact on maintenance costs. As such, quality control over the strategy is a key concern, typically addressed through fault injection. Test strategy fault injection can draw on any of the following sources of information:

  • Conditions of Test Success and Failure
  • Actual Hardware / Testing Resources
  • Math Models

Because neither the actual hardware nor a math model is available for validation during early phases, knowing the conditions that produce a pass or fail response for each test is the only way to validate tests at a point in the process where design decisions can still be influenced. Test conclusion information can be represented or obtained in the following ways:

  • Manual
  • Spreadsheets
  • Diagnostic Models

Manual response to tests is clearly unacceptable: it is not repeatable, and the information is not captured for future engineers. Spreadsheets are at least electronic, but in a poor form; they invariably become unwieldy and difficult to manipulate for non-trivial designs, partly because of their non-hierarchical structure. This leaves Diagnostic Models as the richest source of information from which to drive test strategy validation.
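The sketch below shows, under stated assumptions, what model-driven fault injection can look like: a toy dependency model maps each failure mode to the tests it causes to fail, and a simple strategy (nested if/else standing in for a fault tree) is exercised against every injected failure to confirm it reaches the right conclusion. The model, tests, and strategy are all hypothetical.

    # Sketch: fault injection driven by a toy diagnostic (dependency) model.
    # Each failure mode maps to the set of tests it causes to fail.
    MODEL = {
        "PSU_no_output":    {"T1_power_check", "T3_self_test"},
        "fuse_open":        {"T1_power_check"},
        "bus_driver_stuck": {"T2_bus_loopback", "T3_self_test"},
    }

    def run_test(test_id, injected_failure):
        """A test passes (True) unless the injected failure affects it."""
        return test_id not in MODEL[injected_failure]

    def strategy(injected_failure):
        """Toy test strategy: returns the failure mode it concludes is present."""
        if not run_test("T1_power_check", injected_failure):
            if not run_test("T3_self_test", injected_failure):
                return "PSU_no_output"
            return "fuse_open"
        if not run_test("T2_bus_loopback", injected_failure):
            return "bus_driver_stuck"
        return "no_fault_found"

    # Inject every modeled failure and validate the strategy's conclusion.
    for failure in MODEL:
        verdict = strategy(failure)
        status = "OK" if verdict == failure else "LOGIC ERROR"
        print(f"{status:11s} injected={failure:18s} concluded={verdict}")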

Beyond these immediate benefits, coupling a diagnostic model with test strategy validation pays off over the long term. As new or legacy information is brought into such an environment, much of the validation happens as part of the modeling process itself rather than remaining hidden until later test strategy validation. The benefits of the modeling environment include the following (a sketch of this kind of analysis follows the list):

  • Identification of Hidden Failure Sources
  • Analysis of Existing Test Coverage
  • Identification of Existing Logic Errors
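A minimal sketch of two of these analyses, assuming the same kind of toy dependency model as above: failure modes that no test detects are hidden failure sources, and failure modes with identical test signatures form ambiguity groups that expose coverage gaps and potential logic errors.

    # Sketch: coverage analysis over a toy dependency model.
    from collections import defaultdict

    MODEL = {
        "PSU_no_output":    {"T1_power_check", "T3_self_test"},
        "fuse_open":        {"T1_power_check"},
        "bus_driver_stuck": {"T2_bus_loopback", "T3_self_test"},
        "thermal_sensor":   set(),                 # detected by no test
        "connector_open":   {"T1_power_check"},    # same signature as fuse_open
    }

    # Hidden failure sources: failure modes with no detecting test.
    undetected = [f for f, tests in MODEL.items() if not tests]
    print("Hidden failure sources:", undetected)

    # Ambiguity groups: failure modes no pattern of test outcomes can separate.
    groups = defaultdict(list)
    for failure, tests in MODEL.items():
        groups[frozenset(tests)].append(failure)

    for signature, failures in groups.items():
        if signature and len(failures) > 1:
            print("Ambiguity group:", failures, "signature:", sorted(signature))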

Because the model allows a faster turn-around, the test strategy can be validated more often and at a higher resolution. And because the design provides early validation of the test strategy, it helps remove ambiguity between the hardware and the test strategy. The following benefits can be realized through use of the design environment:

  • Maintain a Single Source of Strategy Logic
  • Document Assumptions for Future Engineers
  • Reduce the Cost of Future Design Changes
  • Re-verify Baseline Strategy Performance

Test Engineering Environment

Finally, we come to the Test Engineering Environment. Often overlooked is the degree to which test code and test strategy become one entity. The test strategy, established logically through the design environment, must ultimately be carried out by real test code that does the real work (one way of binding the two is sketched after the list below). The role of a test engineering tool is to provide the following benefits:

  • Increased Test Development Productivity
  • Reduced Lifecycle Costs
  • Support for Multiple Maintenance Levels
  • Avoidance of Obsolescence
  • Re-use of Test Code
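One way such a tool can tie strategy logic to executable test code is sketched below: a registry binds the strategy's logical test IDs to executable procedures, so the same code can be re-used across maintenance levels. All names, IDs, and measurements here are illustrative assumptions.

    # Sketch: bind logical test IDs from the strategy to executable test code.
    from typing import Callable, Dict

    TEST_REGISTRY: Dict[str, Callable[[], bool]] = {}

    def test_procedure(test_id: str):
        """Decorator that registers an executable procedure under a strategy ID."""
        def register(func: Callable[[], bool]) -> Callable[[], bool]:
            TEST_REGISTRY[test_id] = func
            return func
        return register

    @test_procedure("T1_power_check")
    def measure_supply_voltage() -> bool:
        # Placeholder measurement; real code would drive the test equipment.
        voltage = 4.9
        return 4.75 <= voltage <= 5.25

    @test_procedure("T2_bus_loopback")
    def bus_loopback() -> bool:
        # Placeholder loopback check.
        return True

    # The strategy engine runs code by logical ID, keeping strategy logic and
    # test code traceable to one another.
    for test_id, proc in TEST_REGISTRY.items():
        print(test_id, "->", "PASS" if proc() else "FAIL")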

Again, the test engineering environment helps overcome problems such as equipment and code obsolescence, changes to the hardware environment, and so on. Although the test environment can accommodate testing changes, it does not address design changes, nor does it maintain any traceability to the design. Used alone, the test engineering environment can actually contribute to loss of information because of this lack of traceability.

By selecting design and test development environments in response to deployment requirements, many problems can be avoided that would otherwise first be realized only when legacy test strategies are updated. This is a classic data re-use problem, and it persists because its underlying causes go unexamined. Rather than repeating history, solving the underlying engineering failure can pay off long into the future.
