Advances in semiconductor technology over the past decade have had a major impact on chip design strategies. Although the increasing density of transistors on a chip has reduced the cost of a single gate, the cost of testing has become a significant part of the total cost of a chip. To keep testing costs within reasonable limits, Design for Test (DFT) methods have been proposed that convert a sequential circuit into a combinational one for testing purposes. In addition, to support the testing of VLSI chips, CAD tools have been developed for Automatic Test Pattern Generation (ATPG) and fault simulation. However, even for combinational circuits, the computational resources required both for automatic test generation and for fault simulation become enormous for circuits with tens of thousands of gates.
The problem of fault simulation in combinational circuits has been addressed in a number of papers in recent years, and efficient simulation methods based on various fault models have been developed. These works consider two levels of parallelization: at the level of test patterns and at the level of faults.
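As a minimal illustration of pattern-level parallelism in its classic bit-parallel form, consider the sketch below: each bit position of a machine word holds the value of the same signal under a different test pattern, so one bitwise operation evaluates a gate under 64 patterns at once. The two-gate circuit y = (a AND b) OR c, the internal net g, and the injected stuck-at-0 fault are hypothetical examples, not taken from the cited works.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Each bit of a 64-bit word carries one test pattern's value of
     * the corresponding input signal. */
    uint64_t a = 0xF0F0F0F0F0F0F0F0ULL;  /* input a, 64 packed patterns */
    uint64_t b = 0xCCCCCCCCCCCCCCCCULL;  /* input b, 64 packed patterns */
    uint64_t c = 0xAAAAAAAAAAAAAAAAULL;  /* input c, 64 packed patterns */

    /* Hypothetical circuit y = (a AND b) OR c; fault: the internal
     * net g = a AND b is stuck at 0. */
    uint64_t y_good   = (a & b) | c;     /* fault-free responses      */
    uint64_t y_faulty = 0ULL | c;        /* responses with g stuck-at-0 */

    /* A set bit in the difference word marks a pattern that detects
     * the fault at the primary output. */
    uint64_t diff = y_good ^ y_faulty;
    int detected = 0;
    for (int i = 0; i < 64; i++)
        detected += (int)((diff >> i) & 1ULL);
    printf("patterns detecting g stuck-at-0: %d of 64\n", detected);
    return 0;
}
```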
The modern development of computing hardware sets a new paradigm for existing parallel algorithms. In particular, graphics processing units (GPUs), built on the SIMD (Single Instruction, Multiple Data) architecture, can run thousands of threads simultaneously. This opens new prospects for more efficient implementation of procedures in the field of verification and fault simulation. However, simply porting existing algorithms to the new hardware platform is impossible; it requires the development of substantially new approaches and new software tools. The use of the proposed methods and approaches makes it possible to accelerate individual procedures by up to several dozen times compared with traditional single-threaded implementations.
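On a GPU, the two levels of parallelism can be combined: thread blocks take over fault-level parallelism, while each thread retains bit-level pattern parallelism within a machine word. The following CUDA sketch is a hypothetical illustration under the same hard-coded example circuit y = (a AND b) OR c; the kernel name faultSim and all parameters are assumptions for the example, and a real simulator would instead traverse a levelized netlist stored on the device.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Two-level parallelism sketch: one thread block per stuck-at fault on
// the internal net g = a & b (fault-level), and 32 packed test patterns
// per 32-bit word inside each thread (pattern-level).
__global__ void faultSim(const unsigned *a, const unsigned *b,
                         const unsigned *c, unsigned nWords,
                         unsigned *detectCount)
{
    unsigned stuckAt  = blockIdx.x;                 // 0 -> g/0, 1 -> g/1
    unsigned faultVal = (stuckAt == 0) ? 0u : ~0u;  // injected value of g
    unsigned local = 0;

    // Grid-stride loop over the packed pattern words.
    for (unsigned w = threadIdx.x; w < nWords; w += blockDim.x) {
        unsigned yGood = (a[w] & b[w]) | c[w];      // fault-free response
        unsigned yBad  = faultVal | c[w];           // response with fault
        local += __popc(yGood ^ yBad);              // detecting patterns
    }
    atomicAdd(&detectCount[stuckAt], local);        // accumulate per fault
}

int main()
{
    const unsigned nWords = 1024;                   // 32 * 1024 patterns
    unsigned *a, *b, *c, *cnt;
    cudaMallocManaged(&a, nWords * sizeof(unsigned));
    cudaMallocManaged(&b, nWords * sizeof(unsigned));
    cudaMallocManaged(&c, nWords * sizeof(unsigned));
    cudaMallocManaged(&cnt, 2 * sizeof(unsigned));
    for (unsigned i = 0; i < nWords; i++) {         // arbitrary patterns
        a[i] = 0xF0F0F0F0u; b[i] = 0xCCCCCCCCu; c[i] = 0xAAAAAAAAu;
    }
    cnt[0] = cnt[1] = 0;

    faultSim<<<2, 256>>>(a, b, c, nWords, cnt);     // one block per fault
    cudaDeviceSynchronize();
    printf("g/0 detected by %u patterns, g/1 by %u\n", cnt[0], cnt[1]);
    return 0;
}
```

Assigning one block per fault keeps warp divergence low, since every thread in a block injects the same fault value and executes an identical instruction stream, which matches the SIMD execution model described above.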