
Test and Debug in Deep-Submicron Technologies

Ramyanshu Datta, Antony Sebastine, Ravi Gupta, Whitney J. Townsend, and Jacob A. Abraham
Computer Engineering Research Center, The University of Texas at Austin
{rdatta, antonys, ravi, whitney, jaa}@cerc.utexas.edu
IBM Technical Contact: Dr. Robert Montoye

Abstract
With the scaling of feature sizes into the Deep-Submicron (DSM) regime, the level of integration and performance achievable in VLSI chips increases. Much work has been directed at design issues arising from scaling, such as leakage mitigation, but efforts to enhance the testability of these designs have not kept pace. Testability issues cannot be overlooked, because defect sizes do not scale in proportion to feature sizes. Previously effective fault models such as stuck-at now appear archaic and cannot model faults accurately, necessitating more detailed models that explicitly capture the behavior of faulty DSM chips. There is also a significant increase in delay faults in the logical paths of integrated circuits. Delay faults cause path delays in a chip to be larger than expected, so that the output of the chip deviates from the expected behavior even though the chip is functionally correct. Efficient techniques are needed to detect such defects in first silicon and eliminate them before the final versions of the chips are shipped. This requires efficient debug techniques for performance characterization of large, complex integrated circuits in deep-submicron and nanometer technologies. In this paper we present an insight into the test challenges arising from deep-submicron technologies and effective approaches to tackle them.

1. Introduction
Tremendous progress in the scaling of process technologies down to the Deep-Submicron (DSM) domain has paved the way for a significant increase in the level of integration and performance of modern VLSI chips. The integration of a complex System on a Chip (SoC) is now a reality. Achieving acceptable reliability levels for modern VLSI chips is a critical issue, making testability a significant factor that could limit scaling trends if not addressed adequately. This

is especially significant since defect sizes have not scaled in a manner commensurate with shrinking geometries [1]. Process variations can have a significant influence on a chip's failure to meet its specified performance, and can be broadly classified into two types [2]: variations within the same fabrication plant and variations between fabrication plants. The former includes line-to-line, wafer-to-wafer, inter-die (each device on one die shows uniform variations) and intra-die (variations not uniform over the entire die) variations; these can produce wire and gate delay variations within a chip of about 30% to 40% [2]. The latter pertains to variations caused by factors such as the use of different cell libraries, and the resulting synthesized circuitry, in different foundries; this may cause variations of 20%-25% between chips fabricated in different foundries [2]. However, fabrication of the same product in different plants is not common practice, and is certainly never done for high-performance chips, so it is not a strong contributor to on-chip process variations. Process variations affect the yield, performance and reliability of processors, and need to be considered in delay analysis for 0.13 micron and lower technologies [3]. Device physics factors [24], [25] such as random dopant placement in the channel, spatially correlated gate length variation and interconnect thickness variation, together with processing factors such as die-location dependence, optical proximity effects, and microloading in etching and deposition, lead to uncorrelated variations of process parameters within the same die. There has been a lot of work on design-related issues in DSM, e.g. circuit families such as Limited Switch Dynamic Logic (LSDL) optimized for performance, power, area and leakage [4], but the same is not true for testing. There is an increased need to focus on testability issues related to DSM chips.
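As a rough illustration of why uncorrelated intra-die variations matter, the sketch below samples path delays under an independent-per-gate Gaussian model. All numbers (100 ps nominal gate delay, 20-gate paths, 10% per-gate sigma) are purely illustrative assumptions, not values from the text:

```python
import random
import statistics

def sample_path_delay(nominal, n_gates, sigma_frac, rng):
    """Path delay = sum of per-gate delays, each perturbed independently
    (a crude intra-die variation model; all numbers are illustrative)."""
    return sum(rng.gauss(nominal, sigma_frac * nominal) for _ in range(n_gates))

rng = random.Random(42)
samples = [sample_path_delay(100.0, 20, 0.10, rng) for _ in range(10_000)]  # ps
mean = statistics.fmean(samples)
rel_sigma = statistics.pstdev(samples) / mean
# per-gate sigma is 10%, but uncorrelated variations partially average out
# along the path, so the path-level relative sigma is much smaller (~2%)
```

Note that systematic (inter-die or spatially correlated) variation does not average out this way, which is one reason the distinction between the two variation types matters for delay analysis.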
With increasing system complexity, current test and verification techniques need to be improved. Fault models such as stuck-at, which earlier modelled the behavior of defective chips correctly, are no longer effective. Existing techniques like functional simulation, formal verification, and static timing analysis cannot guarantee defect-free first silicon. There can be functional design errors, timing errors and design rule violations [38] which go undetected before first silicon. This is because verification is applied only to a model of an Integrated Circuit (IC) and not to the actual silicon, and adding more detail to this model makes the computational cost of these verification methods prohibitive. A structured debug technique is then necessary to locate and rectify any design error in first silicon in order to reduce time to market.

Debug of a chip requires analysis of both its internal and external behavior under known stimuli. Faulty behavior and possible design errors can be located by repeatedly performing this analysis and comparing the responses obtained with a set of expected responses. These results assist in fault localization, and the design cycle is repeated to rectify these defects before new silicon is manufactured. Diminishing feature sizes limit the observability of chips, making debugging more difficult, especially for timing violations. In order to ensure optimal performance of DSM chips, both their functional correctness and their timing behavior need to be tested. With shrinking feature sizes, delay faults in chips have emerged as a significant problem. Process parameter variations can result in distributed delay faults, in which minor delay faults on multiple gates in a given path accumulate and cause the path to fail its performance specifications [17]. Process parameter variations can be inter-die, i.e. those which influence the entire chip or a functional block such that each device on the chip or block shows uniform variations; these are caused by systematic effects like process gradients over the wafer. Variations which are not uniform over the entire die are called intra-die variations. The timing requirements of modern chips have introduced the need to test for and detect defects of the order of a few picoseconds.
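The response-comparison step of debug described above can be sketched in a few lines. The function name and the 8-cell bit-string responses below are hypothetical, not taken from any particular tool:

```python
def localize_mismatches(observed, golden):
    """Return scan-cell positions where the captured response differs
    from the golden (expected) response."""
    if len(observed) != len(golden):
        raise ValueError("response lengths differ")
    return [i for i, (o, g) in enumerate(zip(observed, golden)) if o != g]

# hypothetical 8-cell scan-out responses under one stimulus
failing_cells = localize_mismatches("10110010", "10100011")
```

Repeating this comparison over many stimuli and intersecting the logic cones feeding the mismatching cells is the essence of the fault-localization loop the text describes.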
DC functional tests such as scan tests can detect static faults like stuck-at faults, but cannot detect dynamic faults such as delay faults. Moreover, testing for delay defects using Automated Test Equipment (ATE) for GHz-range processors is very expensive, and most testers in test facilities still run at a few hundred MHz. These factors necessitate on-chip test infrastructure such as Built-In Self Test and Design for Testability, as well as on-chip timing analysis and delay fault testing circuitry, for the test and debug of modern VLSI chips. The use of on-chip testing circuitry also allows at-speed testing, which is imperative for accurate timing characterization and performance characterization of chips. In addition to the test challenges associated with DC and AC faults, DSM chips also suffer from transient errors such as Single Event Upsets (SEUs) caused by cosmic neutrons during normal operation of the chip. It is reported that up to 20 neutrons/cm^2/hr with energies greater than 10 MeV reach the Earth's surface, and the effect of random errors due to such hits is severe in sub-100nm technologies at supply voltages below 2.2V [23].

Figure 1. Delay Fault Types: a) fault-free rise, b)-d) slow-to-rise 1-3, e) fault-free fall, f)-h) slow-to-fall 1-3, i)-j) setup violations 1-2

Ever smaller dimensions bring with them an ever increasing probability of transient errors occurring within circuits, and these cannot be detected using regular test methods. The detection and debug of such errors has increased the importance of On-Line Testing, i.e. testing chips during their regular operation. Just as memories include redundant circuitry today, it may become necessary to protect datapath circuitry with error detection and even error correction. This paper therefore includes a method for on-line testing during circuit operation as well as a method for on-line masking of errors.

This paper is organized as follows. Section 2 reviews some background on delay faults. Section 3 elaborates on scan chains and Design for Testability (DFT) schemes. Section 4 provides an overview of Built-In Self Test (BIST) and presents some BIST schemes for multipliers. Section 5 discusses some existing strategies for on-chip delay measurement and characterization. Section 6 presents on-line testing for error detection, and Section 7 concludes the paper.
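A back-of-envelope calculation based on the quoted neutron flux figure; the die area and the per-hit upset probability below are hypothetical assumptions introduced only for illustration:

```python
def expected_hits_per_hour(flux_per_cm2_hr, die_area_cm2):
    """Expected number of high-energy neutron hits on a die per hour."""
    return flux_per_cm2_hr * die_area_cm2

# ~20 neutrons/cm^2/hr above 10 MeV (figure quoted in the text),
# on a hypothetical 1.5 cm^2 die:
hits = expected_hits_per_hour(20.0, 1.5)
# only a small, technology-dependent fraction of hits cause an upset;
# 1e-4 is a purely illustrative probability, not a measured value
upsets_per_hour = hits * 1e-4
```

Even with a small per-hit upset probability, a large deployed population of chips accumulates a non-trivial soft-error rate, which is what motivates on-line testing.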

2. Delay Faults
Delay faults are a category of faults which cause an otherwise defect-free (i.e. logically correct) chip to malfunction at the desired clock rate. Delay faults can be due to increased interconnect or path resistance, crosstalk-induced delay, excessive voltage drop on supply nets, substrate and thermal noise, resistive opens, and process variations. Process variations have a significant effect on the timing behavior of chips built in 0.13 um and lower technologies [3], [17]. As noted in Section 1, process variations can be inter-die, influencing an entire chip or functional block uniformly due to systematic effects such as process gradients over the wafer, or intra-die, varying over the die itself. Figure 1 shows some ways delay faults manifest themselves. Figure 1a is a fault-free rising transition on a path. In 1b the signal starts to rise at the same time as the fault-free signal, but by the time it reaches half the supply voltage (assumed to be the threshold required by a circuit to detect a rise or fall), the assertion edge of the clock has passed, and an incorrect value is captured into a latching element clocked by this clock. In 1c the transition itself takes place after the clock assertion edge and causes a wrong value to be latched. Figure 1d shows a case where the fault delays the signal so much that it misses the next assertion edge of the clock as well. Figures 1e-h show the corresponding cases for falling transitions. Figures 1i and 1j cause setup-time violations of latching elements, so that indeterminate values are latched. Each of the transitions in b-d and f-h requires a different kind of test [27]. Delay fault testing determines the correctness of the circuit at the specified speed: a circuit under test (CUT) that is functionally correct at one frequency may fail at a higher frequency, because higher propagation delays cause wrong values to be latched at the outputs. Delay fault testing detects such temporal defects in a CUT, and has gained significance in the face of the aggressive timing requirements of modern ICs.
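The Figure 1 cases for a rising transition can be summarized as a small classifier. The threshold-crossing abstraction (a signal "arrives" when it crosses half the supply voltage) follows the discussion above, while the function name and time units are illustrative:

```python
def classify_rise(crossing_time, clock_period, capture_edge):
    """Classify a rising transition by when it crosses the Vdd/2 threshold
    relative to the capture (assertion) edges of the clock."""
    if crossing_time <= capture_edge:
        return "fault-free"
    if crossing_time <= capture_edge + clock_period:
        return "slow-to-rise: misses first capture edge"
    return "slow-to-rise: misses second capture edge too"
```

The falling-transition cases (Figures 1e-h) are symmetric, and the setup-violation cases (1i, 1j) differ in that the crossing lands inside the setup window, so the latched value is indeterminate rather than simply wrong.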
However, issues pertaining to delay testing are quite different from those pertaining to functional test strategies such as stuck-at testing. Unlike stuck-at fault testing, delay testing is closely tied to the test application strategy, and knowledge of the test application methodology is a prerequisite for test generation [18]. Additionally, delay testing requires the generation of two patterns (one for initialization and a second for the required transition) in order to detect a fault. The fault models used for delay fault testing are also quite different from conventional fault models like stuck-at. Delay fault models can be categorized into five main types [19], [13], [21], [33], [12]:

1. Transition fault model
2. Gate delay fault model
3. Line delay fault model
4. Path delay fault model
5. Segment delay fault model
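A minimal sketch of the two-pattern idea, assuming a single lumped propagation delay for the launched transition; the function name and all numbers are illustrative:

```python
def two_pattern_test(v1, v2, prop_delay, clock_period):
    """Apply v1 to initialize the node, then v2 to launch a transition.
    The transition completes prop_delay after launch and is captured
    correctly only if prop_delay <= clock_period.
    Returns True when the test detects a delay fault."""
    launches_transition = (v1 != v2)      # no transition => nothing to time
    captured_in_time = prop_delay <= clock_period
    return launches_transition and not captured_in_time
```

The sketch also shows why test generation is harder than for stuck-at faults: the initialization value v1 must actually be justifiable in the circuit one cycle before v2, which couples pattern generation to the test application strategy.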


Figure 2. Scan Chain

The transition delay fault model [19] assumes that a delay fault affects only a single gate in the CUT; such faults are classified into slow-to-rise and slow-to-fall faults, which increase or decrease the nominal delay of the gate. In this model the extra delay is assumed to be large enough to affect performance regardless of the path through which it is propagated. The gate delay fault model [13] likewise assumes that the delay fault is lumped at one gate in the circuit. However, unlike the transition fault model, it does not assume that an increase in delay due to the fault will affect performance irrespective of the propagation path; instead it assumes that only long paths through the fault site will cause performance degradation. The limitations of this fault model are similar to those of the transition fault model, since the assumption that a single gate is affected by a delay fault is overly optimistic; the advantage is that the number of faults is linear in the number of gates in the circuit. The line delay fault model [21] is a variation of the gate delay fault model: it tests a rising/falling delay fault on a given line in the circuit, propagated through the longest sensitizable path through that line. This model can detect some distributed delay defects on the propagation paths, but since only one propagation path through each line is considered, it may fail to detect some defects. A path is defined as an ordered set of gates; under the path delay fault model [33], a circuit is considered faulty if the delay of any of its paths exceeds a specified limit, where the delay of a path is the sum of the delays of the gates and interconnections on that path. The path delay fault model is more appropriate for DSM technology designs, since it can model statistical process variations. Its disadvantage is that the number of paths whose delay needs to be measured can become exponential for real-size designs. The segment delay fault model is a way around this problem [12].
It assumes that a delay defect affects several gates in the local region of its occurrence, and that the segment delay fault is large enough to cause a delay fault on all paths that include the segment. The upper bound on the length of a segment is the number of gates in the longest path. Transition, gate and line delay fault models represent delays lumped at gates, whereas path and segment delay fault models represent delays distributed across the chip.

Figure 3. Vernier Delay Line

3. Scan Chains and Design for Testability

Design for Testability (DFT) is a technique to reduce the difficulty of testing by adding or modifying some hardware on chip. The scan DFT methodology [7] is a standard DFT practice in industry. In this technique the sequential storage elements (flip-flops/latches) are connected in a manner that allows two modes of operation. In normal mode, i.e. when the chip is performing its regular function, the storage elements take their stimulus from combinational logic and feed their responses back into combinational logic. In test mode the storage elements are reconfigured as one or more shift registers, and each such configuration is known as a scan chain (Figure 2). The stimulus vectors to be applied in test mode are shifted serially into the scan chain. The chip is then allowed to function in normal mode, and the responses for a test vector are captured in the storage elements. The captured response is shifted out and compared with a golden reference response in order to test the chip for functional correctness. The use of scan design has two penalties. First and foremost, there is an area overhead due to the added scan flops, with the added routing further increasing the area. There is also a performance overhead caused by the on-path multiplexors in the scan flops. In typical designs, both the area and performance overheads due to scan insertion are about 5 to 10%. As scan chains have no logic in the scan path, there is a chance of a race condition if the clock skew between adjacent scan flops is large, so care has to be taken in designing scan chains. Scan also has the disadvantage of greater power dissipation, as there is generally more switching during scan than in normal operation; thus it is common to use a slow clock for scan to reduce the average power dissipation. Partial scan, where only some of the flops are converted to scan flops, is also sometimes used in sequential circuits in place of full scan.

4. Built-In Self Test

Built-In Self Test (BIST) [22] refers to techniques and circuit configurations that enable a chip to test itself: test patterns are generated and test responses are analyzed on chip. Pseudorandom pattern generator logic has been shown to greatly reduce test data volume by sifting out the easily detectable faults [30].
Test application and test data compression techniques using embedded processors [15] or reusing on-chip processors [11], [31], [14] have also been suggested. BIST techniques have gained acceptance for testing complex digital designs. These techniques modify the hardware on the chip so that the chip has the capability to test itself, and offer several advantages over testing with automatic test equipment (ATE) [16]. First, the test circuitry is incorporated on chip and no external tester is required. Second, the test can be performed at the normal clock rate. Third, tests can be performed even after the chip has been incorporated in a system, enabling periodic testing. The two disadvantages of BIST are area overhead and performance penalty. Incorporating self-test capability requires additional hardware, which increases the silicon area and hence the cost of manufacturing the chip. The added hardware also increases the delays of normal circuit paths, decreasing the speed at which the circuit can be used during normal operation. For embedded fast multiplier cores, BIST architectures are an attractive solution [26]: they permit at-speed testing, provide very high fault coverage, and drive down the testing cost.
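A behavioral sketch of the LFSR-plus-MISR arrangement used in such BIST schemes, assuming a Fibonacci-style LFSR and treating the multiplier as a Python expression; the tap positions, widths and seed are illustrative, not those of any cited design:

```python
def lfsr_step(state, taps, width):
    """One step of a Fibonacci LFSR: shift left, feed back the XOR of tapped bits."""
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & ((1 << width) - 1)

def misr_step(signature, data, taps, width):
    """One MISR step: step the signature register, then XOR in the response word."""
    return lfsr_step(signature, taps, width) ^ (data & ((1 << width) - 1))

def bist_multiplier(n_patterns, width=8, taps=(7, 5, 4, 3), seed=1):
    """Drive successive LFSR states into a behavioral multiplier and compact
    the products into a 2*width-bit MISR signature."""
    a = seed
    signature = 0
    for _ in range(n_patterns):
        b = lfsr_step(a, taps, width)   # second operand: next LFSR state
        product = a * b                 # stand-in for the multiplier under test
        signature = misr_step(signature, product, taps, 2 * width)
        a = b
    return signature
```

At the end of the test the single signature word is compared against a precomputed golden signature, which is what makes the response analysis cheap enough to keep on chip.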

Various Design for Testability techniques have been proposed for multipliers, but each approach assumes a specific implementation of the multiplier cells. Some techniques provide designs for generalized implementations, but their test sets are meant to be stored externally, applied to the multiplier under test, and the output responses evaluated externally. The need for a testing scheme that works for all multiplier cell implementations without requiring high controllability and observability is evident. We evaluated the effectiveness of various BIST schemes for multipliers and implemented them on 8x8 and 16x16 Wallace as well as Dadda multipliers. Two different implementations were done, differing only in the kind of pattern generator: one used a constant-size counter [26] and the other used an LFSR of the size of the multiplier input. An 8-bit counter/LFSR was used to generate the inputs, and the product was driven into a MISR. The selection between the test patterns and the functional input patterns is done using a multiplexer.

5. On-Chip Delay Measurement Strategies

Several methods have been proposed for on-chip delay measurement and delay fault testing. In [8] a sampling circuit is proposed based on two clocks that are 180 degrees phase-shifted with respect to each other. However, the method requires ratioed capacitors and fails if there is skew between the transitions of the two clocks, which is to be expected given the existence of clock skew. Moreover, this sampling circuit can only detect delay faults in which the delayed transition on a path occurs after the sampling time, which limits its effectiveness. Franco and McCluskey [9] propose a DFT technique to detect delay faults using transient switching currents in CMOS inverters; however, this scheme has a low noise margin, which hinders its fault detection capability, and being a dynamic circuit it suffers from high power overhead due to switching. In [27] a DFT technique based on a capacitor voltage level has been proposed; the scheme requires determination of a threshold voltage and is too complex for practical use. Several methods have been proposed for on-chip delay measurement based on digitizing short intervals of time. These include a shift register/fast counter based Time-to-Digital Converter (TDC) [32], an oscillator-based TDC [28] and various CMOS tapped delay line configurations [6], [28], [20], [10], [29]. Analog methods based on voltage ramp generation have also been proposed [34], where the voltage on a capacitor is proportional to the time difference between two rising edges. Delay line configurations are beneficial [6] in that they require nothing more than a standard digital CMOS process, with the advantages of lower cost, lower power dissipation, higher integration level, higher noise margins and a large set of existing Computer Aided Design (CAD) tools. However, the minimum achievable resolution of a TDC based on a single delay line is limited by the minimum gate delay of the technology in which it is implemented. This can be overcome by using a balanced delay line called a Vernier Delay Line (VDL) [6], [10], [29], based on the Vernier principle. The Vernier principle derives from the Vernier caliper, a measurement tool with one fixed and one movable jaw: as the movable jaw slides, an indicator mark shows the distance between the jaws on a calibrated scale. The basic VDL scheme is shown in Figure 3 [6], [10], [29]. It consists of two delay buffer chains, with the delay of the individual buffers in the lower chain (t_s) greater than the delay of the individual buffers in the upper chain (t_f). The first-arriving signal is fed to the input x of the lower (slower) chain and the late-arriving signal is fed to the input y of the upper (faster) chain. As x and y propagate through their respective chains, the time difference between the two signals is reduced at every stage by the difference in delay of the individual buffers in the two chains. This is the resolution of the VDL, i.e.

t_res = t_s - t_f

Edge-triggered latching elements at every stage are clocked by the propagating x and latch the value of the propagating y. The stage/latch number n at which the late signal catches up with the early one indicates the range of the time difference between the x and y signals. This event is indicated by the presence of the first 1 among the flops in the delay line; all subsequent stages also latch in a 1. The range of the delay difference T between the two signals is then given as

(n - 1) * t_res < T <= n * t_res

In theory, any difference between the two signals can be measured by making the resolution as small as possible. In practice, however, the minimum resolution is limited by factors such as transistor mismatch, delay mismatch due to loading, and the length of the delay line; in deep-submicron designs, additional factors such as process variations, noise and crosstalk also affect the resolution. Additionally, the range of the VDL, i.e. the maximum time difference between two signals that can be measured, is limited to N * t_res, where N is the total number of stages in the VDL. Improvements have been proposed both for the resolution of delay lines [29] and for increasing the range of the VDL using Delay Locked Loops [6]. VDL schemes suffer from the overhead of reading out the delay values and from an inability to test paths for all possible transitions. A Modified Vernier Delay Line (MVDL) that can be used to characterize critical path delays was proposed in [5]. This scheme overcomes the overheads of the VDL by applying an efficient readout scheme and a scheme to handle all possible worst-case transitions. Existing VDL schemes can measure the delay between only two incoming pulses or rising transitions; in delay fault testing, however, the worst-case delays in paths could be due to other types of transitions. The MVDL handles all possible transitions on input and output by using flops with set-reset capability. Experimental results for delay measurement using the MVDL closely matched those obtained using a well-known commercial timing analysis tool [35]. Reading out the values stored in the delay line is a crucial task, and most proposed solutions for reading out the latched VDL values involve tremendous hardware overhead, such as a separate asynchronous readout architecture using registers [6]. We have resolved this issue in the MVDL by using a readout scheme with minimal hardware and pin overhead.

Figure 4. Block Diagram for Error Detecting Addition
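The VDL stage computation implied by the resolution and range discussion above can be sketched as follows; times are in arbitrary integer units (e.g. picoseconds) and the function name is ours:

```python
import math

def vdl_stage(delta_t, t_slow, t_fast, n_stages):
    """First stage whose latch captures a 1 for an initial edge separation
    delta_t, or None when delta_t exceeds the VDL range
    n_stages * (t_slow - t_fast). All times in the same integer units."""
    resolution = t_slow - t_fast     # per-stage reduction of the separation
    n = math.ceil(delta_t / resolution)
    return n if n <= n_stages else None
```

For example, with 100-unit slow buffers and 90-unit fast buffers (10-unit resolution), a 35-unit separation is resolved at stage 4, while a 95-unit separation overflows an 8-stage line, illustrating the range limitation that motivates the DLL-based extensions.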
Figure 5. 16-Bit Quadruple Time Redundancy Adder

6. On-Line Testing
The continual march to ever smaller dimensions brings with it an ever increasing probability of transient errors occurring within circuits. Just as memories include redundant circuitry today, it may become necessary to protect datapath circuitry with error detection and even error correction in the future. Described in this section is a method of error detection applicable to extremely fast multiplication and division circuits, followed by a more general error correction method applicable to many arithmetic circuits. The redundancy required to detect errors often imposes a delay penalty on the circuit it protects. In [36], a technique was described that provides error detection while minimizing the delay impact through the use of signed-digit arithmetic. The signed digits are represented using 1-out-of-3 code words, so the code word set represents the digit set {-1, 0, 1}. Both signed-magnitude and two's-complement operands can be converted into this signed-digit representation. 1-out-of-3 checkers are used to detect errors in the circuit during the computation. A block diagram is shown in Figure 4.

Error correction may be required in addition to error detection. However, one of the most straightforward methods of providing error correction within a circuit, Triple Modular Redundancy (TMR), requires a hardware redundancy overhead of more than 200%. Quadruple Time Redundancy (QTR) employs the same fault masking methodology as TMR but uses a combination of time redundancy and hardware redundancy to achieve this effect. QTR divides the functional portion of an arithmetic circuit into fourths. Three of these smaller portions are instantiated in a TMR configuration, and multiplexors are used to apply one fourth of the operands to the circuit in each iteration; after four iterations, the entire output is available. For 64-bit addition, QTR requires only 32% hardware overhead and a 30% delay increase over a non-redundant circuit, compared to a hardware overhead of 256% for a similarly sized TMR circuit [37]. The block diagram for a 16-bit QTR adder is shown in Figure 5.
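A behavioral sketch of QTR fault masking for a 16-bit adder in the style of Figure 5, assuming 4-bit slices, three replicated slice adders and a bitwise majority voter; the fault-injection hook is ours, added only to demonstrate masking:

```python
def majority(a, b, c):
    """Bitwise 2-out-of-3 majority vote."""
    return (a & b) | (a & c) | (b & c)

def qtr_add(x, y, width=16, chunk=4, fault_at=None):
    """Quadruple Time Redundancy addition: width//chunk iterations over
    chunk-bit operand slices, with three replicated slice adders and a voter.
    fault_at=(iteration, replica) corrupts one replica once, to show masking."""
    mask = (1 << chunk) - 1
    carry = 0
    result = 0
    for i in range(width // chunk):
        xs = (x >> (i * chunk)) & mask
        ys = (y >> (i * chunk)) & mask
        sums = []
        for replica in range(3):
            s = xs + ys + carry
            if fault_at == (i, replica):
                s ^= 1              # injected error in a single replica
            sums.append(s)
        voted = majority(*sums)     # masks any single-replica error
        result |= (voted & mask) << (i * chunk)
        carry = voted >> chunk      # carry register feeds the next iteration
    return result | (carry << width)
```

The voted carry flowing through the 1-bit register between iterations is what lets a single small adder slice serve the whole word width, which is where the hardware saving over TMR comes from.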

7. Conclusions and Future Work

In this paper we have presented an insight into existing test and debug practices in deep-submicron technologies. Techniques for on-chip sub-nanosecond capture of signals for delay fault testing were presented. Future research will focus on optimizing critical path selection for delay test and debug, and on test generation techniques for delay fault testing.

References
[1] K.-T. Cheng, S. Dey, M. Rodgers, and K. Roy. Test Challenges for Deep Sub-Micron Technologies. In Design Automation Conference, pages 142-149. ACM, June 2000.
[2] D. Chinnery and K. Keutzer. Closing the Gap Between ASIC and Custom: An ASIC Perspective. In Design Automation Conference, pages 637-642. ACM, June 2000.
[3] W. Dai and H. Ji. Timing Analysis Taking into Account Interconnect Process Variation. In International Workshop on Statistical Methodology, pages 51-53. IEEE, June 2001.
[4] R. Datta, J. A. Abraham, R. Montoye, W. Belluomini, C. McDowell, H. Ngo, J. Kuang, and K. Nowka. A Low Latency and Low Power 4-to-2 Carry Save Adder. To appear in International Symposium on Circuits and Systems.
[5] R. Datta, A. Sebastine, A. Raghunathan, and J. A. Abraham. On-Chip Delay Measurement for Silicon Debug. Submitted to Great Lakes Symposium on VLSI.
[6] P. Dudek, S. Szczepanski, and J. V. Hatfield. A High-Resolution CMOS Time-to-Digital Converter Utilizing a Vernier Delay Line. IEEE Journal of Solid-State Circuits, 35(2):240-247, February 2000.
[7] E. Eichelberger and T. Williams. A Logic Design Structure for LSI Testability. In Design Automation Conference, pages 462-468. ACM, June 1977.
[8] M. Favalli, P. Olivo, and M. Damiani. Novel Design for Testability Schemes for CMOS ICs. IEEE Journal of Solid-State Circuits, 25(5):1239-1246, October 1990.
[9] P. Franco and E. J. McCluskey. Delay Testing of Digital Circuits by Output Waveform Analysis. In International Test Conference, pages 798-807. IEEE, October 1991.
[10] M. Gorbics, J. Kelly, K. Roberts, and R. Sumner. A High-Resolution Multihit Time-to-Digital Converter Integrated Circuit. IEEE Transactions on Nuclear Science, 44(3):379-384, June 1997.
[11] S. Hellebrand, H.-J. Wunderlich, and A. Hertwig. Mixed-Mode BIST Using Embedded Processors. In International Test Conference, pages 195-204. IEEE, October 1996.
[12] K. Heragu, J. Patel, and V. Agrawal. Segment Delay Faults: A New Fault Model. In VLSI Test Symposium, pages 32-39. IEEE, May 1996.
[13] E. Hsieh, R. Rasmussen, L. Vidunas, and W. Davis. Delay Test Generation. In Design Automation Conference, pages 486-491. ACM, June 1977.
[14] S. Hwang and J. A. Abraham. Reuse of Addressable System Bus for SOC Testing. In ASIC/SOC Conference, pages 215-219. IEEE, September 2001.
[15] S. Hwang and J. A. Abraham. Test Data Compression and Test Time Reduction Using an Embedded Microprocessor. IEEE Transactions on VLSI Systems, 11(5):853-862, October 2003.
[16] N. Jha and S. Gupta. Testing of Digital Systems. Cambridge University Press, Cambridge, U.K., 2003.
[17] A. Krishnamachary and J. A. Abraham. Test Generation for Resistive Open Defects in CMOS. In Great Lakes Symposium on VLSI, pages 65-70. ACM, April 2002.
[18] A. Krstic and K. Cheng. Delay Fault Testing for VLSI Circuits. Kluwer Academic Publishers, Boston, Massachusetts, 1998.
[19] Y. Levendel and P. Menon. Transition Faults in Combinational Circuits: Input Transition Test Generation and Fault Simulation. In International Fault Tolerant Computing Symposium, pages 278-283. IEEE, July 1986.
[20] C. Ljuslin, J. Christiansen, A. Marchioro, and O. Klingsheim. An Integrated 16-channel CMOS Time to Digital Converter. IEEE Transactions on Nuclear Science, 41(4):1104-1108, August 1994.
[21] A. Majhi, J. Jacob, L. Patnaik, and V. Agrawal. Delay Fault Models and Coverage. In International Conference on VLSI Design, pages 364-369. IEEE, January 1998.
[22] E. J. McCluskey, S. Makar, S. Mourad, and K. Wagner. Probability Models for Pseudorandom Test Sequences. IEEE Transactions on Computer-Aided Design, 7(1):68-74, January 1988.
[23] M. Nicolaidis. Design for Soft-Error Robustness to Rescue Deep Submicron Scaling. In International Test Conference, page 1140. IEEE, October 1998.
[24] M. Orshansky. Increasing Circuit Performance through Statistical Design Techniques. In Closing the Gap Between ASIC and Custom, pages 323-344. Kluwer Academic Publishers, 2002.
[25] M. Orshansky and K. Keutzer. A General Probabilistic Framework for Worst Case Timing Analysis. In Design Automation Conference, pages 556-561. ACM, June 2002.
[26] A. Paschalis, D. Gizopoulos, N. Kranitis, M. Psarakis, and Y. Zorian. An Effective BIST Architecture for Fast Multiplier Cores. In Design Automation and Test in Europe Conference, pages 117-121. ACM.
[27] K. Raahemifar and M. Ahmadi. Design for Testability Techniques for Detecting Delay Faults in CMOS/BiCMOS Logic Families. IEEE Transactions on Circuits and Systems II, 47(11):1279-1290, November 2000.
[28] T. Rahkonen, J. Kostamovaara, and S. Saynajakangas. CMOS ASIC Devices for the Measurement of Short Time Intervals. In International Symposium on Circuits and Systems, pages 1593-1596. IEEE, June 1988.
[29] T. Rahkonen and J. T. Kostamovaara. The Use of CMOS Delay Lines for the Digitization of Short Time Intervals. IEEE Journal of Solid-State Circuits, 28(8):887-894, August 1993.
[30] J. Rajski, G. Mrugalski, and J. Tyszer. Comparative Study of CA-based PRPGs and LFSRs with Phase Shifters. In VLSI Test Symposium, pages 236-245. IEEE, April 1999.
[31] J. Rajski and J. Tyszer. Arithmetic Built-In Self-Test for Embedded Systems. Prentice-Hall, Englewood Cliffs, NJ, 1998.
[32] O. Sasaki, T. Taniguchi, T. Ohska, H. Mori, T. Nonaka, K. Kaminishi, A. Tsukuda, H. Nishimura, M. Takeda, and Y. Kawakami. 1.2 GHz GaAs Shift Register IC for Dead-Time-Less TDC Application. IEEE Transactions on Nuclear Science, 36(1):512-516, February 1989.
[33] G. Smith. Model for Delay Faults Based on Paths. In International Test Conference, pages 342-349. IEEE, November 1985.
[34] A. Stevens, R. P. Vanberg, J. V. D. Spiegel, and H. H. Williams. A Time-to-Voltage Converter and Analog Memory for Colliding Beam Detectors. IEEE Journal of Solid-State Circuits, 24(6):1748-1752, December 1989.
[35] Synopsys Inc. PrimeTime Reference, Version 2000.11, November 2000.
[36] W. J. Townsend, J. A. Abraham, and P. K. Lala. On-Line Error Detecting Constant Delay Adder. In 9th IEEE International On-Line Testing Symposium, pages 17-22, Kos Island, Greece, July 2003.
[37] W. J. Townsend, J. A. Abraham, and E. E. Swartzlander, Jr. Quadruple Time Redundancy Adders. In 18th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems, pages 250-256, Cambridge, MA, November 2003.
[38] B. Vermeulen and S. K. Goel. Design for Debug: Catching Design Errors in Digital Chips. IEEE Design and Test of Computers, 19(3):37-45, May-June 2002.
