ABSTRACT
Physically aware test adds a dimension of quality to production test. Care must be taken to select the
most critical structures with the least amount of effort and be able to translate those into patterns that
best complement, stuck-at (SA) and transition patterns. This paper presents an approach to effectively
utilize existing layout data to identify and extract critical geometries as path pairs to be provided to
TetraMAX ATPG for pattern generation. This provides the foundation to be merged with traditional
stuck-at and transition patterns, creating a high-quality test with attention paid to the most critical
areas of layout.
Table of Contents
1 INTRODUCTION
EXPERIMENTS
RESULTS
ACKNOWLEDGEMENTS
APPENDIX
REFERENCES
Table of Figures
Figure 1: Hard Short from Low-Impedance Bridge [6]
Figure 2: Resistive Short from Medium-Impedance Bridge [6]
Figure 3: Bridge Fault from High-Impedance Connection [6]
Figure 4: Problematic Topologies
Figure 5: Bridge Fault Victim/Aggressor Node Values
Figure 6: Dynamic Bridge Fault Detection Waveforms for Launch on System Clock
Figure 7: LVS-based Path Pair Extraction Flow
Figure 8: Sample Command File for Capacitive Extraction
Figure 9: Fault-Grade Flow
Figure 10: Process Cost Standard vs. Fault-Graded Flow
Figure 11: N-Detect Coverage Impact
Figure 12: Potential Bridge Coverage Using N-Detect as Basis
Figure 13: Static N-Detect Pattern Impact
Table of Tables
Table 1: Standard Flow Results
Table 2: Fault-Graded Flow
SNUG 2009
Creating an Effective Physically Aware Test: Data Mining and Test Volume Control
1 Introduction
High-density nanometer designs result in fragile, tightly packed wires susceptible to bridging defects [1]. Bridging faults have been around as long as metal has been deposited on silicon wafers.
Chemical etch with photomask lithography has been commonly used to create signal paths within
and between layers of insulating materials. Vias provide connections between layers and represent
issues in and of themselves. The proximity and density of metal affect the etch rate and can result
in a non-uniform etch across the surface of the die under the same conditions of temperature, agitation, and acid concentration. The result is incompletely formed wires or wires only partially connected to adjacent metal areas. Residue may be left that can form resistive paths between wires,
anywhere from a few ohms for a hard short, to kilo-ohms for a thinner bridge [2]. Defects present
during the litho/etch process can cause shaded areas that can also result in under-etch between signal paths. Finally, processes intended to level each layer between process steps can cause metal
smears from dishing of the surface caused by irregular distribution of metal in lower levels.
The result is a risk of residual bridges between signal paths and other metal structures, including
signal paths, power, ground, and filler metals. Some of these bridges can be detected with traditional stuck-at and transition tests. However, without the proper bias between affected wires, some
bridge defects can be masked and passed on as quality material. (Still others may fall in areas
where there is no path to an observation point, but this is a topic for another discussion.)
Capacitance calculations have been used in the past to highlight critically spaced structures in a design [3]. The assumption is that two metal structures will have significant capacitance if they are very close together or run near each other for a long distance. The traditional parallel-plate capacitance model can be used, with the plate area taken as the effective facing area of the adjacent structures. The model holds well for long parallel wires, but less well for closely spaced vias or for wires approaching each other end-to-end or corner-to-corner, to name a few structures. The tighter the wires are packed, the higher the chance that a variety of structures will come into critical proximity. It is therefore important to ensure that test patterns provide sufficient coverage of the most critically spaced areas of the design.
There are also many ways to approach this problem, so the method discussed here may not suit all
designs or product cost models. Much has been said regarding fortuitous coverage of N-detect [4],
as an example, for both bridges and short-delay defects. It is cheap and relatively easy to run.
However, it leaves coverage of specific areas to chance. Random fill, by its nature, is designed to
cover large sections of circuitry statistically, with little or no knowledge of its function or critical
design parameters. Fortuitous coverage can change from one ATPG run to the next, or as circuits
evolve, so coverage has to be re-assessed each time a pattern set is generated.
Test cost drives many test decisions; this is as true in the structural test world as it was when functional test dominated the landscape. Cost factors include fault site extraction effort, ATPG processing time, pattern volume, memory usage, and so forth. To get the most out of targeted coverage
without driving test costs through the roof, a balance must be found between a fully targeted approach and a fully random one. For instance, it is not uncommon to generate many more potential bridge fault sites than stuck-at faults, thereby exceeding the ATE memory capacity needed to hold the patterns that detect them. Another cost driver is duplicating or repeating design steps. The idea here is to utilize data already available from layout and tap into existing process steps to extract the data needed for ATPG. Pattern volume equates to ATE run time, so overall pattern count needs to be kept to a minimum. This can be done in part through fault grading, but also by selecting the proper order of bridge, stuck-at, and transition test generation. One might assume that topping off random patterns with targeted ones would give the same coverage and pattern size as topping off targeted patterns with random ones; as the results will show, it does not, and making that assumption can leave coverage on the table.
In the remainder of this paper, Section 2 will discuss the nature of bridge faults and bridge detection techniques. Section 3 will describe the process of parameter-based fault site extraction from
the physical database, sorting results and porting to TetraMAX ATPG. Section 4 will describe the
experiments used to assess tool and process effectiveness under different use conditions utilizing
industry test case circuits. Section 5 shows experimental results from pattern generation comparing
coverage, pattern size, and run time for different scenarios. Section 6 will summarize the conclusions and recommendations for applying the process.
2.2 Bridges between two dynamic signals
This population involves two dynamic signals, such as adjacent lines of a data bus. It is similar to an SA fault except that the opposing potential is changing, so a simple SA model is insufficient.
Both line potentials have to be accounted for to set up the Victim/Aggressor relationship properly. If the signals are randomly pulling against each other, there may be a chance of detecting an
error. On the other hand, if the signals are correlated in some way, there is a chance that the two
signals are in phase and neither observation point will detect an error. Bridge detection requires
the two signals to be out of phase during capture; whether one is static and the other dynamic, or
both dynamic, they have to work against each other to produce a detectable time or amplitude
shift.
2.3 Bridges to fixed potentials
A signal line coupled to a fixed potential is detectable with traditional stuck-at test and has the
same requirements for controllability and observability as other SA faults. These bridges may
couple a net with a power/ground plane, a voltage reference, or other fixed potential structure.
No special methodology is required to detect them. However, for known problematic topologies,
it makes sense to ensure adequate coverage by specifying the at-risk net during pattern generation or by verifying coverage by a random pattern set after generation.
2.4 Bridge resistance range
Bridges will occur in a range of resistances. The higher the resistance of the bridge, the less coupling it provides between structures. They can range from hard shorts, as shown in Figure 1, to a
mere wisp of a dendrite (Figure 3). The stronger the coupling between signal lines, the higher the
chance of signal corruption. In long parallel wires, there is also the possibility of AC coupling
due to mutual inductance and metal-to-metal capacitance. While it is theoretically possible to excite two AC-coupled lines in a way that causes a failure, this is beyond the scope of this paper and of the current state of ATPG software, though the capability may become practical as the methodology matures. For now, those types of design-related mechanisms need to be eliminated through good design practices. Keep in mind that bridges and particulate contamination also pose a risk of latent failures, which are expensive to fix in terms of
time, engineering resources, and customer satisfaction. Reasonable effort to detect and screen
these should be applied, but recognize that there has never been a perfect piece of silicon built in
the history of the industry, so you must weigh the expense and effort of ultra-sensitive detection
against the benefits. Again, historical quality and field failure data are the best indicators of
where to draw the line.
Less defined bridges have greater impedance between nets than hard shorts but still result in signal interaction. These do not act like stuck-at faults, so -- even if one line is a power bus -- the driver may be able to pull against it, appearing to be good. Dynamic test is the best way to detect these because they impede transition rates by adding extra load to the Victim line.
[Photo: 90 nm SRAM process monitor]
Even the lightest coupling can affect high-speed circuits. These bridges add minimal load but have a measurable effect on transition rates. Timing-aware (TA) transition test, which incorporates static timing analysis data to select least-slack paths, offers the highest sensitivity for detecting these faults.
2.5 At-risk topologies
The short version of risk is that any two metal structures in close proximity represent a risk of incomplete separation during etch. For example, metal lines that run in parallel for long distances, tightly spaced lines, and features that approach each other at sharp angles or end-to-end have been shown to be problematic for etch. Similarly, vias can present problems in
close proximity to each other and other metal structures. The structures shown in Figure 4 are
representative of problematic topologies in general. Your specific high-risk structures may be
unique. Historical failure analysis data is the best way to determine which structures are most
detrimental to product quality.
Side-to-side proximity has a greater likelihood of bridging for closely spaced lines and long parallel runs. Corner-to-corner proximity creates localized high density between the outsides of the corners; the closer the corners, the higher the risk of incomplete separation.
2.6 Identification techniques
Electrical identification is a common method that has been used for many generations of technology. It involves calculating the capacitance between metal structures using a parallel-plate capacitance model. It is effective at finding critically spaced lines and is sensitive to the same spatial parameters as etch. For instance, capacitance increases as two lines get closer and as the plates get larger, as in C = KA/D, where A is the plate area, D is the distance between plates, and K is a function of the dielectric. For relative indications, K can be treated as a convenient constant, with appropriate thresholds set for fault-site tagging and ordering.
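As a relative indicator, the C = KA/D ranking can be sketched in a few lines. The following Python is illustrative only: the pair names, facing areas, and spacings are hypothetical, and K is simply set to 1 since only the ordering matters.

```python
# Relative C = K*A/D scoring for candidate net pairs.
# Hypothetical data: pair names, facing areas, and spacings are invented;
# K is set to 1.0 because only the relative ordering matters here.

def cap_score(area_um2, distance_um, k=1.0):
    """Parallel-plate estimate; returns a unitless relative score."""
    return k * area_um2 / distance_um

pairs = [
    ("netA/netB", 120.0, 0.14),  # long parallel run, tight spacing
    ("netC/netD", 2.0, 0.12),    # corner-to-corner: tiny facing area
    ("netE/netF", 80.0, 0.40),   # parallel run, relaxed spacing
]

# Rank pairs by score; a higher score means more critical spacing.
ranked = sorted(pairs, key=lambda p: cap_score(p[1], p[2]), reverse=True)
for name, area, dist in ranked:
    print(f"{name}: score = {cap_score(area, dist):.1f}")
```

Note that the corner-to-corner pair scores lowest even though it is physically risky; its facing area is tiny, which is exactly the weakness of the pure capacitance indicator for such structures.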
Physical extraction goes right to the physical attributes of the device. For structures such as corner-to-corner and end-to-end, the effective plate size is very small, resulting in relatively small
capacitance calculations which, in turn, reduce the probability of detection when in the presence
of parallel structures. Defining the specific geometries of interest not only overcomes this limitation, but also provides a high level of flexibility to define and target the geometries of most interest to your particular situation.
2.7 New fault models
Detection of additional defects is enabled by the creation of new fault models at the ATPG level.
Any fault model should be developed and supported by the software in response to new defect
mechanisms, based on silicon testing experiments that show measurable impact on Defective
Parts Per Million (DPPM). Experiments should be thorough and designed to produce results using the standard methods. New fault models are inevitably more complex and will increase testing costs and tool runtimes significantly. These costs should be justified quantitatively, not just
subjectively.
In the quest for lower DPPM, new methods should not significantly reduce yield. For example, greater-than-at-speed testing and high-switching-activity logic built-in self-test (BIST) patterns may cause otherwise good parts to fail unnecessarily.
Even if tests targeting existing fault models are effective at detecting new defect mechanisms, diagnostics may still need to model these new defect types, since the isolation of a defect typically
requires more understanding of its behavior than simply detecting its existence.
TetraMAX (static) bridging faults are a comprehensive model for low-resistance metal shorts.
Dynamic bridging faults complement that coverage for high-resistance metal shorts. A description of the TetraMAX ATPG strength-based bridging fault modeling capabilities is available in
the Synopsys documentation [5].
The pattern generation process can be given additional soft constraints to optimize the drive strengths after the normal bridging fault detection requirements are met.
High-resistance bridges between two nets may not manifest themselves in a static test. For instance, if the Aggressor and Victim nodes are strong enough to hold their states individually
against the pull of the other through the bridge, then -- from a static point of view -- they both
have valid logic levels and are both good nodes. However, similar to resistive opens in signal
traces, they may induce a sufficiently long time delay to a dynamic signal to cause data to fail
minimum set-up time on the observation flop. The process of detecting dynamic bridges, or
bridges using a dynamic test, is a mixture of the bridge setup used in static tests and transition pattern generation. Figure 5 shows two output nodes tied together through a resistive short. If we simplify the Aggressor driver to a pull-up resistor in series with the bridge resistor, you can see that the circuit collapses into a simple pull-up arrangement, which will speed up one transition edge on the Victim node and slow the opposite edge.
[Figure 5: Bridge Fault Victim/Aggressor Node Values -- Aggressor and Victim outputs tied through a resistive short, shown for Aggressor = 1 and Aggressor = 0.]
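The static escape described above can be illustrated with a simple voltage divider. The sketch below uses hypothetical driver and threshold values, and it lumps the Aggressor driver's own on-resistance in with the bridge for simplicity.

```python
# Why a high-resistance bridge can escape static test: a voltage-divider
# sketch. The Victim drives 0 while the Aggressor rail pulls it up through
# the bridge. All values are hypothetical illustration numbers.

VDD = 1.0            # supply voltage (V)
R_DRV = 200.0        # Victim pull-down driver on-resistance (ohms)
VIL = 0.3 * VDD      # receiving gate's input-low threshold (V)

def victim_level(r_bridge):
    """Static Victim node voltage from the divider VDD -> r_bridge -> R_DRV -> GND."""
    return VDD * R_DRV / (R_DRV + r_bridge)

for r_bridge in (10.0, 500.0, 5000.0):
    v = victim_level(r_bridge)
    verdict = ("corrupt level, static test can catch it" if v > VIL
               else "valid logic 0, escapes static test (delay only)")
    print(f"R_bridge = {r_bridge:6.0f} ohm -> Victim at {v:.2f} V: {verdict}")
```

Only the low-resistance case corrupts the static logic level; the higher-resistance bridges leave a valid 0 on the Victim and show up only as added delay, which is why a dynamic test is required.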
[Waveform figure: System Clock, Scan Enable, Aggressor, and Victim across the launch and capture cycles, showing the bridge-induced delay on the Victim transition.]
Figure 6: Dynamic Bridge Fault Detection Waveforms for Launch on System Clock
In addition to the static and dynamic bridging fault model, TetraMAX path delay faults and failure diagnostics provide flexible and powerful features for characterizing and analyzing advanced
defect mechanisms.
[Figure 7: LVS-based Path Pair Extraction Flow -- the source netlist (Verilog and SPICE) and the layout netlist (GDSII) are compiled into the SVDB database; a user query macro extracts net pairs, which feed ATPG to produce bridge fault patterns; the physical and ATPG stages are tied together by composite optimization.]
likely reflect common yield issues on process monitors. If you lack specific failure analysis
guidance on your own products, it is the next-best source of information.
The query macro is created by the user. It is used to parse the SVDB database, in this example,
for net pairs. You will find that setting appropriate thresholds on the macros will help to keep
the data volume under control even after trimming the GDS. The general rule of thumb is to extract about the same number of bridge faults as you have stuck-at faults. [5]
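As an illustration of the thresholding and trimming described above, the Python sketch below filters a list of net pairs by a coupling threshold and caps the result near the stuck-at fault count. The tuples, values, and fault count are hypothetical; a real flow would parse the report emitted by the extraction tool.

```python
# Trimming a net-pair list: keep pairs at or above a coupling threshold,
# then cap the list near the stuck-at fault count (the rule of thumb above).
# All data here is an invented stand-in for a real coupling report.

def select_pairs(pairs, cap_threshold_ff, max_pairs):
    """pairs: (netA, netB, coupling_fF) tuples; returns the strongest
    couplings that pass the threshold, capped at max_pairs."""
    kept = [p for p in pairs if p[2] >= cap_threshold_ff]
    kept.sort(key=lambda p: p[2], reverse=True)  # strongest coupling first
    return kept[:max_pairs]

raw = [("n1", "n2", 4.2), ("n3", "n4", 0.3), ("n5", "n6", 1.8), ("n7", "n8", 0.9)]
num_stuck_at_faults = 3  # stand-in for the design's SA fault count
print(select_pairs(raw, cap_threshold_ff=0.5, max_pairs=num_stuck_at_faults))
```

The threshold does the coarse cut; the cap keeps the bridge fault list comparable in size to the stuck-at list even when the threshold alone would pass too many pairs.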
3.2 Capacitance calculation process
Synopsys offers a convenient way to extract fault sites based on capacitance modeling [3]. The StarXtract command starts a Star-RCXT process and is applied to the Milkyway database (other database formats have similar flows). Unless you are running a small number of fault sites, the recommendation is to run in batch mode. An example command file taken from the Synopsys Bridge Fault documentation [5] is shown in Figure 8.
BLOCK: <design name>
MILKYWAY_DATABASE: <milkyway db>
TCAD_GRD_FILE: <nxtgrd file>
MAPPING_FILE: <mapping file>
EXTRACTION: C
COUPLE_TO_GROUND: NO
COUPLING_MULTIPLIER: 1
COUPLING_REPORT_FILE: <output coupling capacitance report file>
COUPLING_ABS_THRESHOLD: 1e-15
COUPLING_REL_THRESHOLD: 0.1
COUPLING_REPORT_NUMBER: <number of nets to include in report>
Figure 8: Sample Command File for Capacitive Extraction
4 Experiments
This section describes the experiments run on TetraMAX to assess the cost/benefit of bridge detection over standard scan tests.
4.1 Objective
The objective of the experiments is to determine the most efficient mode to generate and combine patterns incorporating normal static, static bridging, normal transition, and transition bridging runs. The goal is to eliminate excessive overhead in terms of processing time and pattern
count.
expected coverage based on experience, meaning that -- historically -- dynamic test has lower coverage than static. All bridge patterns were generated first: targeted test generation is typically less efficient, and the bridge patterns it produces provide good opportunistic coverage of standard static and dynamic faults. By grading the patterns at each step, the coverage of previous patterns is maximized and additional patterns are kept to a minimum.
In this flow, dynamic bridging was run first to generate an exhaustive pattern set that provides
maximum dynamic bridge coverage. These patterns were saved in binary format and read into
TetraMAX using the set_patterns -external dynamic_bridge.binary
command.
The next fault model run was static bridging. Patterns from dynamic bridging were fault simulated using the run_fault_sim command against the bridging fault set. This process detected static bridges covered by the dynamic bridge patterns. All additional faults detected were
deleted from the total static bridge fault list. Coverage statistics were obtained using report_summaries patterns faults memory_usage cpu_usage. Top-off patterns were then created to maximize coverage for the bridge fault set. These top-off patterns
were appended to the fault-simulated patterns to form a new pattern set using the
set_patterns -external dynamic_bridge.bin append command.
The cumulative pattern set was then loaded externally, targeting a new fault model (a two-clock standard transition fault model) to perform fault simulation using the commands described previously. The undetected faults were targeted for standard top-up ATPG, which
generates additional patterns to maximize standard dynamic coverage. These patterns were also
appended to the cumulative fault-simulated patterns.
Finally, the cumulative pattern set was fault graded for the static fault model to eliminate detected static faults from the list. We noticed that most of the static faults were detected by fault grading the existing pattern set obtained so far, with only a few top-up stuck-at patterns required to achieve maximum overall coverage. Figure 9 shows the flow used in this process.
Figure 9: Fault-Grade Flow. The flow proceeds as follows:
1. Generate patterns using the dynamic bridging fault model => (A patterns)
2. Fault grade A patterns for static bridge faults and top up => (B patterns)
3. Append A and B patterns => (C patterns)
4. Fault grade C patterns for standard dynamic faults and top up => (D patterns)
5. Append C and D patterns => (E patterns)
6. Fault grade E patterns for the standard static fault model and generate top-up => (F patterns)
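The bookkeeping behind this flow can be sketched by modeling each pattern as the set of fault IDs it detects. The Python below is a toy stand-in, not real ATPG: the fault IDs and detection sets are invented, and a single appended set stands in for a real top-up run.

```python
# Bookkeeping sketch of the Figure 9 flow. Each "pattern" is modeled as the
# set of fault IDs it detects; all IDs and detection sets are toy data.

def top_up(patterns, targets):
    """Fault grade 'patterns' against 'targets', then append one top-up
    'pattern' covering whatever remains undetected."""
    detected = set().union(*patterns) if patterns else set()
    remaining = targets - detected
    return patterns + ([remaining] if remaining else []), remaining

# A patterns: output of the dynamic bridging ATPG run (toy data).
A = [{"db1", "sb1", "tr1"}, {"db2", "sa1"}]

C, _ = top_up(A, {"sb1", "sb2"})      # grade for static bridges, append B
E, _ = top_up(C, {"tr1", "tr2"})      # grade for standard dynamic, append D
F, left = top_up(E, {"sa1", "sa2"})   # grade for standard static, append top-up

# Each stage adds only the patterns needed for still-undetected faults.
print([len(s) for s in (A, C, E, F)])
```

Because every stage grades the cumulative set first, faults already covered opportunistically cost no new patterns; only the leftovers at each stage grow the set.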
5 Results
Three data groups were collected to evaluate the effectiveness and overhead of bridge fault tests. The first is the standard flow, which treats each fault type (standard static (SA), standard dynamic (TR), static bridging, and dynamic bridging) as an individual run. This established the baseline data for comparison to the fault-graded flow (FGF), which merges each pattern set generated previously into a comprehensive pattern group while maximizing coverage at each step.
Table 1 shows standard flow baseline numbers:

          Dynamic Bridge                Static Bridge
          Cov      Patt    CPU          Cov      Patt          CPU
  Ckt1    93.74%   312     85.27        94.93%   212           53.51
  Ckt2    83.41%   755     290.5        86.29%   423           99.18
  Ckt3    90.58%   569     112.64       91.98%   365           66.27

          Standard Dynamic              Standard Static
          Cov      Patt    CPU          Cov      Patt (total)  CPU
  Ckt1    96.37%   577     52.28        98.92%   234 (1335)    12.58
  Ckt2    83.01%   1399    789.25       89.82%   448 (3025)    28.79
  Ckt3    92.6%    857     99.68        96.73%   408 (2199)    25.52

Table 1: Standard Flow Results
This is what you could expect to have for processing time, pattern count, and coverage for each fault
type processed separately.
Next, the same tests were run in this order: dynamic bridging, static bridging, standard dynamic, and
standard static. In each step, the patterns from the previous pass were used to fault grade the current
pass. Then the patterns were topped off to maximize coverage for the current pass. Table 2 reflects the
topped-off coverage and the number of patterns used for top-up:
          Dynamic Bridge                Static Bridge
          Cov      Patt    CPU          Cov      Patt          CPU
  Ckt1    93.74%   312     86.74        94.95%   38            99.49
  Ckt2    83.41%   755     282.38       86.32%   59            224.72
  Ckt3    90.58%   569     111.46       92.00%   46            116.8

          Standard Dynamic              Standard Static
          Cov      Patt    CPU          Cov      Patt (total)  CPU
  Ckt1    96.44%   549     238.09       98.94%   28 (927)      238.76
  Ckt2    83.12%   1446    1519.34      89.92%   49 (2309)     384.25
  Ckt3    92.76%   877     179.28       96.75%   105 (1597)    56.89

Table 2: Fault-Graded Flow
Figure 10 compares results for the two test flows. In each case, the trend is for pattern count to be
much lower for the FGF than for the standard flow. On average, the total savings amount to 30%.
However, the cost associated with fewer patterns is greater processing time, which goes up about 30%
on average. In most cases, the temporary cost of processing time should be more than compensated for
by the improvement in test quality. In addition, comparing the raw coverage numbers for each of the
test passes shows a slight improvement for each in the FGF. Since the increase is consistent across all
test circuits, it is probably not coincidental but a function of the algorithm used in the FGF.
[Figure 10: Process Cost Standard vs. Fault-Graded Flow -- pattern count and CPU time overhead for the standard flow vs. the FGF across ckt1, ckt2, and ckt3.]
[Figure 11: N-Detect Coverage Impact -- coverage (%) across the test circuits.]
What Figure 11 does not show is the bridge coverage left on the table if only the N-Detect patterns
were used. Figure 12 shows the additional coverage if the N-Detect patterns were topped off with
bridge patterns.
[Figure 12: Potential Bridge Coverage Using N-Detect as Basis -- additional static bridge coverage (%) for ckt1, ckt2, and ckt3 at N = 1, 3, 5, 8, and 10.]
[Figure 13: Static N-Detect Pattern Impact.]
6 Conclusions
The process of fault site extraction can be selected to best suit the needs of the design environment. Two methods were shown in this paper that can be used on commonly available databases. Alternatively, manual entries can be made to the net pair list for known or suspected at-risk sites in the layout.
These can be determined from inspection of the final layout or from experience on previous designs
within the same process family. Different processes may be sensitive to different types of geometries,
so defect data from the target fab and process is important. It should be noted that, while wire-to-wire
capacitance is an easy indicator to understand and implement, it becomes less effective as geometries
shrink and wire densities increase.
The flow used for pattern generation can also influence the cost effectiveness of physically aware tests.
The data showed that the order used to generate patterns can affect the final size of the pattern set with
similar, even slightly better, coverage results. Since time is money when it comes to test, every pattern
represents additional profit or loss for the company. In the examples shown, the most economical flow
was the FGF. This also affords the maximum targeted bridge coverage for the combined pattern set.
Ordering fault sites by severity is also important if pattern size is still a limiting factor. The physical extraction process described in this report is currently being modified to assign a rank to each site as an indication of how badly the design rules were violated or, looking at it from the other direction, how much margin each structure has over the minimum design rules. The rank value is then used as a cutoff point for candidates being targeted by ATPG. Keep in mind that, depending on how tight the parametric limits are set, millions of fault sites could be reported. Correctly setting the limits is the first step to controlling the volume of fault sites, but the rank value also affords some forgiveness if the limits are set too tightly: the absolute severity of each fault becomes insignificant, but the order remains intact. Picking the top X% of sites, where X represents about as many faults as standard SA, still provides additional cushion for process variations or dips in process control.
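The rank-and-cut selection above can be sketched as follows. The site names, spacings, and margin metric are hypothetical; the point is only that sites are ordered by margin over the minimum rule and trimmed to a pattern budget.

```python
# Rank-and-cut selection of fault sites. Each site is scored by its margin
# over the minimum design rule (least margin = highest risk), and the list
# is trimmed to a budget. Names, spacings, and the rule value are invented.

def trim_by_rank(sites, budget):
    """sites: (name, spacing_um, min_rule_um) tuples; returns the 'budget'
    site names with the least margin over the design rule."""
    ranked = sorted(sites, key=lambda s: s[1] - s[2])  # least margin first
    return [name for name, _spacing, _rule in ranked[:budget]]

sites = [
    ("pairA", 0.140, 0.130),  # 0.010 um of margin
    ("pairB", 0.200, 0.130),  # comfortable margin
    ("pairC", 0.132, 0.130),  # nearly at the rule: highest risk
    ("pairD", 0.160, 0.130),
]
print(trim_by_rank(sites, budget=2))
```

Because only the ordering matters, the budget can be tuned after extraction without re-running the physical query, which is what gives the forgiveness described above when the limits were set too tightly.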
ATPG processing time can be a limiting factor as well. The advantage in pattern count and the increase in coverage of this process were offset somewhat by the cost of additional processing time for fault simulation. It is wise to consider how much extra effort to put into bridge faults. If failure analysis data does not justify the need, be careful not to do it just because you can. At a minimum, resist the impulse to include bridges until the design and layout are firm.
The data also clearly showed that N-Detect should not be used just for bridge coverage. If it is incorporated for other reasons, such as small-delay defect detection, it can be fault graded and topped off, as
can other patterns, but the additional patterns generated warrant careful consideration before making
the decision to use it. (Other, more effective, methods exist for creating small delay defect patterns
[7]).
The actual additional quality of physically aware patterns can only be assessed on silicon. At this time, data is scarce because few companies have volume experience with fine-geometry designs. The true measure of effectiveness is the number of unique defects caught with bridge patterns that are undetectable by traditional transition or stuck-at patterns alone. It stands to reason that some physical awareness is better than none, but the cost of the additional coverage is difficult to assess without actual product characterization.
Additional experiments are planned to assess the effectiveness of dynamic bridge tests utilizing slack
data for ultra-sensitive detection. A similar technique is used for Timing-Aware fault detection for
small delay defects (SDD). We will also perform a quantitative comparison of capacitive and layout-based bridge pair extraction techniques to determine if one or the other can be used alone with high
confidence that all risk sites of interest are captured for pattern generation. Finally, as mentioned previously, product test data will be collected and analyzed to determine the unique contribution of bridge
fault testing to product quality.
7 Acknowledgements
We would like to acknowledge the contributions of Sandeep Gandla, who spearheaded the layout-based parameter extraction process as a co-op at AMD. We also acknowledge technical contributions from Paul Todaro and Amy Mitby, of Synopsys, on the theory and application of bridge detection techniques. Finally, we are grateful for the expert guidance and contributions of Lori Schramm of Synopsys, who helped us work through the process of analyzing, interpreting, and presenting the data and ensuring the most up-to-date tool performance results.
Appendix
TetraMAX ATPG script example for the baseline flow:
References
[1] Stapper, C.H., "Modeling of Defects in Integrated Circuit Photolithographic Patterns," IBM J. Res. Develop., Vol. 28, No. 4, July 1984.
[2] Sar-Dessai, et al., "Resistive Bridge Fault Modeling, Simulation and Test Generation," Proceedings of the International Test Conference, 1999, pp. 596-605.
[3] Gkatziani, M., Kapur, R., et al., "Accurately Determining Bridging Defects from Layout," Proceedings of DDECS 2007, ISBN 1-4244-1162-9.
[4] Enamul, M., Venkataraman, S., et al., "Evaluation of the Quality of N-Detect Scan ATPG Patterns on a Processor," Proceedings of the International Test Conference, 2004, pp. 669-678.
[5] TetraMAX ATPG User Guide, Version B-2008.09-SP2, Chapter 17, pp. 17.1-17.20.
[6] Garteiz, G., "Failure Analysis of Functional Failures in a Designed-for-FA SRAM," Proceedings of the 30th International Symposium for Testing and Failure Analysis, Nov. 2004.
[7] Mattiuzzo, R., Graniello, S. (STMicroelectronics); Talluto, S., Conte, A., Cron, A. (Synopsys, Inc.), SNUG San Jose, 2008.