
DFT Interview Questions and Answers

1. Re: handling reset during transition fault pattern generation
While doing transition testing, defining the reset as a clock does no harm: no faults are added on clock pins in the transition fault model, and even if the reset is defined as a clock, you can constrain it to its off state during ATPG.
In TetraMAX, if you don't constrain your asynchronous signals, you get a warning (M487) at the start of ATPG.

DFT - Design for Testability


Can we apply patterns generated by ATPG for stuck-at faults to IDDQ faults or transition faults?
If yes, when can we do it? Is it beneficial? Are any extra conditions needed?
If not, why can't we apply them? What would we have to do to make them applicable?

Re: DFT - Design for Testability


Yes, you can apply stuck-at patterns for IDDQ, but you cannot apply stuck-at patterns to transition testing.
A transition pattern has to create a transition within the same pattern; it generally looks like scan in, launch (launching the transition), capture, and shift out, whereas a stuck-at pattern is shift in, capture, and shift out only.
Using stuck-at patterns for IDDQ is not a simple task either: in IDDQ we measure current at the VDD/VSS pins, while stuck-at patterns measure voltage at the scan-out pins, so a conversion is needed.
Second, a design generally has a high stuck-at pattern count, but not all of those patterns can be applied for IDDQ, so we have to select a few suitable patterns for IDDQ from the many stuck-at patterns.

Re: transition delay faults and stuck-at faults


Hi,
1. Stuck-at fault model: the node is modeled as stuck at some value, 0 or 1, depending on what we are targeting.
2. Transition fault model: this can be thought of as a stuck-at fault model within a time window. The value of the node changes, but not within the time in which it should change. To detect such faults we have two vectors per pattern: one to launch the transition and the other to capture it.
The answers to your queries are below:
1. For both models the fault sites are the same. The two models target different manufacturing defects (for more details on manufacturing defects and fault models, please refer to Mentor's FastScan/TestKompress ATPG user guide). To detect those faults, they use different capture clock frequencies.
2. Test coverage criteria: we need test coverage >99% to cover most of the fault sites, which is easier to achieve with stuck-at than with transition.
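The two-vector idea can be sketched in plain Python (a toy, not ATPG tool behavior): model a gross-delay defect as a node whose output lags its input by one cycle. A single static vector can agree with the good circuit, so the fault escapes; a launch/capture pair exposes the lag.

```python
# A slow node modeled as "responds one cycle late" (gross-delay toy).
class SlowNode:
    def __init__(self):
        self.prev = 0
    def evaluate(self, value):
        out, self.prev = self.prev, value   # output lags by one cycle
        return out

def good(value):                            # fault-free node: out = in
    return value

# Single static vector: both circuits agree, the fault escapes.
node = SlowNode()
node.evaluate(0)                            # let the node settle at 0
assert node.evaluate(0) == good(0)          # static 0 looks fine

# Two-vector pattern: launch 0 -> 1, capture at speed.
node = SlowNode()
node.evaluate(0)                            # vector 1: establish the 0
captured = node.evaluate(1)                 # vector 2: good circuit reads 1
assert captured == 0 and good(1) == 1       # mismatch -> fault detected
```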

Re: transition delay faults and stuck-at faults


In DFT we test for manufacturing faults. To detect these faults there are different fault models that, based on algorithms, enumerate the various faults that can arise when the chip is fabricated. The stuck-at fault model and the transition fault model are two such models.
The stuck-at fault model is an infinite-delay model, not based on the clock speed of the device. It says that a particular node is either stuck-at 0 (shorted to ground/VSS) or stuck-at 1 (shorted to power/VDD). Based on this, the ATPG tool performs fault analysis and determines whether each fault is detected or ATPG-untestable (tied, unused, blocked, redundant).
The transition fault model falls under the delay fault models; it says that a particular node can make a 0->1 or 1->0 transition, but not fast enough. While running ATPG for transition faults it is necessary to know the operating frequency of the device under test (DUT). A transition fault on a line makes the signal change on that line slow; the two possible faults are slow-to-rise and slow-to-fall.
Transition testing comes under at-speed testing, while stuck-at is static (slow-speed) structural testing, so the two are tested differently: for transition testing the LOC or LOS scheme comes into play, which employs a launch vector and a capture vector. Care is taken that all combinational logic reaches steady state within one functional clock period; if it does not, a fault is present.

Re: transition delay faults and stuck-at faults


Originally Posted by ranger01

Macein, you have given a good explanation of the difference between stuck-at and transition faults. But when you can confirm that a node can transition from high-to-low and low-to-high at the functional frequency (testing transition faults), doesn't that confirm the node is definitely not stuck at any value (which otherwise would have been tested through stuck-at patterns)?
Hi Ranger01,
Good question, you made me think...!!
It is important to run separate tests for stuck-at and transition (at-speed) testing. The two runs give different test-coverage values; transition coverage is typically 3-4% lower than stuck-at coverage, and the total list of faults tested under stuck-at is much larger than under transition.
In stuck-at testing we shift in the values, initialize the primary inputs of the combinational logic, observe the outputs (only ONE capture pulse is given), and shift out the values. The circuit is already in steady state.
In transition testing we shift in the values and then apply two clock pulses at the functional clock frequency (primary-input changes are also made with the first clock pulse). The first pulse launches the transition and the second captures the response of the combinational portion of the circuit. Thus there is a time window in which the circuit needs to respond correctly; otherwise there is a defect.
Also, a lot of nodes are left out during transition testing and can only be covered by stuck-at testing. Conversely, suppose a device with a 2.2 MHz functional frequency shows a slow-to-rise defect on some nodes; the defect goes away when we lower the clock frequency to 2.0 MHz, or when we run stuck-at testing. If no stuck-at testing had been performed, it would seem that the device is not working at all and the chip would fall into the fail category. But actually this is a transition defect, not a stuck-at defect, and the device can still work correctly.
Thus, in scenarios like this, both stuck-at and transition tests become necessary, and the device can still be used at the lowered frequency.
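The arithmetic behind that scenario, assuming a hypothetical 480 ns settling time for the defective node (the post gives only the two frequencies):

```python
# Back-of-the-envelope: a slow-to-rise defect passes or fails depending
# on whether the node settles within one clock period.
DEFECT_DELAY_NS = 480.0          # assumed settling time, not from the post

def passes_at(freq_mhz, delay_ns=DEFECT_DELAY_NS):
    period_ns = 1000.0 / freq_mhz        # 1 MHz -> 1000 ns period
    return delay_ns <= period_ns

assert not passes_at(2.2)    # 454.5 ns period: transition misses the capture
assert passes_at(2.0)        # 500 ns period: the same silicon now passes
```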

Re: Coverage Numbers


I assume that by "ATPG" coverage you mean "stuck-at" fault coverage, and if so the answer is fairly straightforward: a stuck-at pattern detects a more catastrophic fault, whereas a transition delay pattern must detect a slow-to-rise or slow-to-fall fault, both of which are harder to sensitize and propagate because the tool must set up a transition, not just a static faulty value.

shift capture mode dft scan


Here are a few more with my answers... ( DISCLAIMER NOTICE ... )
(*) What's the difference between structural and functional vectors?
http://www.eetasia.com/ARTICLES/2004..._ICT_ST_TA.pdf

(*) What is the major problem faced in DFT with tri-state buffers, and how is it resolved?
1. The major problem is at the tester end: not all testers are able to measure Z.
2. For IDDQ vectors there can be no Z in the design; there is quite a lot of current when a pin is in the Z state. A floating bus (a bus with Z on it) drains too much current and defeats the objective of the IDDQ vectors.
3. These tri-state buffers are generally used for sharing a bus, so there has to be DFT logic to ensure there is no contention on the bus during test.
(*) Give three conditions where bus contention can occur.

1. During shift (i.e. loading of the scan chains).
2. During forcing of the PIs.
3. After the capture clock: the flops may capture anything, which may lead to contention.
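A toy bus-resolution checker in plain Python showing the three possible bus states; the job of the DFT logic mentioned above is to make "CONTENTION" (and, for IDDQ, "Z") unreachable during shift and capture:

```python
# Resolve a shared bus from tristate drivers, given as (enable, value) pairs.
def resolve_bus(drivers):
    active = [v for en, v in drivers if en]
    if not active:
        return "Z"                   # floating bus (bad for IDDQ)
    if len(set(active)) > 1:
        return "CONTENTION"          # two enabled drivers fight
    return active[0]                 # exactly one logical driver value

assert resolve_bus([(0, 1), (0, 0)]) == "Z"
assert resolve_bus([(1, 1), (0, 0)]) == 1
assert resolve_bus([(1, 1), (1, 0)]) == "CONTENTION"
```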
* Which is advantageous, launch-off-shift or launch-off-capture?
http://www.stridge.com/Stridge_artic...an_testing.htm
http://scholar.lib.vt.edu/theses/ava...ricted/etd.pdf
A newer technique is to pipeline the scan enable, negating it in advance so that by the time the capture is to be done the scan enable is already low.
* P1500 funda!
It is similar to boundary scan, but at the core level instead of the board level. The major difference is that with board-level boundary scan we are sure the chips are OK and then do the board testing, whereas in the case of P1500 we are not sure of anything: each and every core has to be tested.

* How to achieve high fault coverage. How to increase it.


1. 100% scan design.
2. More test points.
3. No Xs in the design.
4. Use sequential patterns.
5. A completely defined netlist, i.e. no floating outputs or unconnected inputs.
6. Logic to ensure there is no contention on the buses.
7. Avoid floating buses by using bus keepers.
* Latch - how is it used in DFT to synchronize two clock domains?
Latches are used as lockup latches in the scan chains. They are inserted in a chain wherever there is a change of clock domain; by clock domain we mean two different clocks, or the same clock with a phase difference.
Take an example: a design with two clocks, CLK1 and CLK2, and a single scan chain, which means the chain contains flops clocked by either clock.
By default the tool orders the flops in the scan chain so that one clock domain's flops come first, followed by the other domain's flops. Say the CLK2 flops follow the CLK1 flops.
Now consider the flop at the boundary, where the output of the last CLK1 flop drives the scan input of the first CLK2 flop. The clock skew between these successive scan cells must be less than the propagation delay from the scan output of the first cell to the scan input of the next; otherwise data slippage may occur: the data that latches into the last CLK1 flop also latches into the first CLK2 flop in the same shift cycle. This is an error, because the CLK2 flop should latch CLK1's "old" data rather than its "new" data.
To overcome this we add a lockup latch wherever there is a clock-domain crossing. In our example we would add a lockup latch with an active-high enable controlled by the inverted CLK1. It becomes transparent only when CLK1 goes low, and effectively adds half a clock of hold time to the output of the last flip-flop of clock domain CLK1.
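The slippage condition can be sketched with a toy timing model in Python (arbitrary nanosecond numbers, not from any real library): the CLK2 flop samples `skew` after the CLK1 edge, and the data at its scan input changes either at the clock-to-out delay of the CLK1 flop (no latch) or at the half-period when the lockup latch reopens.

```python
# One shift cycle across a CLK1 -> CLK2 chain boundary (toy timing).
def ffb_captures(old, new, skew, prop_delay, lockup=False, half_period=5.0):
    """Value sampled by the first CLK2 flop, skew ns after the CLK1 edge."""
    if lockup:
        data_change_time = half_period   # latch holds old data until CLK1 low
    else:
        data_change_time = prop_delay    # data changes at clock-to-out delay
    return new if skew >= data_change_time else old

# Skew (2 ns) exceeds the 1 ns clock-to-out delay: data slips through.
assert ffb_captures(old=0, new=1, skew=2.0, prop_delay=1.0) == 1
# Same skew with a lockup latch: the CLK2 flop still sees the old value.
assert ffb_captures(old=0, new=1, skew=2.0, prop_delay=1.0, lockup=True) == 0
```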

* Fault types
The different fault types are:
1. Stuck-at fault model: the node is modeled as stuck at some value, 0 or 1, depending on what we are targeting.
2. IDDQ fault model: similar to the stuck-at fault model, but instead of measuring voltage we measure current. In a CMOS design in the quiescent state there is ideally supposed to be no current in the silicon; if there is current, some node has shorted either to ground or to power.
3. Transition fault model: this can be thought of as a stuck-at fault model within a time window. The value of the node changes, but not within the time in which it should change. To detect such faults we have two vectors per pattern, one to launch the fault and the other to capture it. The time between launch and capture is supposed to equal the period at which the chip would normally function; this is why it is called at-speed test.
4. Path delay fault model: in this model, instead of concentrating on a single gate of the netlist, we are concerned with a collection of gates that forms a valid path in the design, generally the critical paths. Here again we have two vectors per pattern. Do let me know if you know what a valid path is (don't feel offended, I am just writing this because you have been out of touch with all this technical jargon for long; otherwise I hope you must be knowing it).
Transition faults are also measured at path ends, but the major difference between transition and path delay is that for path delay we supply the path, whereas for transition the tool itself selects a path for the given fault.
The fault locations for IDDQ, stuck-at and transition are the same.
5. Bridging fault model: this is a newer model which is gaining importance. Here any two closely-lying nets may affect each other's value. There is generally a victim and an aggressor; the aggressor forces some value onto the victim. We first extract the coupling capacitance of each net pair, then, depending on the spec, select the nets whose coupling capacitance exceeds a specified value; these become the fault locations for the ATPG.
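The victim/aggressor behavior can be sketched in plain Python using the dominant bridging model (one of several bridging models; the tools support others too). Detection needs opposite values on the net pair, plus observation of the victim:

```python
# Dominant bridging fault toy: the aggressor forces its value onto the victim.
def victim_value(victim, aggressor, bridged):
    return aggressor if bridged else victim

# Same values on both nets: the bridge is invisible.
assert victim_value(1, 1, bridged=True) == victim_value(1, 1, bridged=False)
# Opposite values expose it: the good victim reads 0, the bridged one reads 1.
assert victim_value(0, 1, bridged=False) == 0
assert victim_value(0, 1, bridged=True) == 1
```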

* Do the DFT vectors test the functionality of the design also?
No, the DFT vectors do not test the functionality of the design. The reverse is possible: we can fault-grade the functional vectors and find the fault coverage they deliver. The DFT vectors are generated with the design in test mode, so they won't be beneficial for the functional mode. But note that there may always be some overlap between the two pattern sets.
* How do you break combinational loops? (*) How does introducing TIEX eliminate a combinational loop? [I told him that by forcing a known value we can break the loop.]
By adding a TIEX gate we can break the combinational loop. First, what is wrong with a combinational loop: the value never stabilizes, there are oscillations. If we place an X gate somewhere in the loop, we stop propagating the deterministic value that was causing the oscillations.
Adverse effect: any X in the design reduces coverage.
The second solution is to place a buffer with unit delay; in that case you need sequential patterns. Please note that we are not placing any TIEX or unit-delay buffer in the netlist; we are just telling the ATPG tool to model them for ATPG purposes, so you won't see any TIEX or unit-delay buffer gates in the netlist.
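A minimal three-valued (0/1/X) simulation in Python shows both behaviors on the simplest loop, a = NOT a: binary simulation oscillates forever, while starting the cut point from X (what the TIEX models) reaches a fixed point immediately, at the cost of an X.

```python
# Three-valued evaluation of the combinational loop a = NOT a.
def not3(v):
    return {"0": "1", "1": "0", "X": "X"}[v]

def settle(start, steps=8):
    seen, v = [], start
    for _ in range(steps):
        v = not3(v)
        if seen and v == seen[-1]:
            return v                 # two equal iterations: fixed point
        seen.append(v)
    return None                      # still oscillating after `steps`

assert settle("0") is None           # 0 -> 1 -> 0 -> ... never settles
assert settle("X") == "X"            # TIEX cut: X is a stable fixed point
```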
* What is scannability checking?
I think this relates to scan chain integrity. The first pattern (pattern 0 in most ATPG tools) is called the chain test pattern. It is used to check the integrity of the scan chains, to see whether they shift and load properly; if the scan chains themselves have a fault, there is no point testing the full chip through those chains.
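A toy chain test in Python: shift a repeating 0011 stimulus (a common chain-test pattern, since it exercises both transitions and both hold values) through a chain model with an optional stuck-at-0 cell; a corrupted unload flags the broken chain.

```python
# Shift a stimulus through a scan-chain model, optionally with a
# stuck-at-0 cell; returns the bit stream seen at scan-out.
def shift_chain(pattern, length, stuck_at0_cell=None):
    chain = [0] * length
    out = []
    for bit in pattern:
        out.append(chain[-1])            # bit leaving scan-out this cycle
        chain = [bit] + chain[:-1]       # shift one position
        if stuck_at0_cell is not None:
            chain[stuck_at0_cell] = 0    # defective cell kills the data
    return out

stimulus = [0, 0, 1, 1] * 3              # classic 0011 chain-test stimulus
good = shift_chain(stimulus, length=4)
bad = shift_chain(stimulus, length=4, stuck_at0_cell=2)
assert good != bad                       # integrity failure is visible
```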
* Give three clock DRC rules and how to fix them.
1. Clock not controllable from the top (use a mux to control it).
2. When all the clocks are in their off state, the latches should be transparent (add logic to make them transparent).
3. A clock must not capture data into a level-sensitive (LS) port (latch or RAM) if that data may be affected by newly captured data (for FastScan: turn clock_off simulation on; for TetraMAX: set atpg -resim_basic_scan_pattern).
* What do test procedure files contain?

The test procedure file contains all the scan information of your test-ready netlist:
1. The number of scan chains.
2. The number of scan cells in each scan chain.
3. The shift clocks.
4. The capture clocks.
5. Any test_setup procedure required before starting test pattern generation.
6. The timing of the different clocks.
7. The times at which the primary inputs, bidirectional inputs, scan inputs etc. are forced.
8. The times at which the primary outputs, scan outputs etc. are measured.
9. The pins that have to be held at some state in the different procedures such as load_unload, shift etc.
(*) What problems did you face while inserting test points?
I don't think there is any real problem, except:
1. Selecting the best candidate locations for the test points.
2. Area overhead.

(*) If the enable pin of a tri-state is 0, the output is Z. How does the tool treat this Z in DFT? How is Z handled?
It depends on the tester. We can customize the tool's pattern generation accordingly :D

Subject: RAM sequential pattern details

RAM sequential patterns are normally used to test very small memories on which you do not want to spend memory BIST logic. They can also be used to test the logic around memories, though normally a wrapper around the memory (such that the memory output is bypassed with the memory input signals) is used for that.
For RAM sequential patterns you need to make sure the memories can be enabled, the read enable can be set, and valid data can be put on the data and address bits using scan flops. Moreover, it must be ensured that data written into the memory does not get corrupted during the shift phase, i.e. Write Enable must not toggle during scan shifting.
Generating RAM sequential patterns can be tricky at times, as the user needs to define the read and write cycles properly...
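The shift-corruption hazard can be shown with a toy Python model: as garbage values sweep through the WE/address/data flops during shift, any ungated Write Enable pulse performs an unintended write.

```python
# Toy RAM-sequential hazard: values sweeping the scan flops during the
# shift phase trigger unintended writes unless WE is gated off in shift.
def shift_phase(ram, shift_bits, we_gated):
    """shift_bits: (we, addr, data) values passing through the flops."""
    for we, addr, data in shift_bits:
        if we and not we_gated:
            ram[addr] = data             # unintended write during shift
    return ram

golden = {0: 0xA, 1: 0xB}
corrupted = shift_phase(dict(golden), [(1, 0, 0x0), (0, 1, 0x0)], we_gated=False)
safe = shift_phase(dict(golden), [(1, 0, 0x0), (0, 1, 0x0)], we_gated=True)
assert corrupted != golden               # memory content destroyed
assert safe == golden                    # WE held off during shift
```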

As an alternative to RAM sequential patterns, you can try generating clock-sequential patterns to test the shadow logic around the memories.
You can do the following:
1. Increase the sequential depth for the design.
2. Also (for FastScan) define the read and write clocks for the memories.
Commands:
add read control <clock_name>
add write control <clock_name>
The clock_name here is the clock used to clock the memories during scan mode; for the exact syntax refer to the tool documentation.
Another important thing to take care of before generating clock-sequential patterns: make sure you have the ATPG (functional) models for the memories.

Internal Scan Modes : Trinity and X9

BIST wrapper cells (* some) and clock-gating cells (* latches) are not part of the scan chain

HITEST2:

Compression + Wrapper chains

HITEST1:

Only compression. No wrapper chains, so no Extest

FLAT :

Only BYPASS.

TransByp: along with the extest bypass chains, some feed-through chains will also be traced.
Chain ExtestByp_mode_i_1_feed_through_chain successfully traced with 4 scan_cells.

SNPS_head + head + tail + SNPS tail flop

TransCmp: Chain ExtestByp_mode_i_1_feed_through_chain successfully traced with 2 scan_cells.

head + tail flop

Don't generate TDF patterns for the LSI_PIPELINE_SCAN_CLOCK clock, as it only controls the compressor logic.

TDF

In TDF we target one clock at a time, so add only the faults associated with that clock.

Faults from:
launch clk1 to capture clk1
launch PI to capture clk1
launch clk1 to capture PO

Full flop names will show in the non-unified flow (stildpv).

==================================

https://solvnet.synopsys.com/retrieve/018868.html
