
International Journal of Emerging Technology and Advanced Engineering

Website: www.ijetae.com (ISSN 2250-2459, ISO 9001:2008 Certified Journal, Volume 4, Issue 1, January 2014)

TEST DATA COMPRESSION FOR LOW POWER TESTING OF VLSI CIRCUITS
Robert Theivadas J 1, Ranganathan Vijayaraghavan 2
1 Anand Institute of Higher Technology, Chennai 603103
2 Dr. Mahalingam College of Engineering and Technology, Pollachi 642003

Abstract - The two major areas of concern in the testing of VLSI circuits are test data volume and excessive test power. Among the many compression coding schemes proposed so far, the CCSDS (Consultative Committee for Space Data Systems) lossless data compression scheme is one of the best. This paper discusses a test data compression scheme based on the lossless Rice Algorithm, as recommended by the CCSDS, for reducing the amount of test data that must be stored on the tester and transferred during manufacturing test to each core in a system-on-a-chip (SOC). In the proposed scheme, the test vectors for the SOC are compressed using the Rice Algorithm together with various binary encoding techniques. Experimental results show that the test data compression ratio for the larger ISCAS 89 benchmark circuits is significantly improved in comparison with existing methods.

Keywords - CCSDS, Rice Algorithm, SOC, test data compression.

I. INTRODUCTION

Digital circuits are difficult to test because a bulky amount of test data has to be delivered to the circuit under test (CUT). VLSI chips are even harder to test because of the complicated functionality and size that result from their increased integration levels. Furthermore, testing of VLSI designs is quite expensive, so VLSI producers aim at reducing the test cost of these circuits. The two important factors contributing to test cost are test data volume and test power.

While testing SOCs, the hardest task is handling the huge amount of test data that has to be transferred between the tester and the chip. Each core of a SOC has a set of test vectors that must be applied to it. During modular testing, these test vectors must be stored on a tester and then transferred to the inputs of the core. Because an increasing number of cores is placed on a single chip, the total amount of test data for the chip grows sharply. This causes a serious problem because of the cost limitations of automated test equipment (ATE): testers have limited speed, memory, and channel capacity.

Usually, the time taken to test a chip depends on the amount of test data that has to be transferred to the chip, and on the speed at which the data is transferred. This in turn depends on the speed and channel capacity of the tester, and on the organization and characteristics of the scan chains on the chip. Therefore, from a test economics point of view, test time and test storage are the two main areas of concern for SOCs [2].

In this paper, a lossless data compression scheme is presented which reduces the amount of test data to be stored on the tester and then transferred to the chip. For each test vector, a smaller amount of compressed data is transferred from the tester to the core instead of the entire data, which also takes less time than transferring the entire data. The approach discussed here significantly reduces both the test storage needs and the overall test time.

Many different techniques for test vector compression have been published. We use a technique based on the lossless Rice Algorithm as specified in the CCSDS Recommendation for lossless data compression [1].
III. RICE ALGORITHM

Robert F. Rice of NASA developed the Rice coding upon which the CCSDS [1] standard is based. A lossless source coding technique preserves source data accuracy while eliminating redundancy in the data source. The lossless Rice coder has two functional parts, viz., the preprocessor and the adaptive entropy coder, as shown in Figure 1. The performance of a lossless data compression technique, measured as the coding bit rate (bits per sample), depends on two factors: the amount of correlation among data samples eliminated in the preprocessing stage, and the coding efficiency of the entropy coder. The preprocessor de-correlates the data and reformats it into non-negative integers with the preferred probability distribution. The Adaptive Entropy Coder (AEC) selects the coding option that performs best on the current block of samples; the selection is based on the number of bits that the selected option will use to code the block. An ID bit sequence specifies the option used to encode the accompanying set of code words.
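The AEC selection rule described above can be sketched in Python. This is an illustrative fragment, not the paper's implementation: the option table and the two-bit ID values here are placeholders, and only two of the CCSDS coding options are shown.

```python
# Illustrative sketch of the AEC selection rule: try each coding
# option on the block and transmit the shortest result, prefixed
# by an ID that tells the decoder which option was used.

def fs_encode_block(block):
    # Fundamental Sequence: each value n becomes n zeros and a '1'.
    return "".join("0" * d + "1" for d in block)

def no_compression(block, nbits=8):
    # Fallback: emit every sample verbatim as an nbits-wide field.
    return "".join(format(d, f"0{nbits}b") for d in block)

# Placeholder ID bits -> coding option (not the standard's IDs).
OPTIONS = {
    "00": fs_encode_block,
    "11": no_compression,
}

def encode_block(block):
    """Return ID bits + the shortest available encoding of the block."""
    best_id, best_code = min(
        ((oid, enc(block)) for oid, enc in OPTIONS.items()),
        key=lambda pair: len(pair[1]),
    )
    return best_id + best_code
```

For a block of small mapped values the FS option wins; for a block containing a large value the no-compression option is chosen instead, which mirrors the per-block adaptivity described above.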

IV. COMPRESSION METHOD

[Figure 1: Block diagram of Rice algorithm architecture.]

Test vectors are compressed using Rice entropy coding [1]. The coder first converts the input vectors xi into preprocessed samples di using a predictor. The preprocessing is done by a predictor followed by a prediction error mapper. Based on the predicted value yi, the prediction error mapper converts each prediction error value Di into an n-bit non-negative integer di, which is suitable for processing by the entropy coder. For example, the test vectors of benchmark circuit s298 (MINTEST [6]) are taken, and the predictor is applied to 8-bit data values from 0 to 255, as shown in Table 1.

Test Vectors | xi  | yi  | Di = xi - yi | ti  | di
11101000     | 232 | -   | -    | -   | -
01110100     | 116 | 232 | -116 | 23  | 139
11001000     | 200 | 116 | 84   | 116 | 168
11000000     | 192 | 200 | -8   | 55  | 15
11101010     | 234 | 192 | 42   | 63  | 84
11100101     | 229 | 234 | -5   | 21  | 9
00001010     | 10  | 229 | -219 | 26  | 245
10111000     | 184 | 10  | 174  | 10  | 184
10100111     | 167 | 184 | -17  | 71  | 33
01110101     | 117 | 167 | -50  | 88  | 99
00000010     | 2   | 117 | -115 | 117 | 229
10111111     | 191 | 2   | 189  | 2   | 191
11101100     | 236 | 191 | 45   | 64  | 90
10000001     | 129 | 236 | -107 | 19  | 126
11011010     | 218 | 129 | 89   | 126 | 178
11000110     | 198 | 218 | -20  | 37  | 39
10000011     | 131 | 198 | -67  | 57  | 124

Table 1: Preprocessor (yi is the previous sample, xi-1)
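The preprocessor that generates Table 1 can be sketched in Python. This is an illustrative reconstruction, not the authors' code; it assumes a unit-delay predictor (yi = xi-1) over 8-bit samples with xmin = 0 and xmax = 255, which reproduces the Di, ti, and di columns of Table 1.

```python
# Sketch of the Rice preprocessor: a unit-delay predictor followed
# by the CCSDS prediction-error mapper, for 8-bit samples.

def preprocess(samples, xmin=0, xmax=255):
    """Return (xi, yi, Di, ti, di) rows for samples[1:]."""
    rows = []
    for i in range(1, len(samples)):
        y = samples[i - 1]              # predicted value: previous sample
        d = samples[i] - y              # prediction error Di
        theta = min(y - xmin, xmax - y) # ti: headroom around prediction
        if 0 <= d <= theta:
            delta = 2 * d               # small positive errors -> even codes
        elif -theta <= d < 0:
            delta = 2 * abs(d) - 1      # small negative errors -> odd codes
        else:
            delta = theta + abs(d)      # large errors -> offset by theta
        rows.append((samples[i], y, d, theta, delta))
    return rows

vectors = [0b11101000, 0b01110100, 0b11001000, 0b11000000]
rows = preprocess(vectors)
```

Running this on the first four vectors of Table 1 yields the rows (116, 232, -116, 23, 139), (200, 116, 84, 116, 168), and (192, 200, -8, 55, 15), matching the table.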

If xmin and xmax respectively represent the minimum and maximum values of any input sample xi, then the predicted value yi obviously lies within this range [that is, between xmin and xmax]. Consequently, the prediction error value Di is one of the 2^n values in the range [xmin - yi, xmax - yi]. For a well-chosen predictor, small values of |Di| are more likely than large values. The prediction error mapping function is therefore:

    di = 2Di          if 0 <= Di <= ti
    di = 2|Di| - 1    if -ti <= Di < 0
    di = ti + |Di|    otherwise

where ti = min(yi - xmin, xmax - yi).

The entropy coding module is a collection of variable-length codes operating in parallel on blocks of J preprocessed samples. The coding option giving the highest compression is chosen for transmission, along with an ID code that identifies the option to the decoder. A new compression option can be chosen for each block.

The zero block option is the first option. It is chosen when one or more preprocessed sample blocks are all zeros. Here, the number of adjacent all-zero preprocessed blocks is encoded by a Fundamental Sequence (FS).

The second option is called the second extension option. It is designed to produce compressed data in the range of 0.5 to 1.5 bits per sample. In this option, the encoding scheme first pairs the consecutive preprocessed samples of the J-sample block, and then transforms each sample pair into a new value gi that is coded with an FS codeword, where:

    gi = (di + di+1)(di + di+1 + 1)/2 + di+1

The third option is the Fundamental Sequence code, also called the comma code. Here, the codeword consists of a string of '0' digits whose length equals the decimal value of the symbol to be coded; the digit '1' is appended to signal the end of each codeword. This simple protocol permits the FS codewords to be decoded without lookup tables. Example codewords are given in Table 2.

Preprocessed Sample Values, di | FS Code word
0   | 1
1   | 01
2   | 001
3   | 0001
4   | 00001
... | ...
2^n | 0000...0001 (2^n zeros)

Table 2: FS code word example

The fourth option is the split-sample option; in the entropy coder, most of the options are split-sample options. The k-th split-sample option takes a block of J preprocessed data samples, splits off the k least significant bits (LSBs) from each sample, and encodes the remaining higher-order bits with a simple FS code before prefixing the split bits to the encoded FS data stream. An example is given in Table 3.

di  | Binary   | K=5: 5 LSB + FS Code | K=6: 6 LSB + FS Code
139 | 10001011 | 01011 00001          | 001011 001
168 | 10101000 | 01000 000001         | 101000 001
15  | 00001111 | 01111 1              | 001111 1
84  | 01010100 | 10100 01             | 010100 01
9   | 00001001 | 01001 1              | 001001 1

Table 3: Split Sample option

The final option is the no compression option. When none of the above options provides any data compression on a block, this option is selected, and the preprocessed block of data is transmitted without any modification other than a prefixed identifier.

The entropy coder chooses the option that needs the fewest bits to encode the current block of symbols. An identifier bit sequence indicates the selected option: when the quantization is 8 bits or less, a 3-bit ID is output, while larger quantizations use a 4-bit ID.

The test data is split into fixed-length blocks, and variable-length adaptive coding with the following specifications is applied: the test vectors are divided into blocks containing J samples (8, 16, 32, or 64 samples per block), with a maximum of 32 bits per sample.
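The FS (comma) code, the split-sample option of Table 3, and the second-extension pairing can be sketched in Python. This is an illustrative fragment, not the paper's implementation; the helper names are our own, and 8-bit samples are assumed.

```python
# Sketch of three CCSDS coding building blocks described above.

def fs_code(n):
    """Fundamental Sequence codeword: n zeros, then a terminating '1'."""
    return "0" * n + "1"

def split_sample(delta, k, nbits=8):
    """k-th split-sample option for one mapped sample:
    return (k LSBs, FS code of the remaining high-order bits)."""
    bits = format(delta, f"0{nbits}b")
    lsb = bits[nbits - k:]          # the k least significant bits
    high = int(bits[:nbits - k], 2) # remaining high-order bits as integer
    return lsb, fs_code(high)

def second_extension(d1, d2):
    """Pair two mapped samples into one value gi for FS coding."""
    s = d1 + d2
    return s * (s + 1) // 2 + d2
```

For example, di = 139 (binary 10001011) with k = 5 splits into the LSBs '01011' and high-order bits 100 (decimal 4), whose FS code is '00001', matching the Table 3 row.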
The output format for the coded data of the first block is: ID, reference value, J-1 FS data samples (or default value), and the K split data bits for the J-1 samples. The remaining blocks are coded in the format: ID, and the K split-sample option for J samples (or default value). For example, the S298 MINTEST vectors are divided into 8-sample blocks, with each sample containing 8 bits, as shown in Table 4.

Input Data | di  | di (binary) | Output
11101000   | -   | 11101000 | {(111), 11101000, 10001011, 10101000, 00001111, 01010100, 00001001, 11110101, 10111000}
01110100   | 139 | 10001011 |
11001000   | 168 | 10101000 |
11000000   | 15  | 00001111 |
11101010   | 84  | 01010100 |
11100101   | 9   | 00001001 |
00001010   | 245 | 11110101 |
10111000   | 184 | 10111000 |
10100111   | 33  | 00100001 | {(110), 100001, 100011, 100101, 111111, 011010, 111110, 110010, 100111, 1, 01, 0001, 001, 01, 01, 001, 1}  K=6
01110101   | 99  | 01100011 |
00000010   | 229 | 11100101 |
10111111   | 191 | 10111111 |
11101100   | 90  | 01011010 |
10000001   | 126 | 01111110 |
11011010   | 178 | 10110010 |
11000110   | 39  | 00100111 |

Table 4: Output format of Rice Algorithm

V. EXPERIMENTAL RESULTS

The test vectors of the various ISCAS 89 benchmark circuits were compressed, and the experimental results are presented in Table 5. We used the MINTEST [6] test data and achieved the highest compression percentage for the different benchmark circuits; the comparison with existing schemes is also given in the table.

Compression Efficiency (%)
Circuits | Golomb [11] | Selective Huffman [12] | FDR [15] | Rice Algorithm
s9234    | 45          | 54                     | 61       | 74.1622
s13207   | 80          | 30                     | 88       | 92.01
s38417   | 28          | 45                     | 65       | 91.95

Table 5: Comparison of different compression schemes using MINTEST test data

VI. CONCLUSION

Rice Algorithm coding is an effective way to compress test data. It offers a dual benefit: it reduces both the amount of test data that must be stored on the tester and the time taken to transfer the test data from the tester to the CUT. In this paper, we have discussed how we applied the algorithm to different benchmark circuits and compared our results with existing test compression techniques. By applying our technique, we achieved a significantly higher compression ratio.

REFERENCES

[1] Lossless Data Compression. Report Concerning Space Data System Standards, CCSDS 121.0-B-2, Blue Book, Issue 2. Washington, D.C.: CCSDS, May 2012.

[2] A. Jas, J. Ghosh-Dastidar, M.-E. Ng, and N. A. Touba, "An efficient test vector compression scheme using selective Huffman coding," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 22, no. 6, pp. 797-806, Jun. 2003.

[3] Y. Zorian, E. J. Marinissen, and S. Dey, "Testing embedded core based system chips," in Proc. IEEE Int. Test Conf., 1998, pp. 130-143.

[4] A. Chandra and K. Chakrabarty, "Test data compression and decompression for system-on-a-chip using Golomb codes," in Proc. VLSI Test Symp., 2000, pp. 113-120.

[5] C. V. Krishna and N. A. Touba, "Reducing test data volume using LFSR reseeding with seed compression," in Proc. IEEE Int. Test Conf. (ITC), 2002.

[6] F. F. Hsu, K. M. Butler, and J. H. Patel, "A case study on the implementation of Illinois scan architecture," in Proc. IEEE Int. Test Conf., 2001, pp. 538-547.

[7] I. Hamzaoglu and J. H. Patel, "Reducing test application time for full scan embedded cores," in Proc. Int. Symp. Fault-Tolerant Computing, 1999, pp. 260-267.

[8] B. Koenemann et al., "A SmartBIST variant with guaranteed encoding," in Proc. 10th Asian Test Symp. (ATS'01), IEEE CS Press, 2001, pp. 325-330.

[9] M. Ishida, D. S. Ha, and T. Yamaguchi, "COMPACT: A hybrid method for compressing test data," in Proc. VLSI Test Symp., 1998, pp. 62-69.

[10] M. Tehranipoor, M. Nourani, and K. Chakrabarty, "Nine-coded compression technique for testing embedded cores in SoCs," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 13, no. 6, pp. 719-731, Jun. 2005.

[11] A. Chandra and K. Chakrabarty, "System-on-a-chip test data compression and decompression architectures based on Golomb codes," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 20, no. 3, pp. 355-368, Mar. 2001.

[12] X. Kavousianos, E. Kalligeros, and D. Nikolos, "Optimal selective Huffman coding for test-data compression," IEEE Trans. Computers, vol. 56, no. 8, pp. 1146-1152, Aug. 2007.

[13] M. Nourani and M. Tehranipour, "RL-Huffman encoding for test compression and power reduction in scan applications," ACM Trans. Des. Autom. Electron. Syst., vol. 10, no. 1, pp. 91-115, 2005.

[14] X. Kavousianos, E. Kalligeros, and D. Nikolos, "Multilevel-Huffman test-data compression for IP cores with multiple scan chains," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 16, no. 7, pp. 926-931, Jul. 2008.

[15] A. Chandra and K. Chakrabarty, "Test data compression and test resource partitioning for system-on-a-chip using frequency-directed run-length (FDR) codes," IEEE Trans. Computers, vol. 52, no. 8, pp. 1076-1088, Aug. 2003.

[16] X. Kavousianos, E. Kalligeros, and D. Nikolos, "Test data compression based on variable-to-variable Huffman encoding with codeword reusability," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 27, no. 7, pp. 1333-1338, Jul. 2008.

[17] K. Basu and P. Mishra, "Test data compression using efficient bitmask and dictionary selection methods," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 18, no. 9, Sep. 2010.

[18] H. Hashempour, L. Schiano, and F. Lombardi, "Error-resilient test data compression using Tunstall codes," in Proc. IEEE Int. Symp. Defect Fault Tolerance VLSI Syst., 2004, pp. 316-323.

[19] L. Lingappan, S. Ravi, A. Raghunathan, N. K. Jha, and S. T. Chakradhar, "Test-volume reduction in systems-on-a-chip using heterogeneous and multilevel compression techniques," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 25, no. 10, pp. 2193-2206, Oct. 2006.

[20] M. Knieser, F. Wolff, C. Papachristou, D. Weyer, and D. McIntyre, "A technique for high ratio LZW compression," in Proc. Des., Autom., Test Eur. (DATE), 2003, p. 10116.

