Buletinul Științific
al
Universității "Politehnica" din Timișoara
Seria ELECTRONICĂ ȘI TELECOMUNICAȚII
TRANSACTIONS ON ELECTRONICS AND COMMUNICATIONS
Fig. 1. Oscillator circuit: amplifiers A1 and A2, resistors R1-R4, capacitors C1 and C2, diodes D1 and D2, simulated inductance Leq; input vi, outputs vo1 and vo2.
INTRODUCTION
OSCILLATOR PRINCIPLE
vo1 = vi·(1 + Z2/R3),   vi = vo2·R4/(R2 + R4)    (1)
Fig. 2. Oscillator open loop (vi, R1, A1, vo1, R3, A2, vo2, R4, C2 = Z2).
Breaking the loop as in Fig. 2 and expressing vo2 as a function of vi through Z1, Z2, R1, R2, R3 and R4 gives the open-loop gain vo2/vi.    (3)
Here, to fulfil the oscillation condition, the equality vi = vo2 must be imposed. After some simple calculus one finds the characteristic equation:
Z1·Z2 + R1·R3·R4/R2 = 0    (4)

Replacing here Z1 = 1/(jωC1) and Z2 = 1/(jωC2), the characteristic equation becomes:

(jω)²·C1·C2 + R2/(R1·R3·R4) = 0

or

ω² − R2/(C1·C2·R1·R3·R4) = 0    (5)

The left part of this equation being always a real expression, the amplitude oscillation condition is invariably fulfilled. Thus, the circuit in Fig. 1 is an oscillator. From equation (5) the oscillation frequency may be written:

ω0 = sqrt[R2/(C1·C2·R1·R3·R4)]    (6)

or

f0 = (1/2π)·sqrt[R2/(C1·C2·R1·R3·R4)]    (7)

The output amplitudes are:

vo1m = vim·sqrt(1 + R1·R2/(R3·R4)),   vo2m = vim·(1 + R2/R4)    (9)

and the phase angle φ of vo1 with respect to vi satisfies:

tg φ = sqrt[C1·R1·R2/(C2·R3·R4)]    (10)

Fig. 3. Voltage phase diagrams with C1 = C2 and R2 = R4 (phasors vi = vR4 = vC1, vC2 = vR1, vR2 = vR3, vo1, vo2): a) R1 = R3, φ = 45°; b) R1 = 3R3, φ = 60°.

Fig. 4. Voltage phase diagrams with C1 = C2 and R2 ≠ R4: a) R2 > R4, φ < 45°; b) R2 < R4.
(Figure: equivalent circuit of the simulated inductance Leq, with amplifiers A2 and A3 and resistors R1, R; the input loss resistance is simplified as Ri ≈ 0.5·Rp·C (12), giving the quality factor Q.)
Using low-value resistors R, low-loss capacitors and high-input-resistance OAs, a quality factor of 1000-2000 may be achieved at low frequency.
In fact, the simplest amplitude limitation may be obtained by means of one or two diodes (in counter-parallel connection), as shown in Fig. 1. The symmetrical amplitude limitation using two devices assures better spectral purity. In accordance with [1], the diode (or diodes) gives, in parallel with the resonant circuit Leq-C1, a dynamic equivalent resistance:

rdeq ≈ kQ·rdp    (11)
5. EXPERIMENTAL RESULTS
(Figure: experimental oscillator with TL082 operational amplifiers A1 and A2, BA243 limiting diodes D1 and D2, C1 = C2 = 68 nF, VR1, R3 and R4 of 1 kΩ, 1%; measured vim = 0.5 V, vo1m ≈ 0.7 V, vo2m ≈ 1 V.)
collector junctions from the A726 IC, in thermo-stabilized regime, as limiting diodes [1]. Thus, a relative frequency instability of 2·10⁻⁶/°C and a relative voltage-amplitude instability of 2·10⁻⁴/°C may be obtained.
6. SIMULATION RESULTS
7. CONCLUSIONS
The theory of a new sine LC oscillator with simulated inductance, using two operational amplifiers, is developed. Using R-C components for the inductance simulation, the oscillator becomes an RC one. The resulting parallel resonant Leq-C circuit may provide a
I. INTRODUCTION
Interval estimation of a binomial proportion p is one of
the most basic and methodologically important problems in
practical statistics. A considerable literature exists on the topic
and manifold methods have been suggested for bracketing a
binomial proportion within a confidence interval.
Among those methods, the Clopper-Pearson Confidence Interval (CPCI), originally introduced in 1934 by Clopper and Pearson [4], is often referred to as the exact procedure by some authors, for it derives from the binomial distribution without resorting to any approximation to the binomial. However, despite this exactness, the CPCI is basically conservative because of the discreteness of the binomial distribution: given α ∈ (0, 1), the coverage probability of the 100(1 − α)% CPCI is above or equal to the nominal confidence level (1 − α).
In this paper, it is proved that this coverage probability is actually larger than or equal to 1 − α/2 if the sample size is less than ln(α/2)/ln(max(p, 1 − p)). This bound basically depends on the proportion itself. Thence, as suggested by numerical results presented in [2] and [3], the CPCI is not suitable in practical situations when getting a coverage probability close to the specified confidence level is more desirable than guaranteeing a coverage probability above or equal to the said confidence level.
(ii) the coverage probability equals:

    0          for p = 0,
    1 − α/2    for 0 < p ≤ (α/2)^(1/n),
    1          for (α/2)^(1/n) < p < 1 − (α/2)^(1/n),
    1 − α/2    for 1 − (α/2)^(1/n) ≤ p < 1,
    0          for p = 1,

and:

    0          for p = 0,
    1 − α/2    for 0 < p < 1 − (α/2)^(1/n),
    1          for 1 − (α/2)^(1/n) ≤ p ≤ (α/2)^(1/n),
    1 − α/2    for (α/2)^(1/n) < p < 1,
    0          for p = 1.
The theoretical results above are thus mathematical evidence that, as suggested in [2] and [3], the CPCI is inaccurate in the sense that "[...] its actual coverage probability can be much larger than 1 − α unless the sample size n is quite large" ([3, Sec. 4.2.1]).
Figures 1, 2 and 3 illustrate the foregoing by displaying the coverage probability of the 95% CPCI for p = 0.1, p = 0.05 and p = 0.01, respectively, and sample sizes ranging from 1 to 500. In each figure, the value of ln(α/2)/ln(max(p, 1 − p)) is represented by a vertical red line. On the left-hand side of this line, coverage probabilities are all larger than or equal to 97.5%; on the right-hand side of this same line, coverage probabilities can be less than 97.5% and even close to 95%.
Fig. 1. Coverage probability of the 95% CPCI for p = 0.1 and sample sizes from 1 to 500; ln(α/2)/ln(max(p, 1 − p)) = 35.0120.
Fig. 2. Coverage probability of the 95% CPCI for p = 0.05 and sample sizes from 1 to 500; ln(α/2)/ln(max(p, 1 − p)) = 71.9174.
and

F_{u_k}(k) = α/2 for every k ∈ {0, . . . , n − 1}.    (6)
Fig. 3. Coverage probability of the 95% CPCI for p = 0.01 and sample sizes from 1 to 500; ln(α/2)/ln(max(p, 1 − p)) = 367.0404.

Proof:
F_p(L) ≤ F_{u_L}(L). According to (6), the right-hand side in this inequality equals α/2, and statement (ii) follows.
Statement (iii) holds true since u_X ≤ 1.
We now complete the proof of Proposition 2.1. According to Lemma 3.2, statement (iii), ℓ_X < u_X. Thereby, {p ≤ ℓ_X} ⊂ {p < u_X}, so that
P({ℓ_X < p < u_X})
I. INTRODUCTION
Wavelets constitute a powerful tool for mathematical analysis which revolutionized the world of signal processing. Wavelets are functions with null average and finite energy. Their field of application is very large: compression, segmentation, denoising, etc. The term denoising was introduced by David Donoho [2], who treated the case of signals perturbed by additive white Gaussian noise. The aim of this paper is the denoising of additively perturbed speech. Several speech denoising methods already exist. Some of them are based on the use of the wavelet transform [1]. They differ by the type of filter used in the wavelet domain, but all follow Donoho's algorithm, which is based on the following three steps:
1. the computation of the wavelet transform (WT),
2. the filtering of the obtained result,
3. the computation of the inverse wavelet transform (IWT).
Donoho used the DWT. Another WT, namely the diversity-enhanced discrete wavelet transform (DEDWT), was preferred in [1].
For filtering in the DEDWT domain we will use, in the following, MAP filters. Some of them differ from the bishrink filter used in [1]. The knowledge of the type of noise to be eliminated and of the probability density function (pdf) of the useful signal is essential for the second step of the denoising method. The success of the denoising procedure depends on the selection of the noise and useful-signal pdfs. In the following, we present a speech denoising method which uses two types of MAP filter associations:
f_X(x) = [1/(2b)]·exp(−|x|/b)    (1)

with the scale parameter estimated from the data as b̂ = (1/N)·Σ_{i=1}^{N} |x_i|.
1. ENSIETA-Brest, France
(Figure: pdf of the speech signal compared with the Laplace pdf.)

with α_k ≥ 0, k = 1, ..., K, and Z = Σ_{k=1}^{K} Y_k, Y_k = α_k·X_k. If the variables Y_k, k = 1, ..., K, are independent, then:

f_Z(z) = (f_{Y_1} * f_{Y_2} * ... * f_{Y_K})(z)    (5)

       = [1/(α_1·α_2···α_K)]·(f_{X_1}(·/α_1) * f_{X_2}(·/α_2) * ... * f_{X_K}(·/α_K))(z)    (6)
f_X(x) = [1/(2b)]·exp(−|x|/b)    (3)
The pdf of the filtered coefficients is a mixture of rescaled versions of f_X:

f_Y(y) = (1/K)·Σ_{k=1}^{K} (1/h_k)·f_X(y/h_k)    (4)

f_U(u) = P_h·Σ_{k1=1}^{K} (1/h_{k1})·f_X(u/h_{k1}),   with P_h = 1/K    (7)

where, for a change of variable, the pdf transforms as:

f_Y(y) = f_X(x)/|dy/dx|    (8)

f_Y(y) = P_g·Σ_{k2=1}^{K} (1/g_{k2})·f_U(y/g_{k2}),   with P_g = 1/K    (9)

Combining (8) and (9):

f_Y(y) = P_g·P_h·Σ_{k2=1}^{K} Σ_{k1=1}^{K} [1/(h_{k1}·g_{k2})]·f_X(y/(h_{k1}·g_{k2}))    (10)

f_Y(y) = P_g·(P_h)²·Σ_{k1=1}^{K} Σ_{k2=1}^{K} Σ_{k3=1}^{K} [1/(h_{k1}·h_{k2}·g_{k3})]·f_X(y/(h_{k1}·h_{k2}·g_{k3}))    (11)

and, after N decomposition levels:

f_Y(y) = P·Σ_{k1=1}^{K} ··· Σ_{kN+1=1}^{K} [1/(h_{k1}···h_{kN}·g_{kN+1})]·f_X(y/(h_{k1}···h_{kN}·g_{kN+1}))    (12)

with P = P_g·(P_h)^N, the sums containing K + K² + ··· + K^N terms.
If f_X is the Laplace pdf in (3), then:

f_Y(y) = P·Σ_{k1=1}^{K} ··· Σ_{kN+1=1}^{K} F_1(y)    (13)

with:

F_1(y) = [1/(2b·h_{k1}···h_{kN}·g_{kN+1})]·exp(−|y|/(b·h_{k1}···h_{kN}·g_{kN+1}))

and, for a Gaussian f_X:

f_Y(y) = P·Σ_{k1=1}^{K} ··· Σ_{kN+1=1}^{K} F_2(y)    (14)

and:

F_2(y) = [1/(σ·√(2π)·h_{k1}···h_{kN}·g_{kN+1})]·exp(−y²/(2σ²·(h_{k1}···h_{kN}·g_{kN+1})²))
C. Filter associations
We have just shown that the starting assumptions concerning the distribution of the wavelet coefficients cannot be kept unchanged throughout the entire denoising process in the wavelet domain. This is why we propose an association of filters: a first one when the pdf of the wavelet coefficients is supposed to be Laplacian, and a second one when this law becomes Gaussian. In this section we are particularly interested in the choice of the second filter. If we denote by y a noisy wavelet coefficient, by w the true coefficient and by n the noise [5], we can write:

y = w + n    (15)
(Figures: output pdf at successive decomposition levels, compared with the Laplace and Gauss pdfs.)

The MAP estimate of w is:

ŵ(y) = arg max_w P_{w|y}(w|y)    (16)

Using Bayes' rule, one gets:

ŵ(y) = arg max_w [P_n(y − w)·P_w(w)]    (17)

Equation (17) is equivalent to:

ŵ(y) = arg max_w [ln P_n(y − w) + ln P_w(w)]    (18)

The noise is supposed to be Gaussian:

P_n(n) = [1/(σn·√(2π))]·exp(−n²/(2σn²))    (19)

and the useful coefficient Laplacian:

P_w(w) = [1/(√2·σ)]·exp(−√2·|w|/σ)    (20)

Then:

ln P_n(y − w) + ln P_w(w) = −(y − w)²/(2σn²) + ln[1/(σn·√(2π))] − √2·|w|/σ + ln[1/(√2·σ)]    (21)

and the maximization condition

d[ln P_n(y − w) + ln P_w(w)]/dw = 0    (22)

leads to:

(y − w)/σn² − √2·sgn(w)/σ = 0    (23)

so that:

ŵ = y − √2·σn²/σ,  if y > √2·σn²/σ    (24)

ŵ = y + √2·σn²/σ,  if y < −√2·σn²/σ    (25)

If we define:

(g)+ = g, if g > 0; 0, otherwise    (26)

the solution of the MAP equation becomes the soft-thresholding rule:

ŵ = sign(y)·(|y| − √2·σn²/σ)+    (27)

When the law of the useful coefficient becomes Gaussian,

P_w(w) = [1/(σ·√(2π))]·exp(−w²/(2σ²))

the same maximization gives ŵ·(σn² + σ²) − σ²·y = 0, that is, the Wiener-type solution:

ŵ = [σ²/(σ² + σn²)]·y
To take into account the inter-scale dependency, the bivariate pdf proposed in [5] is used:

P_w(w) = [3/(2π·σ²)]·exp[−(√3/σ)·√(w1² + w2²)]    (28)

The MAP estimation becomes:

ŵ(y) = arg max_w [−(y1 − w1)²/(2σn²) − (y2 − w2)²/(2σn²) + f(w)]    (29)

This is equivalent to solving the system composed of the following two equations, if f(w) is assumed to be strictly convex and differentiable:

(y1 − w1)/σn² + f_1(w) = 0    (30)

and:

(y2 − w2)/σn² + f_2(w) = 0    (31)

where f_1 and f_2 represent the derivatives of f(w) with respect to w1 and w2, respectively. Taking into account the relation (28), it can be written:

f(w) = log[3/(2π·σ²)] − (√3/σ)·√(w1² + w2²)    (32)

and the solutions of the system already mentioned are obtained with:

f_1(w) = −(√3/σ)·w1/√(w1² + w2²)    (33)

and:

f_2(w) = −(√3/σ)·w2/√(w1² + w2²)    (34)

leading to the bishrink estimator:

ŵ1 = [(√(y1² + y2²) − √3·σn²/σ)+ / √(y1² + y2²)]·y1    (35)
EXPERIMENTAL RESULTS
(Figure 6: output SNR (dB) versus input SNR (dB) for the MAP, MAP-Wiener, bishrink_f and bishrink-Wiener filters.)
V. CONCLUSION
In this article we proposed new methods of speech enhancement. First, on the basis of the statistical properties of speech signals and of their DWT coefficients, we showed that it is necessary to combine different MAP filters at different DEDWT decomposition levels for an efficient denoising. Combining the different types of filters deduced in this paper, we built two MAP filter associations. Next, we tested these associations by simulation. The results obtained, comparing the output SNR, are better than those of the bishrink filter and of the bishrink_f filter used in [1]. So the proposed method is better than the noise spectral subtraction method or the pure statistical denoising method, which have inferior SNR improvements versus the DEDWT-bishrink_f association, as proved in [1]. Moreover, these results show that the association that does not take into account the inter-scale dependency of the DWT coefficients is the best for denoising speech signals. The proposed method is efficient especially for low input-SNR signals (see Figure 6). This is the reason why it can be used in applications where the quality of the speech is very bad; for example, the proposed denoising method can be applied for enhancing the robustness of a Voice Activity Detector. When the SNR of the speech is high enough, the distortion introduced by the proposed denoising method is significant, so it is better to avoid its use in applications where a high quality of the speech signal is required.
1 Facultatea de Electronică și Telecomunicații, Centru de Cercetări pentru Prelucrarea și Securizarea Datelor, Calea Dorobanților 71-73, Cluj-Napoca, e-mail Iulian.Voicu@com.utcluj.ro
2 Facultatea de Electronică și Telecomunicații, Centru de Cercetări pentru Prelucrarea și Securizarea Datelor, Calea Dorobanților 71-73, Cluj-Napoca, e-mail Monica.Borda@com.utcluj.ro
Figure 5. Baboon, from left to right: first row: original image; image compressed using DWT at 0.5 bpp, 1 bpp and 1.5 bpp; second row: original image; image compressed using DTCWT at 0.5 bpp, 1 bpp and 1.5 bpp.
Figure 6. Peppers, from left to right: first row: original image; image compressed using DWT at 0.5 bpp, 1 bpp and 1.5 bpp; second row: original image; image compressed using DTCWT at 0.5 bpp, 1 bpp and 1.5 bpp.
procedure. As future work we will focus on improving even these results by considering this procedure. Figures 7 and 8 present, for the same baboon and peppers images, the R, G, B color planes of the compressed images at this bit rate.
As we have written before, the second goal was to verify whether different pairs of filters influence the final results.
In this direction we tested the 5/3 and 9/7 pairs of linear-phase filters for the DWT, while for the DTCWT we used in the first level the same 5/3 and 9/7 filters and, for the following levels, the q-shift filters designed by Kingsbury [8].
We found that the differences are not significant: for every option we found the same critical bit rate, or values slightly around it, where the DTCWT starts to perform better than the DWT.
Figure 7. Baboon: from left to right: first row: original image and original R, G, B color planes. Second row:
compressed image and compressed R, G, B color planes at 1.5bpp using DWT; Third row: compressed image
and compressed R, G, B color planes at 1.5bpp using DTCWT
Figure 8. Peppers: from left to right: first row: original image and original R, G, B color planes; second row:
compressed image and compressed R, G, B color planes at 1.5bpp using DWT; third row: compressed image and
compressed R, G, B color planes at 1.5bpp using DTCWT
IV. CONCLUSIONS
REFERENCES
[1] G. Strang, T. Q. Nguyen, Wavelets and Filter Banks, Wellesley-Cambridge Press, 1996.
[2] P. Vaidyanathan, Multirate Signal Processing, Prentice Hall, 1993.
[3] K. Varma, A. Bell, "JPEG2000 Choices and Tradeoffs for Encoders", IEEE Signal Processing Magazine, Nov. 2004.
[4] A. Skodras, C. Christopoulos, T. Ebrahimi, "The JPEG2000 Still Image Compression Standard", IEEE Signal Processing Magazine, September 2001.
[5] B. E. Usevitch, "A Tutorial on Modern Lossy Wavelet Image Compression: Foundations of JPEG2000", IEEE Signal Processing Magazine, September 2001.
[6] N. G. Kingsbury, "Image Processing with Complex Wavelets", Phil. Trans. Royal Society London A, no. 357, pp. 2543-2560, Sept. 1999.
[7] N. G. Kingsbury, "Complex Wavelets for Shift Invariant Analysis and Filtering of Signals", Journal of Applied and Computational Harmonic Analysis, May 2001.
[8] N. G. Kingsbury, "Design of Q-Shift Complex Wavelets for Image Processing Using Frequency Domain Energy Minimization", Proc. IEEE Conf. on Image Processing, Barcelona, Sept. 15-17, 2003.
[9] I. W. Selesnick, "The Design of Approximate Hilbert Transform Pairs of Wavelet Bases", IEEE Transactions on Signal Processing, vol. 50, no. 5, May 2002.
[10] I. Voicu, M. Borda, "New Method of Filters Design for Dual Tree Complex Wavelet Transform", Proceedings ISSCS, Iasi, Romania, vol. 2, July 2005.
[11] I. Voicu, M. Borda, "Complex Wavelet Transform in Compression Field", Proceedings EUROCON, Belgrade, Serbia & Montenegro, November 2005.
[12] J. M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients", IEEE Trans. Signal Processing, vol. 41, pp. 3445-3462, December 1993.
[13] J. Neumann, G. Steidl, "Dual-Tree Wavelet Transform in the Frequency Domain and an Application to Signal Classification", Dept. of Mathematics and Computer Science, September 2003.
[14] A. Isar, I. Nafornita, Reprezentari timp-frecventa, Ed. Politehnica, Timisoara, 1998.
V. FUTURE WORK
Future work will be focused on finding reliable methods for applying the DTCWT in the JPEG 2000 standard.
We think that complex wavelet transforms with improved directionality can perform better than the classical discrete wavelet transform in the compression field.
Of course, redundancy is not good for compression, but we hope to find a link between the coefficients of the two trees of the DTCWT; in this case, coding only a part of the coefficients would be an acceptable solution. In this way the redundancy can be removed and only the advantage of improved directionality would be exploited.
VI. ACKNOWLEDGEMENT
The first author was supported by a grant from the
National Council of Scientific Research and
Education (CNCSIS), Romania, code 245 BD.
I. INTRODUCTION
1 Politehnica University of Timisoara, Communications Dept., Bd. V. Parvan no. 2, 300223 Timisoara, e-mail miranda.nafornita@etc.upt.ro
2 Politehnica University of Timisoara, Communications Dept.
VI. REMARKS
In recent years the main focus of transport network evolution has been on increasing transport capacities and introducing data networking technologies and interfaces (e.g. Gigabit Ethernet). This evolution is complemented by ongoing initiatives to reduce the operational effort and
I. INTRODUCTION
DNA microarrays make use of the hybridization properties of nucleic acids to monitor DNA or RNA abundance on a genomic scale in different types of cells. The hybridization process takes place between surface-bound DNA sequences, the probes, and the DNA or RNA sequences in solution, the targets. Hybridization is the process of combining complementary, single-stranded nucleic acids into a single molecule. Nucleotides will bind to their complement under normal conditions, so two perfectly complementary strands will bind to each other readily. Conversely, due to the different geometries of the nucleotides, a single inconsistency between the two strands will prevent them from binding.
In oligonucleotide microarrays hundreds of
thousands of oligonucleotides are synthesized in situ
by means of photochemical reaction and mask
technology. Probe design in these microarrays is
based upon complementarity to the selected gene. An
important component in designing an oligonucleotide
array is ensuring that each probe binds to its target
with high specificity.
The dynamics of the hybridization process
underlying genomic expression is complex as
thermodynamic factors influencing molecular
Tm = ΔH° / (ΔS° + R·ln CT)    (2)

where R is the general gas constant, i.e. 1.987 cal/(K·mol), CT is the total strand concentration, and Tm is given in K. For non-self-complementary molecules, CT in (2) is replaced by CT/4.
The observed trend in nearest-neighbor stabilities at 37 °C [5] is GC/CG = CG/GC > GG/CC > CA/GT = GT/CA = GA/CT = CT/GA > AA/TT > AT/TA > TA/AT. This trend suggests that both sequence and base composition are important determinants of DNA duplex stability. It has long been recognized that DNA stability depends on the percent G-C content.
The hybridization reaction between the probe P and the target T, forming the complex C, is:

P + T ⇌ C    (3)

with forward and backward rate constants kf and kb. The fraction of occupied probe sites is:

θ(t) = [x/(x + K)]·[1 − e^(−(x+K)·kf·t)]    (4)

where K = kb/kf. The measured intensity is then:

I(x, t) = I0 + [b·x/(x + K)]·[1 − e^(−(x+K)·kf·t)]    (5)

and, at equilibrium:

I(x) = I0 + b·x/(x + K)    (6)

Fig. 1. Equilibrium intensity I(x), rising from the background level I0 and saturating towards I0 + b.
C(t) = [T/(T + K)]·[1 − exp(−t/τ)]    (8)

where K, defined as in (4), is an equilibrium dissociation constant, and τ = 1/[kf·(T + K)] denotes a time constant. The curve to be fitted is therefore of the form:

y = a·(1 − e^(−bx))    (9)

The sum of squared residuals is:

R² = Σi [yi − a·(1 − e^(−b·xi))]²    (10)

With the residuals:

εi = yi − a·(1 − e^(−b·xi))    (11)

it becomes:

R² = Σi εi²    (12)

Setting the partial derivatives to zero:

∂(R²)/∂a = −2·Σi εi·(1 − e^(−b·xi)) = 0    (13)

∂(R²)/∂b = −2·Σi εi·a·xi·e^(−b·xi) = 0    (14)

This nonlinear system is solved iteratively, updating:

a(k+1) = a(k) + Δa,   b(k+1) = b(k) + Δb    (15)

where the corrections Δa and Δb are obtained by linearizing (13) and (14) around the current estimates a(k) and b(k), with the residuals:

εi,k = yi − a(k)·(1 − e^(−b(k)·xi))    (16)
REFERENCES
I. BASICS
"Patents reveal solutions to technical problems and
they represent an inexhaustible source of information:
more than 80 percent of man's technical knowledge is
described in patent literature." - Statement of the
European Patent Office [4].
Studying patterns in the way Altshuller, the "father" of TRIZ, did about fifty years ago, when he started his career as a promising young inventor, means using a certain frame of work different from the usual one, based on classification.
Analyzing around 400,000 patents, Altshuller recognized that systems do not evolve randomly, but along demonstrable principles, called patterns of evolution, which are general in nature and can be applied to any system or product. He also made the observation that many inventive technical problems from various fields of engineering are solved by the same generic approaches.
Specialists in TRIZ have already analyzed almost 2 million patents from all around the globe, which represent 10% of the total number of issued patents. Table 1 illustrates the classification system used by Altshuller [1] when classifying and studying world-wide patents.
The results of the study of patents must be put in a certain abstract pattern form in order to produce uniform results, so that they contain reusable information.
TRIZ specialists proposed [4] a Patent Abstract Form
which mainly contains the following items:
A. Legal information
B. Abstract of extracted knowledge
C. State of the art
D. Elements and functions of the prototype system
E. Resolved conflict
e1. type of contradiction
e2. structure of problem
F. Invented system
f1. contradiction
Table 1
Level                         | System changing                 | Variants number                  | Used knowledge       | Inventions pool
1. Standard solution          | Trade-off, quantitative changes | A few                            | Someone's profession | 32%
2. Change of a system         | Qualitative change              | Tens                             | One industry         | 45%
3. Solution across industries | Radically changed system        | Hundreds                         | Many industries      | 19%
4. Solution across sciences   | New system created              | Thousands, tens of thousands     | Many sciences        | 4%
5. Discovery                  | New discovery                   | Hundreds of thousands, millions  | New knowledge created | 0.3%
Table 2
Pattern: Technology follows a life cycle of birth, growth, maturity, and decline. | Example: the transistor.
Stage 1. A system does not yet exist, but important conditions for its emergence are being developed. | 1. Early experiments like the "cat's whisker", the crystal detector, etc.
Stage 2. A new system appears due to a high-level invention, but development is slow. | 2. Late '40s: W. Shockley, J. Bardeen and W. Brattain, inventors of the point-contact transistor.
Stage 3. Society recognizes the value of the new system. | 3. Companies invest in producing the new device and replacing the vacuum tubes where possible. Financial resources available.
Stage 4. Resources for the original system concept end. | 4. New components based on the new concept but with new materials and design; need for more complex function and smaller devices.
Stage 5. The next generation of the system emerges to replace the original system. | 5. J. Kilby and R. Noyce, inventors of the integrated circuit.
Stage 6. Some limited use of the original system may coexist with the new system. | 6. Despite the explosive development of the IC industry, the classical transistor is still used in certain electronic circuits.
Other patterns of evolution: increasing ideality; uneven development of subsystems, resulting in contradictions; increasing dynamism and controllability; increasing complexity, followed by simplicity through integration; matching and mismatching of parts; transition from macro systems to microsystems, using energy fields to achieve better performance or control; decreasing human involvement with increasing automation.
Case 1:
Multi-frequency light source - USPA 20060193032,
August 31, 2006
The text says: "a multi-frequency light producing method and apparatus multiplies the number of optical channels present in an incident wavelength division multiplexed (WDM) signal light source by four-wave mixing (FWM) the WDM signal with at least one pump lightwave at least one time." Further on in the text, the contradictions to be solved are set out: "For a system employing a large number of wavelengths, using a different light source such as a laser diode to generate each optical wavelength can be expensive and consume a large amount of electric power." Then effects are used in solving the contradictions: "Four-wave mixing (FWM) is a phenomenon wherein three optical lightwaves of different frequencies (wavelengths), propagating in a nonlinear medium interact with one another due to the nonlinear optical effect of the conversion medium. This interaction generates additional optical signals having different frequencies from the three original signals." Using Physical and Technical Effects helps inventors to obtain high-level, even revolutionary, solutions on the path to ideality [2].
Fig. 1 (S1, S2).
Case 2:
Auxiliary power conservation for telecommunications site - USPA 20060182262, August 17, 2006
The abstract says: "a method and/or system is provided for managing power in a telecommunications site having an auxiliary power supply that selectively powers the site when an outage is experienced in connection with a primary power supply for the site. The method includes: detecting a power outage to the primary power supply; determining the duration for the detected outage; selecting a power conservation protocol based on the determined duration of the outage; switching to the auxiliary power supply and implementing the selected power conservation protocol." Further on in the text
III. CONCLUSIONS
The study of patents is of fundamental importance for TRIZ, and not only for TRIZ, because patents represent the primary source of knowledge, and the methodical study of this fund can bring up new heuristics. Because this kind of study is very difficult and not very productive (one new heuristic per 10,000 patents [4]), new methods of study, perhaps based on artificial intelligence, are needed.
Case 3:
Fire-resistant power and/or telecommunication cable - USPA 20060148939, July 6, 2006.
In the abstract it is stated that: "a cable capable of withstanding extreme temperature conditions...remain in operation during a defined length of time on being subjected to high levels of heat and/or directly to flames." The Administrative Contradiction is established: "is essential to maximize the ability of a cable to retard the propagation of flames, and also to withstand fire." After stating that: "a cable is constituted in general terms by at least one conductor part extending inside at least one insulator part", the physical contradiction appears: "amongst the best insulating materials and/or protection materials used in cable-making, many of them are unfortunately also highly flammable."
The state of the art of the technique is presented further on: "The technique that has been in the most widespread use until now consists of implementing halogen compounds in the form of a halogen by-product dispersed in a polymer matrix." Safety constraints are also present: "present regulations are tending to ban the use of substances of that type, essentially because of their toxicity and because of
REFERENCES
[1] Altshuller, Genrich, The Innovation Algorithm, Technical Innovation Center, Inc., Worcester, MA, 2000.
[2] Altshuller, Genrich, Creativity as an Exact Science. The Theory of the Solution of Inventive Problems, Studies in Cybernetics: 5, Brunel University, Gordon and Breach, Science Publishers, Inc., 1984.
[3] Salamatov, Yuri, TRIZ: The Right Solution at the Right Time. A Guide to Innovative Problem Solving, Insytec B.V., 1999.
[4] Savransky, Semyon D., Engineering of Creativity. Introduction to TRIZ Methodology of Inventive Problem Solving, CRC Press, 2000.
[5] www.uspto.gov
Fig. 2. Patent-study flowchart. A studied patent is first checked: is it a new technical system (TS)? If yes, it goes to the new-TS patent analysis; otherwise, to the contradiction patent analysis. The decision nodes are: Solved contradiction?; Unknown effect used in solution? (a known effect is described as a natural phenomenon or a technical effect); Solution more effective than previous?; Known trends of TS evolution? (what? how?); New line of TS evolution?; Correct decision?; Common usage? Depending on the answers, the case is routed through the Effects, Evolution, or Effects & Evolution procedures, described, and stored in the TRIZ data base or discarded to the waste deposit; then the procedure starts over.
I. INTRODUCTION
for (j = 1; j <= n; j++) {
    xor = 0;
    for (i = 0; i < 8; i++) {
        /* XOR together the feedback taps (cells with ax[i] == 1),
           skipping the cell just before the one selected by ind */
        if ((ax[i] == 1) && ((ind - 1) != i))
            xor = xor ^ sr[i][j - 1];
        /* shift the register: each cell takes the previous
           cell's value from the previous step */
        if (i != 0)
            sr[i][j] = sr[i - 1][j - 1];
    }
    /* inject the current input column into cell ind */
    sr[ind][j] = col[ind][j - 1] ^ sr[ind - 1][j - 1];
    /* close the feedback into cell 0 */
    sr[0][j] = xor ^ sr[ind][j];
}
/* after n steps, store the register state as the partial result */
for (i = 0; i < 8; i++)
    rez[i][ind] = sr[i][n];
}
In this procedure, the result is calculated by shifting the elements corresponding to the SRi's, with i from 1 to 7 except ind (the argument of the procedure); by connecting the feedback into SR0 while XOR-ing all the SRi's that have a corresponding connection (the power i exists in the chosen polynomial) together with the current SRind; and by XOR-ing the element of the corresponding column with the element of the corresponding SRind.
The difference between prel_SN and prel_SA is given by SRind: it is taken into consideration in SR0 in the procedure prel_SA, and it does not appear in SR0 in the procedure prel_SN.
The final result is calculated by XOR-ing all the partial results obtained in the 8 steps on each and every one of the eight columns.
IV. CONCLUSIONS
Analyzing the experiments already presented, it can be observed that the simulation speed for the MISR is higher than the simulation speed for the LFSR.
The mathematical expressions of the generating polynomial weights, reported in this paper, were very useful for the simulation of the corresponding LFSR and MISR.
V. REFERENCES
[1] B. Schneier, Applied Cryptography: Protocols, Algorithms, and Source Code in C, John Wiley and Sons, New York, 1996.
[2] H. Niederreiter, "A Public-Key Cryptosystem Based on Shift Register Sequences", Proceedings of EUROCRYPT '85, Linz, Austria, 1985.
I. INTRODUCTION
The page format is A4. The articles must be of 6
pages or less, tables and figures included.
II. GUIDELINES
The paper should be sent in this standard form. Use a
good quality printer, and print on a single face of the
sheet. Use a double column format with 0.5 cm in
between columns, on an A4, portrait oriented,
standard size. The top and bottom margins should be
of 2.28 cm, and the left and right margins of 2.54 cm.
Microsoft Word for Windows is recommended as a
text editor. Choose Times New Roman fonts, and
single spaced lines. Font sizes should be: 18 pt bold
for the paper title, 12 pt for the author(s), 9 pt bold for
the abstract and keywords, 10 pt capitals for the
section titles, 10 pt italic for the subsection titles;
distance between section numbers and titles should be
of 0.25 cm; use 10 pt for the normal text, 8 pt for
affiliation, footnotes, figure captions, and references.
III. FIGURES AND TABLES
Figures should be centered, and tables should be left
aligned, and should be placed after the first reference
in the text. Use abbreviations such as Fig.1. even at
the beginning of the sentence. Leave an empty line
before and after equations. Equation numbering
should be simple: (1), (2), (3) and right aligned:
x(t) = ∫_{−a}^{a} y(τ) dτ .    (16)
Value | Unit
2.4   | A
10.0  | V
V. REMARKS
A. Abbreviations and acronyms
Abbreviations and acronyms should be explained
when they appear for the first time in the text.