IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 57, NO. 5, MAY 2010

Seeker Optimization Algorithm for Digital IIR Filter Design

Chaohua Dai, Weirong Chen, Member, IEEE, and Yunfang Zhu

Abstract: Since the error surface of digital infinite-impulse-response (IIR) filters is generally nonlinear and multimodal, global optimization techniques are required in order to avoid local minima. In this paper, a seeker-optimization-algorithm (SOA)-based evolutionary method is proposed for digital IIR filter design. SOA is based on the concept of simulating the act of human searching, in which the search direction is based on the empirical gradient obtained by evaluating the response to position changes, and the step length is based on uncertainty reasoning using a simple fuzzy rule. The algorithm's performance is studied in comparison with three versions of differential evolution algorithms, four versions of particle swarm optimization algorithms, and a genetic algorithm. The simulation results show that SOA is superior or comparable to the other algorithms on the employed examples and can be efficiently used for IIR filter design.

Index Terms: Digital infinite-impulse-response (IIR) filter design, global optimization, heuristics, seeker optimization algorithm (SOA), system identification.

I. INTRODUCTION

WITH the advantages of digital signal processing (DSP) over analog systems [1], [2], digital filters have attracted increasing attention in recent years due to demands for improved performance in high-data-rate digital communication systems and in wideband image/video processing systems. Digital infinite-impulse-response (IIR) filters can often provide much better performance at less computational cost than equivalent finite-impulse-response (FIR) filters [3] and have become the target of growing interest [1], [2], [4]. However, because the error surface of IIR filters is usually nonlinear and multimodal, conventional gradient-based design methods may easily get stuck in local minima of the error surface [4], [5]. Therefore, some researchers have attempted to develop design methods based on modern heuristic optimization algorithms such as the genetic algorithm (GA) [6]-[9], simulated annealing (SA) [10], tabu search (TS) [5], ant colony optimization (ACO) [1], the immune algorithm (IA) [2], differential evolution (DE) [11], [12], and particle swarm optimization (PSO) [13], [14].

Manuscript received March 31, 2008; revised August 19, 2009. First published September 1, 2009; current version published April 14, 2010. This
work was supported in part by the National Natural Science Foundation of
China under Contract 60870004 and in part by the Doctor Student Innovation
Foundation of Southwest Jiaotong University.
C. Dai and W. Chen are with the School of Electrical Engineering, Southwest
Jiaotong University, Chengdu 610031, China (e-mail: dchzyf@yahoo.com.cn).
Y. Zhu is with the Department of Computer and Communication Engineering, Southwest Jiaotong University, Emei 614202, China.
Digital Object Identifier 10.1109/TIE.2009.2031194

A GA usually discovers the promising regions of the search space very quickly; however, it often has two drawbacks: premature convergence and a lack of good local search ability [2], [12]. Although SA is easy to implement and good at local convergence, it depends on the initial solution and might often require too many cost-function evaluations to converge to the global minimum [5]. Similar to SA, TS is usually sensitive to the starting point of the search [2]. Moreover, to the best of our knowledge, TS often needs large amounts of memory to store information about past search steps when it deals with very complex practical problems. Up to now, IA has not attracted the same kind of interest from researchers as evolutionary computation (EC) [2]. Moreover, the area lacks a general framework for designing IA systems [15]. Because the IA in [2] used a K-tournament selection scheme, a one-point mutation operator, and greedy reproduction on a binary-coded population, it can be viewed as a form of EC and cannot easily escape the limitations of EC: premature convergence, search stagnation, and others [16]. ACO models the social behavior of real ant colonies and was originally developed for combinatorial optimization problems [17], [18]. Some studies have shown that ACO also suffers from premature convergence [1] and search stagnation [17]. Although it has been shown that DE and PSO both have several advantages for digital IIR filter design [11]-[14], this does not mean that they are free of limitations. Several studies have shown that DE is prone not only to premature convergence but also to stagnation [19] and that successful location of the global optimum depends on choosing the correct control parameters [20]. In addition, there is still very limited theoretical understanding of how DE works and why it performs well [21]. Meanwhile, the performance of PSO also depends on its parameters [22] and may be affected by premature convergence and stagnation [4], [23].
The seeker optimization algorithm (SOA), originally proposed in [24], is a new population-based heuristic search algorithm that attempts to simulate the act of human searching for real-parameter optimization. In this paper, an SOA-based evolutionary method is proposed for digital IIR filter design, and its performance is compared with that of three versions of DE, four versions of PSO, and GA. The simulation results show that SOA performs better than, or comparably to, the other algorithms, at least for the employed examples.

The organization of this paper is as follows. Section II presents the design problem. In Section III, SOA is described. Then, in Section IV, SOA is applied to the design of the IIR filter. Finally, conclusions are drawn in Section V.

0278-0046/$26.00 © 2010 IEEE

Fig. 1. Block diagram of the system-identification process using an IIR filter.

II. DESCRIPTION OF THE PROBLEM


Consider the digital IIR filter with the input-output relationship governed by the following difference equation:

y(k) + \sum_{i=1}^{M} b_i y(k-i) = \sum_{i=0}^{L} a_i x(k-i)    (1)

where x(k) and y(k) are the filter's input and output, respectively; M (>= L) is the filter order; and b_i and a_i are the filter coefficients.
The application of IIR filters in system identification has been widely studied, since many problems encountered in signal processing can be characterized as a system-identification problem (Fig. 1) [1]-[3], [5], [10], [12]-[14]. Hence, in this paper, IIR filters are designed for the system-identification purpose. In this case, the parameters of the IIR filter are successively adjusted by SOA until the error between the outputs of the filter and the system is minimized. In other words, the task is formulated as a minimization problem with the mean square error (MSE) as the objective function J(w):

\min_{w \in W} J(w) = \min_{w \in W} \frac{1}{N} \sum_{k=1}^{N} (d(k) - y(k))^2    (2)

where w = [a_0 a_1 ... a_L b_1 b_2 ... b_M]^T denotes the filter coefficient vector, W represents the coefficient space, d(k) and y(k) are the desired and actual responses of the filter, respectively, and N is the number of samples used to calculate the objective function.

Not all filters defined by (1) are feasible or implementable. A causal linear time-invariant system is stable if and only if all of its poles lie inside the unit circle. Another efficient way of maintaining stability is to convert the direct form to a lattice form and make sure that all reflection coefficients k_i, 0 <= i <= M-1, have magnitudes less than one [10]. We adopt the latter approach in this paper, as [10] did. Thus, the actual filter coefficient vector, namely, the real-coded solution of SOA, is w = [a_0 a_1 ... a_L k_0 k_1 ... k_{M-1}]^T. In this case, the coefficient space W is formed by the constraints a_i \in [-2, 2] (i = 0, ..., L) and |k_i| < 1 (i = 0, ..., M-1) for the simulation study in this paper.

III. SOA

SOA operates on a search population of s D-dimensional position vectors, which encode the potential solutions to the optimization problem at hand, i.e., x_i = [x_{i1}, ..., x_{ij}, ..., x_{iD}], i = 1, 2, ..., s, where x_{ij} is the jth element of x_i and s is the population size. An individual of this population is called a seeker. Assume that the optimization problems to be solved are minimization problems.

Fig. 2. Main steps of SOA.

A. Implementation of SOA

The main steps of SOA are shown in Fig. 2. In order to add a social component for the social sharing of information, a neighborhood is defined for each seeker. In the present simulations, the population is randomly divided into three subpopulations of equal size, and all the seekers in the same subpopulation constitute a neighborhood. A search direction d_i(t) = [d_{i1}, ..., d_{iD}] and a step-length vector \alpha_i(t) = [\alpha_{i1}, ..., \alpha_{iD}] are computed (see Sections III-B and III-C) for the ith seeker at time step t, where \alpha_{ij}(t) >= 0 and d_{ij}(t) \in {-1, 0, 1}, i = 1, 2, ..., s; j = 1, 2, ..., D. Then, the jth element of the ith seeker's position is updated by

x_{ij}(t+1) = x_{ij}(t) + \alpha_{ij}(t) d_{ij}(t).    (3)

Since each subpopulation searches using only its own information, it tends to converge to a local optimum. To avoid this situation, an intersubpopulation learning strategy is used, i.e., the worst two positions of each subpopulation are combined with the best position of each of the other two subpopulations by the following binomial crossover operator:

x_{k_n j,worst} = \begin{cases} x_{l j,best}, & \text{if } R_j \le 0.5 \\ x_{k_n j,worst}, & \text{else} \end{cases}    (4)

where R_j is a uniformly random real number within [0, 1], x_{k_n j,worst} denotes the jth element of the nth worst position in the kth subpopulation, x_{l j,best} is the jth element of the best position in the lth subpopulation, the indices k, n, and l are constrained by the combinations (k, n, l) \in {(1, 1, 2), (1, 2, 3), (2, 1, 1), (2, 2, 3), (3, 1, 1), (3, 2, 2)}, and j = 1, ..., D. In this way, the good information obtained by each subpopulation is exchanged among the subpopulations, and the diversity of the population is thereby increased.
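As a concrete illustration, the position update (3) and the intersubpopulation crossover (4) can be sketched in a few lines of Python. This is only a sketch under assumed names (`step_update`, `intersubpop_learning`, `subpops`, `fitness`), not the authors' implementation; fitness bookkeeping is simplified, and the (k, n, l) combinations are written 0-indexed.

```python
import random

def step_update(position, direction, step_length):
    """Eq. (3): x_ij(t+1) = x_ij(t) + alpha_ij(t) * d_ij(t), elementwise."""
    return [x + a * d for x, a, d in zip(position, step_length, direction)]

def intersubpop_learning(subpops, fitness):
    """Eq. (4): the two worst seekers of each subpopulation inherit, element
    by element with probability 0.5, the best position of one of the other
    two subpopulations. `subpops` is a list of three lists of positions;
    `fitness` mirrors its shape (smaller is better)."""
    # (k, n, l) combinations from the paper, 0-indexed here:
    # n = 0 -> worst seeker of subpop k, n = 1 -> second worst.
    combos = [(0, 0, 1), (0, 1, 2), (1, 0, 0), (1, 1, 2), (2, 0, 0), (2, 1, 1)]
    for k, n, l in combos:
        order = sorted(range(len(subpops[k])), key=lambda i: fitness[k][i])
        worst = order[-(n + 1)]                       # nth worst in subpop k
        best = min(range(len(subpops[l])), key=lambda i: fitness[l][i])
        best_pos = subpops[l][best]
        subpops[k][worst] = [
            best_pos[j] if random.random() <= 0.5 else subpops[k][worst][j]
            for j in range(len(best_pos))
        ]
    return subpops
```

In a full run, these two operations would be applied once per generation, after all seekers' objective values (2) have been evaluated.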
B. Search Direction
The gradient has played an important role in the history of search methods [25]. The search space may be viewed as a gradient field, and a so-called empirical gradient (EG) can be determined by evaluating the response to position changes, particularly when the objective function is not available in differentiable form at all. A seeker can then follow an EG to guide his search. Since SOA does not involve the magnitude of the EG, the search direction can be determined solely by the signum function of a better position minus a worse position; for example, an empirical search direction is d = sgn(x' - x'') when x' is better than x'', where sgn(.) is the signum function applied to each element of the input vector. In SOA, every seeker selects his search direction based on several EGs by evaluating the current or historical positions of himself and his neighbors. These are detailed as follows.
Swarms are a class of entities found in nature that specialize in mutual cooperation in executing their routine needs and roles [26]. There are two extreme types of cooperative behavior. One, namely egotistic, is entirely pro-self; the other, namely altruistic, is entirely pro-group [27]. Each seeker i, as a single sophisticated agent, is uniformly egotistic, believing that he should go toward his historical best position p_{i,best}(t) through cognitive learning [28]. Then, an EG from x_i(t) to p_{i,best}(t) can be involved for the ith seeker at time step t. Hence, each seeker i is associated with an empirical direction called the egotistic direction d_{i,ego}(t) = [d_{i1,ego}, d_{i2,ego}, ..., d_{iD,ego}]:

d_{i,ego}(t) = \mathrm{sgn}(p_{i,best}(t) - x_i(t)).    (5)
On the other hand, based on altruistic behavior, the seekers in the same neighborhood region cooperate explicitly, communicate with each other, and adjust their behaviors in response to others to achieve the desired goal [27]. That means that the seekers exhibit an entirely pro-group behavior through social learning [28]. The population then exhibits a self-organized aggregation behavior, whose positive feedback usually takes the form of attraction toward a given signal source [29]. For a black-box problem in which the ideal global minimum value is unknown, the neighbors' historical best position g_best(t) or the neighbors' current best position l_best(t) is used as the attraction signal source of the self-organized aggregation behavior. Hence, each seeker i is associated with two optional altruistic directions, i.e., d_{i,alt1}(t) and d_{i,alt2}(t):

d_{i,alt1}(t) = \mathrm{sgn}(g_best(t) - x_i(t))    (6)
d_{i,alt2}(t) = \mathrm{sgn}(l_best(t) - x_i(t)).    (7)

Moreover, seekers enjoy the property of proactiveness: seekers do not simply act in response to their environment; they are able to exhibit goal-directed behavior [30]. In addition, future behavior can be predicted and guided by past behavior [31]. As a result, a seeker may proactively change his search direction and exhibit goal-directed behavior according to his past behavior. Hence, each seeker i is associated with an empirical direction called the proactiveness direction d_{i,pro}(t):

d_{i,pro}(t) = \mathrm{sgn}(x_i(t_1) - x_i(t_2))    (8)

where t_1, t_2 \in {t, t-1, t-2} and x_i(t_1) is better than x_i(t_2).
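The four empirical directions (5)-(8) are all elementwise signum operations on position differences and can be sketched as follows. The helper names (`empirical_directions`, `p_best_i`, `g_best`, `l_best`) are assumptions made for illustration, not the paper's notation in code form.

```python
def sgn(v):
    """Elementwise signum of a vector: each entry mapped to -1, 0, or 1."""
    return [(x > 0) - (x < 0) for x in v]

def sub(u, v):
    """Elementwise difference of two equal-length vectors."""
    return [a - b for a, b in zip(u, v)]

def empirical_directions(x_i, p_best_i, g_best, l_best, x_t1, x_t2):
    """x_i: current position; p_best_i: the seeker's historical best (5);
    g_best / l_best: neighbors' historical / current best (6)-(7);
    x_t1, x_t2: past positions with x_t1 better than x_t2 (8)."""
    d_ego = sgn(sub(p_best_i, x_i))   # egotistic direction (5)
    d_alt1 = sgn(sub(g_best, x_i))    # altruistic direction (6)
    d_alt2 = sgn(sub(l_best, x_i))    # altruistic direction (7)
    d_pro = sgn(sub(x_t1, x_t2))      # proactiveness direction (8)
    return d_ego, d_alt1, d_alt2, d_pro
```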


According to human rational judgment, the actual search direction of the ith seeker, d_i(t) = [d_{i1}, d_{i2}, ..., d_{iD}], is based on a compromise among the aforementioned four empirical directions, namely, d_{i,ego}(t), d_{i,alt1}(t), d_{i,alt2}(t), and d_{i,pro}(t). In this paper, the jth element of d_i(t) is selected by applying the following proportional selection rule:

d_{ij} = \begin{cases} 0, & \text{if } r_j \le p_j^{(0)} \\ 1, & \text{if } p_j^{(0)} < r_j \le p_j^{(0)} + p_j^{(1)} \\ -1, & \text{if } p_j^{(0)} + p_j^{(1)} < r_j \le 1 \end{cases}    (9)

where i = 1, 2, ..., s, j = 1, 2, ..., D, r_j is a uniform random number in [0, 1], and p_j^{(m)} (m \in {0, 1, -1}) is defined as follows: in the set {d_{ij,ego}, d_{ij,alt1}, d_{ij,alt2}, d_{ij,pro}}, which is composed of the jth elements of d_{i,ego}(t), d_{i,alt1}(t), d_{i,alt2}(t), and d_{i,pro}(t), let num(1) be the number of ones, num(-1) the number of minus ones, and num(0) the number of zeros; then p_j^{(1)} = num(1)/4, p_j^{(-1)} = num(-1)/4, and p_j^{(0)} = num(0)/4.
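The proportional selection rule (9) amounts to a vote among the four empirical directions on each dimension: the probability of choosing -1, 0, or 1 equals the fraction of directions voting for that value. A minimal sketch, assuming the four direction vectors have already been computed:

```python
import random

def select_direction(d_ego, d_alt1, d_alt2, d_pro):
    """Eq. (9): for each dimension j, draw r_j uniform in [0, 1] and pick
    0, 1, or -1 with probabilities num(0)/4, num(1)/4, num(-1)/4, where the
    counts come from the four empirical directions' jth elements."""
    direction = []
    for votes in zip(d_ego, d_alt1, d_alt2, d_pro):
        p_zero = votes.count(0) / 4.0
        p_plus = votes.count(1) / 4.0
        r = random.random()
        if r <= p_zero:
            direction.append(0)
        elif r <= p_zero + p_plus:
            direction.append(1)
        else:
            direction.append(-1)
    return direction
```

Note that when all four directions agree on a dimension, the rule becomes deterministic; disagreement injects stochastic exploration.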
C. Step Length
In the continuous space, there often exists a neighborhood region close to an extremum point in which the fitness values of the input variables are proportional to their distances from the extremum point. It may be assumed that better points are likely to be found in the neighborhood of families of good points, and that the search should therefore be intensified in regions containing good solutions through a focusing search [32]. Hence, from the standpoint of human searching, it could be believed that one may find near-optimal solutions in a narrower neighborhood of a point with lower fitness value and, contrariwise, in a wider neighborhood of a point with higher fitness value.

Fuzzy systems arose from the desire to describe complex systems with linguistic descriptions [33]. According to the human focusing search discussed earlier, the uncertainty reasoning of human searching can be described by natural linguistic variables and a simple control rule: If {fitness value is small} (the conditional part), then {step length is short} (the action part). This understanding and linguistic description of human searching makes a fuzzy system a good candidate for simulating human focusing-search behavior.

Different optimization problems often have different ranges of fitness values. To design a fuzzy system applicable to a wide range of optimization problems, the fitness values of all the seekers are sorted in descending order and turned into sequence numbers from 1 to s, which serve as the inputs of fuzzy reasoning. A linear membership function is used in the conditional part (fuzzification) since the universe of discourse is a given set of numbers, i.e., {1, 2, ..., s}. The expression is

\mu_i = \mu_{\max} - \frac{s - I_i}{s - 1} (\mu_{\max} - \mu_{\min})    (10)

Fig. 3. Action part of fuzzy reasoning.

where I_i is the sequence number of x_i(t) after sorting the fitness values and \mu_{\max} is the maximum membership degree value, which is equal to or a little less than 1.0. In this paper, \mu_{\max} = 0.95.

In the action part (defuzzification), the Bell membership function \mu(\alpha_{ij}) = e^{-\alpha_{ij}^2 / (2\delta_j^2)} (i = 1, ..., s; j = 1, ..., D) is used for the jth element of the ith seeker's step length. For the Bell function, the membership degree values of input variables beyond [-3\delta_j, 3\delta_j] are less than 0.0111 (\mu(3\delta_j) = 0.0111) and can be neglected for a linguistic atom [34]. Thus, the minimum membership value \mu_{\min} = 0.0111 is set. Moreover, the parameter \delta_j of the Bell membership function is the jth element of the vector \delta = [\delta_1, ..., \delta_D], which is given by

\delta = \omega \cdot \mathrm{abs}(x_{best} - x_{rand})    (11)

where abs(.) returns an output vector in which each element is the absolute value of the corresponding element of the input vector. The parameter \omega is used to decrease the step length with increasing time step so as to gradually improve the search precision; in this paper, \omega is linearly decreased from 0.9 to 0.1 during a run. x_{best} and x_{rand} are the best seeker and a randomly selected seeker, respectively, in the same subpopulation to which the ith seeker belongs. Notice that x_{rand} is different from x_{best} and that \delta is shared by all the seekers in the same subpopulation. Then, the action part of the fuzzy reasoning (shown in Fig. 3) gives the jth element of the ith seeker's step length \alpha_i = [\alpha_{i1}, ..., \alpha_{iD}] (i = 1, 2, ..., s; j = 1, 2, ..., D):

\alpha_{ij} = \delta_j \sqrt{-2 \ln(\mathrm{RAND}(\mu_i, 1))}    (12)

where \delta_j is the jth element of the vector \delta in (11), ln(.) is the natural logarithm, and RAND(\mu_i, 1) returns a uniform random number within the range [\mu_i, 1], which introduces randomness into each element of \alpha_i and improves local search capability.
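Equations (10)-(12) can be sketched as follows. This is an illustrative reading, with `membership`, `bell_width`, and `step_length` as assumed helper names; the sqrt(-2 ln(.)) form is the inverse of the Bell membership function as reconstructed above.

```python
import math
import random

MU_MAX, MU_MIN = 0.95, 0.0111   # membership bounds used in the paper

def membership(rank, s):
    """Eq. (10): map a seeker's rank I_i (1 = worst, ..., s = best after a
    descending sort of fitness values) linearly onto [MU_MIN, MU_MAX]."""
    return MU_MAX - (s - rank) / (s - 1) * (MU_MAX - MU_MIN)

def bell_width(x_best, x_rand, omega):
    """Eq. (11): delta_j = omega * |x_best_j - x_rand_j|, with omega
    decreased linearly from 0.9 to 0.1 over the run."""
    return [omega * abs(b - r) for b, r in zip(x_best, x_rand)]

def step_length(mu_i, delta):
    """Eq. (12): alpha_ij = delta_j * sqrt(-2 ln RAND(mu_i, 1)); the random
    draw in [mu_i, 1] makes better seekers (mu_i near 1) take shorter steps."""
    return [d_j * math.sqrt(-2.0 * math.log(random.uniform(mu_i, 1.0)))
            for d_j in delta]
```

Since mu_i -> 1 drives the logarithm toward zero, the best-ranked seeker performs the finest local search, which is exactly the fuzzy rule "small fitness value, short step length."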
D. Further Analysis of SOA

Unlike GA, SOA conducts a focusing search by following promising empirical directions in order to converge to the optimum in as few generations as possible. In this way, it does not easily get lost and is able to locate the region in which the global optimum exists.

As far as we know, PSO is not good at choosing step length [35], while DE sometimes has a limited ability to move its population by large distances across the search space and may face stagnation [19]. Unlike PSO and DE, SOA deals with search direction and step length independently. Due to the fuzzy rule If {fitness value is small}, then {step length is short}, the better the position of a seeker, the shorter his step length. As a result, from the worst seeker to the best seeker, the search changes from a coarse one to a fine one, which ensures that the population can not only maintain good search precision but also find new regions of the search space. Consequently, at every time step, some seekers are better suited for exploration, while others are better suited for exploitation. In addition, due to the self-organized aggregation behavior and the decreasing parameter \omega in (11), the feasible search range of the seekers decreases with increasing time step. Hence, the population favors exploration at the early stage and exploitation at the late stage. In a word, both at every time step and over the whole search process, SOA can effectively balance exploration and exploitation, which ensures its effectiveness and efficiency [36].
According to [37], a nearer-is-better (NisB) property is almost always assumed: most iterative stochastic optimization algorithms, if not all, at least from time to time look around a good point in order to find an even better one. Furthermore, Clerc [37] also pointed out that an effective algorithm may perfectly switch from a NisB assumption to a nearer-is-worse (NisW) one and vice versa. We believe that SOA potentially possesses the NisB property because of its use of fuzzy reasoning and that it can switch between a NisB assumption and a NisW one, for two main reasons. On the one hand, the search direction of each seeker is based on a compromise among several empirical directions, and different seekers often learn from different empirical points on different dimensions instead of from a single good point, as posited by the NisB assumption. On the other hand, the uncertainty (fuzzy) reasoning used by SOA leaves a seeker's step length uncertain, which may bring a seeker nearer to one good point or farther away from another. Both aspects can boost the diversity of the population. Hence, from Clerc's point of view [37], this further supports the effectiveness of SOA.
IV. S IMULATION R ESULTS
Since their proposal in 1995, PSO and DE have been receiving increasing interest from the EC community as two
of the relatively new and powerful population-based heuristic
algorithms, and they both have been successfully applied to
digital IIR filter design [11][14]. Thus, the proposed method
is compared mainly with DE, PSO, and their recently modified
versions: the original DE (DE/rand/1/bin, F = 0.5, CR = 0.9)
[38], DE with self-adapting control parameters (SACP-DE)
[38], self-adaptive DE (SaDE) [39], PSO with adaptive inertia
weight (PSO-w: learning rate c1 = c2 = 2, inertia weight linearly decreased from 0.9 to 0.4 with increasing run time, and
the maximum velocity vmax is set at 20% of the dynamic range
of the variable on each dimension) [40], PSO with constriction
factor (PSO-cf: c1 = c2 = 2.01 and constriction factor =
0.729844) [41], comprehensive learning particle swarm optimizer (its parameters follow the suggestions from [42], except

1714

IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 57, NO. 5, MAY 2010

TABLE I
R ESULTS OF E ACH A LGORITHM . Best, M ean, AND Std S TAND FOR THE B EST MSE OVER 30 RUNS , THE M EAN B EST MSE , AND THE
S TANDARD D EVIATION , R ESPECTIVELY, AND THE h AND CI VALUES A RE THE R ESULTS OF T -T ESTS . T HE B OLD IN THE C OLUMNS OF
Best, M ean, AND Std A RE THE B EST VALUES A MONG A LL THE A LGORITHMS , THE B OLD IN THE C OLUMNS OF h AND CI
S HOWS T HAT SOA I S S IGNIFICANTLY S UPERIOR TO THE C OMPARED M ETHODS , AND THE I TALIC IN THE C OLUMNS OF
h AND CI S HOWS T HAT SOA I S S TATISTICALLY I NFERIOR TO THE C OMPARED M ETHODS

that the refreshing gap m = 2), standard PSO 2007 (SPSO2007) [43], and GA (bit-string encoding, 20 b for each variable,
roulette-wheel selection, elitist strategy, two-point crossover
probability pc = 0.8, and mutation probability pm = 0.05) [7].
In all the experiments, the same population size popsize = 100,
except SPSO-2007 whose popsize is automatically computed
by the algorithm, a total of 30 runs, and a total number of
maximum generations of 1000 are made.
Simulation studies are carried out on seven typical examples (see Appendix I), which are taken from [1], [2], [5], [7], [10], and [12]-[14]. The best (Best), mean (Mean), and standard deviation (Std) of the objective function values J(w) of each algorithm over 30 runs are compared. In order to determine whether the results obtained by SOA are statistically different from those generated by the other algorithms, t-tests are conducted. An h value of one indicates that the performances of the two algorithms are statistically different with 95% certainty, whereas an h value of zero implies that the performances are not statistically different; CI denotes the confidence interval. The results are summarized in Table I, where the results for examples 4, 6, and 7 are not listed but are presented as follows. For example 4, GA has Best = 1.17 x 10^-8, Mean = 4.72 x 10^-6, Std = 1.01 x 10^-5, h = 1, and CI = [-8.42 x 10^-6, -1.02 x 10^-6], while SPSO-2007 has Best = 0, Mean = 1.29 x 10^-32, Std = 7.06 x 10^-32, h = 0, and CI = [-3.87 x 10^-32, 1.29 x 10^-32]. For example 6, GA has Best = 2.05 x 10^-9, Mean = 8.61 x 10^-6, Std = 7.82 x 10^-6, h = 1, and CI = [-1.15 x 10^-5, -5.76 x 10^-6], while SPSO-2007 has Best = 0, Mean = 8.20 x 10^-30, Std = 4.49 x 10^-29, h = 0, and CI = [-2.46 x 10^-29, 8.21 x 10^-30]. For example 7, GA has Best = 4.71 x 10^-6, Mean = 3.84 x 10^-3, Std = 1.97 x 10^-2, h = 0, and CI = [-0.0110, 0.0034]; SPSO-2007 has Best = 1.72 x 10^-6, Mean = 6.92 x 10^-1, Std = 1.32 x 10^0, h = 1, and CI = [-1.1747, -0.2099]; PSO-w has Best = 0, Mean = 4.55 x 10^-31, Std = 1.40 x 10^-30, h = 0, and CI = [-9.68 x 10^-31, 5.73 x 10^-32]; while PSO-cf has Best = 0, Mean = 1.12 x 10^-31, Std = 6.13 x 10^-31, h = 0, and CI = [-3.36 x 10^-31, 1.12 x 10^-31]. The other algorithms, including SOA, found global optimal solutions with objective values of zero for examples 4, 6, and 7.
From the experimental results, the conclusion can be drawn that SOA has generally better, or at least equivalent, performance compared with the other listed algorithms in terms of global search capacity and local search precision. For examples 1 and 2, SOA has the smallest Best and Mean values and is statistically better than all the other methods, with h values of one and CI values that are less than zero. For example 3, SOA has the smallest Best value and is statistically superior to GA, the four PSOs, and SaDE. However, SOA is statistically slightly inferior to DE, although their performances are still very close. In addition, the performances of SOA and SACP-DE are not statistically different. For examples 4, 6, and 7, SOA successfully finds optimal IIR filter coefficients with objective function values of zero in every run. The other algorithms also find optimal solutions, except GA and SPSO-2007 for all three examples, as well as PSO-w and PSO-cf for example 7, which are therefore outperformed by SOA. For example 5, SOA achieves significantly better search precision, with the smallest error values, than all the other listed algorithms, although SOA is not statistically different from most of the other methods. Even though the original DE is often as good as, or even better than, the two modified DEs, choosing suitable control parameter values for it is a problem-dependent and time-consuming task [38]. Its parameters used in this paper are based on the values proposed in [38]; in our experiments, DE with other parameter values often had relatively unsatisfactory performance.
Furthermore, the average evolutionary curves of all the algorithms over 30 runs are shown in Figs. 4-10 (see Appendix II). Moreover, the average CPU time and the number of generations needed for each algorithm to achieve the fixed accuracy level in the successful runs are shown in Table II. Only successful runs are considered, i.e., runs in which the algorithm achieves the fixed accuracy within the maximum number of generations. Figs. 4-10 and Table II indicate that SOA has good convergence performance, achieving the fixed accuracy level with acceptable computation time and generation counts. Although SPSO-2007 often needs less computing time due to its small population size, it also has fewer successful runs.

In addition, the minimax error criterion was employed to design digital IIR filters using the same examples, and we found that commensurate results can be obtained.

Fig. 4. Average evolutionary curves of all the algorithms for example 1.

V. CONCLUSION

The SOA is a novel heuristic stochastic optimization algorithm based on simulating the act of human searching. The algorithm has the additional advantages of being easy to understand and simple to implement, so that it can be used for a wide variety of design and optimization tasks. In this paper, an SOA-based digital filter design method has been proposed, and the benefits of SOA for designing digital IIR filters have been studied. The simulation results show that SOA has better, or at least equivalent, global search ability and convergence speed compared with GA, four PSOs, and three DEs for most of the chosen and widely used problems in this paper. Thus, it is believed that the proposed algorithm is capable of quickly and effectively adapting the coefficients of a wide variety of IIR structures and will become a promising candidate for digital filter design.

Fig. 5. Average evolutionary curves of all the algorithms for example 2.

APPENDIX I

Example 1: This example is taken from [1], [2], [5], [10], and [12]. The system and filter transfer functions are, respectively,

H_s(z) = \frac{0.05 - 0.4 z^{-1}}{1 - 1.1314 z^{-1} + 0.25 z^{-2}},  H_f(z) = \frac{a_0}{1 + b_1 z^{-1}}.

The input x(k) to the system and the filter was a white-noise sequence. The data length was N = 100.
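For concreteness, evaluating the objective (2) for this example can be sketched as follows: the same white-noise input is filtered through the known system H_s and a candidate filter H_f, and the MSE between the two outputs is the quantity SOA minimizes. `iir_filter` and `mse` are assumed helper names, the recursion follows the sign convention of (1), and the candidate coefficients below are arbitrary, not optimized values.

```python
import random

def iir_filter(x, a, b):
    """Direct-form IIR filter: y(k) = sum_i a[i] x(k-i) - sum_i b[i] y(k-i),
    with a = [a0, ..., aL] and b = [b1, ..., bM] taken from (1), where the
    b terms appear on the left-hand side (hence the subtraction)."""
    y = []
    for k in range(len(x)):
        acc = sum(a[i] * x[k - i] for i in range(len(a)) if k - i >= 0)
        acc -= sum(b[i - 1] * y[k - i]
                   for i in range(1, len(b) + 1) if k - i >= 0)
        y.append(acc)
    return y

def mse(d, y):
    """Eq. (2): mean square error between desired and actual outputs."""
    return sum((dk - yk) ** 2 for dk, yk in zip(d, y)) / len(d)

random.seed(42)
x = [random.gauss(0.0, 1.0) for _ in range(100)]      # white noise, N = 100
d = iir_filter(x, [0.05, -0.4], [-1.1314, 0.25])      # system H_s of Example 1
y = iir_filter(x, [0.05], [-1.1314])                  # an arbitrary candidate H_f
J = mse(d, y)                                         # objective value J(w)
```

An optimizer such as SOA would search over the candidate coefficients (here a_0 and b_1) to drive J toward its minimum.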
Example 2: This example is taken from [5] and [10]:

H_s(z) = \frac{0.3 + 0.4 z^{-1} - 0.5 z^{-2}}{1 - 1.2 z^{-1} + 0.5 z^{-2} - 0.1 z^{-3}},  H_f(z) = \frac{a_0 + a_1 z^{-1}}{1 + b_1 z^{-1} + b_2 z^{-2}}.
Fig. 6. Average evolutionary curves of all the algorithms for example 3.

Fig. 7. Average evolutionary curves of all the algorithms for example 4.

Fig. 8. Average evolutionary curves of all the algorithms for example 5.

The system input was a uniformly distributed white-noise sequence taking values from (-0.5, 0.5), SNR = 30 dB, and N = 100.
Example 3: This example is taken from [5] and [7]:

H_s(z) = \frac{1.0 - 1.8 z^{-2} + 1.04 z^{-4} + 0.05 z^{-5} + 0.192 z^{-6}}{1 - 0.8 z^{-2} - 0.17 z^{-4} - 0.56 z^{-6}},  H_f(z) = \frac{a_0 + a_1 z^{-1} + \cdots + a_5 z^{-5}}{1 + b_1 z^{-1} + \cdots + b_5 z^{-5}}.

The system input was a Gaussian pseudonoise sequence, and N = 1000.
Example 4: This example is taken from [1], [2], [7], and [12]:

H_s(z) = \frac{1}{1 - 1.2 z^{-1} + 0.6 z^{-2}},  H_f(z) = \frac{a_0}{1 + b_1 z^{-1} + b_2 z^{-2}}.

The system input was a unit-variance white Gaussian pseudonoise sequence, and N = 100.

Fig. 9. Average evolutionary curves of all the algorithms for example 6.

Fig. 10. Average evolutionary curves of all the algorithms for example 7.

Example 5: This example is taken from [13]:

H_s(z) = \frac{1.25 z^{-1} - 0.25 z^{-2}}{1 - 0.3 z^{-1} + 0.4 z^{-2}},  H_f(z) = \frac{a_0 + a_1 z^{-1} + a_2 z^{-2}}{1 + b_1 z^{-1} + b_2 z^{-2}}.

The system input was a white-noise input, SNR = 40 dB, and N = 100.
Example 6: This example is taken from [7] and [13]:

H_s(z) = \frac{1}{1 - 1.4 z^{-1} + 0.49 z^{-2}},  H_f(z) = \frac{a_0}{1 + b_1 z^{-1} + b_2 z^{-2}}.

Colored noise inputs were obtained by filtering a white Gaussian pseudonoise sequence with an FIR filter H_c(z) = (1 - 0.7 z^{-1})^2 (1 + 0.7 z^{-1})^2, and N = 100.

TABLE II
Results of each algorithm to achieve the fixed accuracy level over 30 runs.

Example 7: This example is taken from [7], [13], and [14]:

H_s(z) = \frac{1}{(1 - 0.6 z^{-1})^3},  H_f(z) = \frac{a_0}{1 + b_1 z^{-1} + b_2 z^{-2}}.

Colored inputs were generated by filtering a white-noise sequence with an FIR filter H_c(z) = (1 - 0.6 z^{-1})^2 (1 + 0.6 z^{-1})^2, and N = 2000.
APPENDIX II

See Figs. 4-10.
REFERENCES
[1] N. Karaboga, A. Kalinli, and D. Karaboga, "Designing IIR filters using ant colony optimisation algorithm," J. Eng. Appl. Artif. Intell., vol. 17, no. 3, pp. 301-309, Apr. 2004.
[2] A. Kalinli and N. Karaboga, "Artificial immune algorithm for IIR filter design," J. Eng. Appl. Artif. Intell., vol. 18, no. 5, pp. 919-929, Dec. 2005.
[3] J. Luukko and K. Rauma, "Open-loop adaptive filter for power electronics applications," IEEE Trans. Ind. Electron., vol. 55, no. 2, pp. 910-917, Feb. 2008.
[4] S. H. Ling, H. H. C. Iu, F. H. F. Leung, and K. Y. Chan, "Improved hybrid particle swarm optimized wavelet neural network for modeling the development of fluid dispensing for electronic packaging," IEEE Trans. Ind. Electron., vol. 55, no. 9, pp. 3447-3460, Sep. 2008.
[5] A. Kalinli and N. Karaboga, "A new method for adaptive IIR filter design based on Tabu search algorithm," Int. J. Electron. Commun. (AEÜ), vol. 59, no. 2, pp. 111-117, 2005.
[6] K. S. Tang, K. F. Man, S. Kwong, and Z. F. Liu, "Design and optimization of IIR filter structure using hierarchical genetic algorithms," IEEE Trans. Ind. Electron., vol. 45, no. 3, pp. 481-487, Jun. 1998.
[7] M. S. White and S. J. Flockton, "Adaptive recursive filtering using evolutionary algorithms," in Evolutionary Algorithms in Engineering Applications, D. Dasgupta and Z. Michalewicz, Eds. Berlin, Germany: Springer-Verlag, 1997, pp. 361-376.
[8] J.-T. Tsai, J.-H. Chou, and T.-K. Liu, "Optimal design of digital IIR filters by using hybrid Taguchi genetic algorithm," IEEE Trans. Ind. Electron., vol. 53, no. 3, pp. 867-879, Jun. 2006.
[9] Y. Yu and Y. Xinjie, "Cooperative coevolutionary genetic algorithm for digital IIR filter design," IEEE Trans. Ind. Electron., vol. 54, no. 3, pp. 1311-1318, Jun. 2007.
[10] S. Chen, R. H. Istepanian, and B. L. Luk, "Digital IIR filter design using adaptive simulated annealing," Digit. Signal Process., vol. 11, no. 3, pp. 241-251, Jul. 2001.
[11] R. Storn, "Designing nonstandard filters with differential evolution," IEEE Signal Process. Mag., vol. 22, no. 1, pp. 103-106, Jan. 2005.
[12] N. Karaboga, "Digital IIR filter design using differential evolution algorithm," EURASIP J. Appl. Signal Process., vol. 2005, no. 8, pp. 1269-1276, Jan. 2005.
[13] D. J. Krusienski and W. K. Jenkins, "Particle swarm optimization for adaptive IIR filter structures," in Proc. Congr. Evol. Comput., Jun. 2004, pp. 965-970.
[14] D. J. Krusienski and W. K. Jenkins, "Design and performance of adaptive systems based on structured stochastic optimization," IEEE Circuits Syst. Mag., vol. 5, no. 1, pp. 8-20, Mar. 2005.
[15] J. Timmis, T. Knight, L. N. de Castro, and E. Hart, "An overview of artificial immune systems," in Computation in Cells and Tissues: Perspectives and Tools for Thought, ser. Natural Computation Series, R. Paton, H. Bolouri, M. Holcombe, J. H. Parish, and R. Tateson, Eds. Berlin, Germany: Springer-Verlag, 2004, pp. 51-86.
[16] I. I. Garibay, "The proteomics approach to evolutionary computation: An analysis of proteome-based location independent representations based on the proportional genetic algorithm," Ph.D. dissertation, Comput. Sci. School, College Eng. Comput. Sci., Univ. Central Florida, Orlando, FL, 2004.
[17] M. Dorigo, V. Maniezzo, and A. Colorni, "The ant system: Optimization by a colony of cooperating agents," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 26, no. 1, pp. 29-41, Feb. 1996.
[18] C. F. Juang, C. M. Lu, C. Lo, and C.-Y. Wang, "Ant colony optimization algorithm for fuzzy controller design and its FPGA implementation," IEEE Trans. Ind. Electron., vol. 55, no. 3, pp. 1453-1462, Mar. 2008.
[19] Z. Fan, J. Liu, T. Sørensen, and P. Wang, "Improved differential evolution based on stochastic ranking for robust layout synthesis of MEMS components," IEEE Trans. Ind. Electron., vol. 56, no. 4, pp. 937-948, Apr. 2009.
[20] A. Nobakhti and H. Wang, "A simple self-adaptive differential evolution algorithm with application on the ALSTOM gasifier," Appl. Soft Comput., vol. 8, no. 1, pp. 350-370, Jan. 2008.
[21] D. Zaharie, "Control of population diversity and adaptation in differential evolution algorithms," in Proc. 9th Int. Conf. Soft Comput., R. Matousek and P. Osmera, Eds., 2003, pp. 41-46.
[22] L. dos Santos Coelho and B. M. Herrera, "Fuzzy identification based on a chaotic particle swarm optimization approach applied to a nonlinear yo-yo motion system," IEEE Trans. Ind. Electron., vol. 54, no. 6, pp. 3234-3245, Dec. 2007.
[23] B. Biswal, P. K. Dash, and B. K. Panigrahi, "Power quality disturbance classification using fuzzy C-means algorithm and adaptive particle swarm optimization," IEEE Trans. Ind. Electron., vol. 56, no. 1, pp. 212-220, Jan. 2009.
[24] C. Dai, Y. Zhu, and W. Chen, "Seeker optimization algorithm," in Computational Intelligence and Security, vol. 4456, Lecture Notes in Artificial Intelligence, Y. Wang, Y. Cheung, and H. Liu, Eds. Berlin, Germany: Springer-Verlag, 2007, pp. 167-176.
[25] J. D. Hewlett, B. M. Wilamowski, and G. Dündar, "Optimization using a modified second-order approach with evolutionary enhancement," IEEE Trans. Ind. Electron., vol. 55, no. 9, pp. 3374-3380, Sep. 2008.
[26] V. Kumar and F. Sahin, "Cognitive maps in swarm robots for the mine detection application," in Proc. IEEE Int. Conf. Syst., Man, Cybern., 2003, vol. 4, pp. 3364-3369.
[27] D. Eustace, D. P. Barnes, and J. O. Gray, "Co-operant mobile robots for industrial applications," in Proc. Int. Conf. Ind. Electron., Control, Instrum., 1993, vol. 1, pp. 39-44.
[28] Y. del Valle, G. K. Venayagamoorthy, S. Mohagheghi, J. C. Hernandez, and R. G. Harley, "Particle swarm optimization: Basic concepts, variants and applications in power systems," IEEE Trans. Evol. Comput., vol. 12, no. 2, pp. 171-195, Apr. 2008.
[29] V. Trianni, R. Groß, T. H. Labella, E. Şahin, and M. Dorigo, "Evolving aggregation behaviors in a swarm of robots," in Proc. ECAL, vol. 2801, LNAI, W. Banzhaf, T. Christaller, P. Dittrich, J. T. Kim, and J. Ziegler, Eds., 2003, pp. 865-874.
[30] M. Wooldridge and N. R. Jennings, "Intelligent agents: Theory and practice," Knowl. Eng. Rev., vol. 10, no. 2, pp. 115-152, 1995.
[31] I. Ajzen, "Residual effects of past on later behavior: Habituation and reasoned action perspectives," Personal. Soc. Psychol. Rev., vol. 6, no. 2, pp. 107-122, 2002.
[32] B. Raphael and I. F. C. Smith, "A direct stochastic algorithm for global search," Appl. Math. Comput., vol. 146, no. 2/3, pp. 729-758, Dec. 2003.
[33] L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning," Inf. Sci., vol. 8, pp. 199-246, 1975.
[34] D. Li, "Uncertainty reasoning based on cloud models in controllers," Comput. Math. Appl., vol. 35, no. 3, pp. 99-123, Feb. 1998.
[35] Y. Liu, Z. Qin, and Z. Shi, "Hybrid particle swarm optimizer with line search," in Proc. IEEE Int. Conf. Syst., Man, Cybern., 2004, pp. 3751-3755.
[36] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley, 1989.
[37] M. Clerc, When Nearer Is Better, 2007. [Online]. Available: http://hal.inria.fr/docs/00/15/20/53/PDF/NisBetter2.pdf
[38] J. Brest, S. Greiner, B. Bošković, M. Mernik, and V. Žumer, "Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems," IEEE Trans. Evol. Comput., vol. 10, no. 6, pp. 646-657, Dec. 2006.
[39] A. K. Qin and P. N. Suganthan, "Self-adaptive differential evolution algorithm for numerical optimization," in Proc. IEEE Congr. Evol. Comput., Edinburgh, U.K., 2005, pp. 1785-1791.
[40] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proc. Congr. Evol. Comput., 1999, pp. 1945-1950.
[41] M. Clerc and J. Kennedy, "The particle swarm: Explosion, stability, and convergence in a multidimensional complex space," IEEE Trans. Evol. Comput., vol. 6, no. 1, pp. 58-73, Feb. 2002.
[42] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Trans. Evol. Comput., vol. 10, no. 3, pp. 281-295, Jun. 2006.
[43] Standard PSO 2007 (SPSO-2007) on the Particle Swarm Central, Programs Section. [Online]. Available: http://www.particleswarm.info

Chaohua Dai received the Ph.D. degree in electric system control and information technique from
Southwest Jiaotong University, Chengdu, China,
in 2009.
He is currently an Associate Professor in
the School of Electrical Engineering, Southwest
Jiaotong University. His research interests include
computational intelligence, multiobjective optimization, pattern recognition, and intelligent monitoring
and control.

Weirong Chen (M'99) received the Ph.D. degree in electrical engineering from Southwest Jiaotong University, Chengdu, China, in 1998.
He is currently a Professor and a Ph.D. Supervisor in the School of Electrical Engineering, Southwest Jiaotong University. His research interests include neural networks, intelligent monitoring and control, and power systems and their automation.

Yunfang Zhu received the M.Sc. degree in measuring and control technology and instrumentation from Southwest Jiaotong University, Chengdu,
China, in 2005.
She is currently an Associate Professor in the
Department of Computer and Communication Engineering, Southwest Jiaotong University, Emei,
China. Her research interests include neural networks, evolutionary algorithms, signal processing,
and intelligent control.
