[Figure: probability versus gradient magnitude of the ground truth, SAPCA-BM3D, and GHP results; panels (a)-(d).]

I. INTRODUCTION
R(x) = \sum_i \| \alpha_i - \beta_i \|_1, \quad \text{s.t. } x = D \circ \alpha,  (5)

\beta_i = \sum_q w_{q_i} \alpha_{q_i},  (6)

where the weight w_{q_i} = \frac{1}{W} \exp\left( -\frac{1}{h} \| \hat{x}_i - \hat{x}_{q_i} \|_2^2 \right) (\hat{x}_i and \hat{x}_{q_i} denote the current estimates of x_i and x_{q_i}, respectively), h is a predefined constant, and W is the normalization factor. More detailed explanations of NCSR can be found in [14].
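The nonlocal estimate in Eq. (6) is straightforward to compute. Below is a minimal sketch in Python/NumPy, assuming the similar patches and their sparse codes have already been collected; the function name and array shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def nonlocal_estimate(patch_est, similar_ests, similar_codes, h=40.0):
    """Compute beta_i = sum_q w_qi * alpha_qi as in Eq. (6).

    patch_est     : current estimate of patch x_i, shape (n,)
    similar_ests  : current estimates of the similar patches x_qi, shape (Q, n)
    similar_codes : sparse codes alpha_qi of the similar patches, shape (Q, m)
    h             : predefined constant controlling the decay of the weights
    """
    # Unnormalized weights: exp(-||x_i - x_qi||^2 / h)
    dists = np.sum((similar_ests - patch_est) ** 2, axis=1)
    w = np.exp(-dists / h)
    w /= w.sum()              # divide by the normalization factor W
    # Weighted average of the sparse codes of the similar patches
    return w @ similar_codes
```

With a large h the weights approach uniform averaging over the similar patches; with a small h, beta_i is dominated by the codes of the most similar patches.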
[Figure: flowchart of the proposed method, from the noisy image to the denoised image.]

By incorporating the above R(x) into Eq. (3), the proposed GHP model is

(\hat{x}, \hat{F}) = \arg\min_{x, F} \frac{1}{2\sigma^2} \| y - x \|_2^2 + \sum_i \lambda \| \alpha_i - \beta_i \|_1 + \mu \| F(\nabla x) - \nabla x \|_2^2, \quad \text{s.t. } x = D \circ \alpha,\ h_F = h_r,  (7)

where h_F denotes the histogram of the transformed gradient F(\nabla x) and h_r is the reference gradient histogram estimated from the noisy image via h_r = \arg\min_{h_x} \| h_y - h_x \otimes h_\nu \|_2^2 + c\, R(h_x). Given F, let g = F(\nabla x); the x-subproblem then becomes

\min_x \frac{1}{2\sigma^2} \| y - x \|_2^2 + \sum_i \lambda \| \alpha_i - \beta_i \|_1 + \mu \| g - \nabla x \|_2^2, \quad \text{s.t. } x = D \circ \alpha.  (8)
We use the method in [14] to construct the dictionary D adaptively. Based on the current estimate of the image x, we cluster its patches into K clusters, and for each cluster a PCA dictionary is learned. Then, for each given patch, we first check which cluster it belongs to and use the PCA dictionary of that cluster as D. Although in Eq. (8) the l_1-norm regularization is imposed on \sum_i \| \alpha_i - \beta_i \|_1 rather than \sum_i \| \alpha_i \|_1, by introducing a new variable \theta_i = \alpha_i - \beta_i, we can use the iterative shrinkage/thresholding method [33] to update \theta_i and then set \alpha_i = \theta_i + \beta_i. This strategy is also used in [14].
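The variable change above reduces the code update to a standard soft-thresholding step. A minimal sketch (names are hypothetical; the surrogate-based gradient step that produces the intermediate variable is assumed to have been performed already):

```python
import numpy as np

def soft_threshold(v, tau):
    """Soft-thresholding operator S_tau(v) = sign(v) * max(|v| - tau, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def update_codes(theta_half, beta, lam, d):
    """One iterative shrinkage step on the recentered codes.

    theta_half : intermediate estimate of theta = alpha - beta
    beta       : nonlocal estimates of the codes
    lam, d     : sparsity weight and surrogate constant (d > ||D||_2^2)
    """
    theta = soft_threshold(theta_half, lam / d)  # shrink theta, not alpha
    return theta + beta                          # recover alpha = theta + beta
```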
\beta_i = \sum_q w_{q_i} \alpha_{q_i}  (13)

7. Update \alpha: \alpha_i^{(k+1)} = S_{\lambda/d}\left( \theta_i^{(k+1/2)} \right) + \beta_i  (15)
8. Update x: x^{(k+1)} = D \circ \alpha^{(k+1)}
9. Update F by histogram specification: F(x) = \mathrm{sgn}(x)\, T(|x|)
10. k \leftarrow k + 1
11. x = x^{(k)} + \nabla^{T} \left( g - \nabla x^{(k)} \right)
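The odd transfer function F(x) = sgn(x) T(|x|) is obtained by histogram specification on the gradient magnitudes. The following sketch implements plain CDF-matching specification with np.interp; the bin handling and all names are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def specification_transform(src, ref_hist, bins):
    """Monotonic map T from histogram specification, then F(x)=sgn(x)*T(|x|).

    src      : current gradient field (any shape)
    ref_hist : reference histogram h_r over gradient magnitudes
    bins     : bin centers of ref_hist (nonnegative, increasing)
    """
    mag = np.abs(src)
    # CDF of the current gradient magnitudes
    src_hist, edges = np.histogram(mag, bins=len(bins), range=(0, bins[-1]))
    src_cdf = np.cumsum(src_hist) / max(src_hist.sum(), 1)
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    # T maps a magnitude to the reference value with the same CDF level
    levels = np.interp(mag, centers, src_cdf)
    t_mag = np.interp(levels, ref_cdf, bins)
    return np.sign(src) * t_mag  # odd function: F(x) = sgn(x) * T(|x|)
```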
[Figure: the energy value versus the number of iterations, illustrating the convergence of the algorithm.]
C. Region-based GHP

The histogram constraint in Eq. (7) is global. If the image consists of different regions with different textures, GHP may generate some false textures in the less textured areas. To address this problem, we can partition the noisy image into several regions, estimate the reference gradient histogram of each region, and then apply GHP to each region for denoising. As shown in Fig. 4, we suggest two schemes to partition the noisy image, resulting in two region-based GHP variants. The transform F_k of region \Omega_k is obtained by

\min_{F_k} \sum_{(i,j) \in \Omega_k} \left( F_k(\nabla x)_{i,j} - (\nabla x)_{i,j} \right)^2, \quad \text{s.t. } h_{F_k} = h_{r,k},  (16)

and the overall transform is composed region by region:

F(\nabla x) = \sum_k F_k(\nabla x) \cdot 1_k.  (17)

Fig. 4: Two image partition schemes. (a) The noisy image is partitioned into K homogeneous regions by k-means clustering. (b) The noisy image is partitioned into K × K blocks.
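The region-wise composition in Eq. (17) sums the per-region transforms masked by indicator functions 1_k. A minimal sketch, assuming a precomputed region label map (from either partition scheme of Fig. 4) and one transform per region; all names are hypothetical:

```python
import numpy as np

def region_based_transform(grad, labels, transforms):
    """Combine per-region transforms: F(grad) = sum_k F_k(grad) * 1_k.

    grad       : gradient field of the image
    labels     : integer region map of the same shape (k-means or blocks)
    transforms : dict mapping region label k -> callable F_k
    """
    out = np.zeros_like(grad)
    for k, f_k in transforms.items():
        mask = (labels == k)          # indicator 1_k of region k
        out[mask] = f_k(grad[mask])   # apply the region's own transform
    return out
```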
p_y(y = t) = \int_a p_x(x = a)\, p_\nu(\nu = t - a)\, da.  (21)

If we use the normalized histograms h_x and h_y to approximate p_x and p_y, we can rewrite Eq. (21) in the discrete domain as

h_y = h_x \otimes h_\nu,  (22)

where \otimes denotes the convolution operator. Note that h_\nu can be obtained by discretizing p_\nu, and h_y can be computed directly from the noisy observation y.
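Eq. (22) can be exercised numerically. The sketch below predicts h_y from h_x by discrete convolution with a discretized noise histogram h_ν; modeling h_ν as a sampled Gaussian is an illustrative assumption, since the exact discretization of p_ν is not specified here.

```python
import numpy as np

def noisy_gradient_hist(h_x, sigma, width=10):
    """Predict h_y = h_x (*) h_nu (Eq. 22) for noise-perturbed gradients.

    h_x   : normalized gradient histogram of the clean image
    sigma : standard deviation of the gradient noise
    width : half-width (in bins) of the discretized noise kernel
    """
    t = np.arange(-width, width + 1, dtype=float)
    h_nu = np.exp(-t ** 2 / (2 * sigma ** 2))  # discretized noise density p_nu
    h_nu /= h_nu.sum()
    h_y = np.convolve(h_x, h_nu, mode='same')
    return h_y / h_y.sum()                     # renormalize to a histogram
```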
Obviously, the estimation of h_x can be generally modeled as a deconvolution problem:

h_r = \arg\min_{h_x} \left\{ \| h_y - h_x \otimes h_\nu \|_2^2 + c\, R(h_x) \right\},  (23)

where c is a constant and R(h_x) is a regularization term based on the prior information of natural images' gradient histograms. We consider two kinds of constraints on h_x. First, it has been shown that p_x (i.e., the continuous counterpart of h_x) can be approximated by a hyper-Laplacian distribution [3], [23], [24]. Considering that the real h_x might deviate from the hyper-Laplacian distribution to some extent, we only require that h_x be close to the hyper-Laplacian distribution:

p_x \approx C \exp\left( -\lambda |x|^\gamma \right).  (24)

Second, h_x should be nonnegative. The estimation model then becomes

\min_{h_x, C, \lambda, \gamma} \| h_y - h_x \otimes h_\nu \|_2^2 + c\, \| h_x - C \exp(-\lambda |x|^\gamma) \|_2^2, \quad \text{s.t. } h_x \ge 0.  (26)

We iteratively update h_x, \tilde{h}_x, C, \lambda, and \gamma in an alternating manner. Let h_0 = C \exp(-\lambda |x|^\gamma); h_x is updated by

h_x = \mathcal{F}^{-1}\left( \left( \mathcal{F}(h_y) \odot \mathcal{F}(h_\nu)^{*} + c\, \mathcal{F}(h_0) \right) \oslash \left( \mathcal{F}(h_\nu) \odot \mathcal{F}(h_\nu)^{*} + c \right) \right),  (27)

where \odot denotes the element-wise multiplication, \oslash denotes the element-wise division, (\cdot)^{*} denotes the complex conjugate operator, and \mathcal{F} is the discrete Fourier transform. \tilde{h}_x is updated by

\tilde{h}_x(i) = \max(h_x(i), 0).  (28)

C is updated by

C = \sum_i \tilde{h}_x(i) \Big/ \sum_i \exp\left( -\lambda |i|^\gamma \right).  (29)

Finally, \lambda and \gamma are updated by gradient steps:

\lambda^{(t+1)} = \lambda^{(t)} + \sum_i C |i|^\gamma \exp\left( -\lambda^{(t)} |i|^\gamma \right) \left( C \exp\left( -\lambda^{(t)} |i|^\gamma \right) - \tilde{h}_x(i) \right),  (30)

\gamma^{(t+1)} = \gamma^{(t)} + \sum_{i \neq 0} C \lambda^{(t)} |i|^{\gamma^{(t)}} \ln|i| \exp\left( -\lambda^{(t)} |i|^{\gamma^{(t)}} \right) \left( C \exp\left( -\lambda^{(t)} |i|^{\gamma^{(t)}} \right) - \tilde{h}_x(i) \right).  (31)
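The h_x update in Eq. (27) is the familiar closed form of a quadratically regularized deconvolution, computable in the Fourier domain. A sketch assuming circular convolution; names are hypothetical:

```python
import numpy as np

def update_hx(h_y, h_nu, h0, c):
    """Regularized deconvolution step with nonnegativity clipping.

    Solves min ||h_y - h_x (*) h_nu||^2 + c ||h_x - h0||^2 in the Fourier
    domain (circular convolution assumed), then clips negative entries.
    """
    Hy, Hnu, H0 = np.fft.fft(h_y), np.fft.fft(h_nu), np.fft.fft(h0)
    # (Hy * conj(Hnu) + c * H0) / (|Hnu|^2 + c), element-wise
    Hx = (Hy * np.conj(Hnu) + c * H0) / (np.abs(Hnu) ** 2 + c)
    h_x = np.real(np.fft.ifft(Hx))
    return np.maximum(h_x, 0.0)   # keep the histogram nonnegative
```

Setting the first derivative of the quadratic objective to zero per frequency gives exactly the ratio above, so no iterative solver is needed for this subproblem.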
Fig. 6: Ten test images. From left to right and top to bottom, they
are labeled as 1 to 10.
A. Parameter setting

There are 4 parameters in our GHP algorithm and 4 parameters in the reference histogram estimation algorithm. All these parameters are fixed in our experiments.

1) Parameters in the GHP algorithm: The proposed GHP algorithm has two model parameters: \lambda and \mu. We use the same strategy as in the original NCSR model [14] to determine the value of \lambda. The parameter \mu is introduced to balance the nonlocally centralized sparse representation term and the histogram preservation term. If \mu is set very large, GHP can ensure that the gradient histogram of the denoising result is the same as the reference histogram. Considering that in practice the reference histogram is estimated from the noisy image and there are certain estimation errors, \mu cannot be set too large. We empirically set \mu to 5 based on our experimental experience.

The GHP algorithm involves two more algorithm parameters: \delta and d. Following [14], when the noise standard deviation is less than 30, we set \delta to 0.23; otherwise, we set it to 0.26. Based on [14], [33], to guarantee the convexity of the surrogate function, d should be larger than the spectral norm of the dictionary D. Since in our algorithm D is an adaptively learned PCA dictionary with orthonormal columns, its spectral norm is 1, and d can simply be set slightly larger than 1.
1. We also evaluated the denoising performance of S-GHP under lower and higher noise standard deviations, i.e., σ = 5, 10, 15 and σ = 50, 80, 100. The detailed results can be found in the supplementary file.
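The requirement d > ‖D‖₂ can be checked numerically with power iteration. This is a generic sketch, not tied to the adaptive dictionary construction above; note that a dictionary with orthonormal columns has spectral norm exactly 1.

```python
import numpy as np

def spectral_norm(D, n_iter=50, seed=0):
    """Estimate ||D||_2 (largest singular value) by power iteration on D^T D."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(D.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = D.T @ (D @ v)            # one power-iteration step on D^T D
        v /= np.linalg.norm(v)
    return float(np.linalg.norm(D @ v))  # Rayleigh estimate of sigma_max
```

Any d strictly above the returned value keeps the surrogate function convex.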
Fig. 5: An example of reference gradient histogram estimation. (a) Real and simulated AWGN gradient histograms (noise standard deviation σ = 30); (b) real and simulated gradient histograms of the noisy image; and (c) real and estimated gradient histograms of the clean image.
TABLE I: The K-L divergence between the estimated and ground-truth gradient histograms.

σ   | 1     2     3     4     5     6     7     8     9     10    | Avg.  Std.
20  | 0.098 0.036 0.094 0.014 0.023 0.233 0.066 0.123 0.058 0.015 | 0.076 0.067
25  | 0.121 0.036 0.089 0.017 0.030 0.261 0.071 0.167 0.073 0.028 | 0.089 0.076
30  | 0.138 0.027 0.086 0.014 0.051 0.254 0.085 0.192 0.066 0.038 | 0.095 0.077
35  | 0.118 0.020 0.091 0.014 0.064 0.218 0.061 0.161 0.042 0.071 | 0.086 0.064
40  | 0.095 0.014 0.152 0.017 0.128 0.151 0.029 0.103 0.026 0.140 | 0.086 0.058
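The K-L divergence reported in Table I can be computed from two normalized histograms as below; the direction of the divergence and the smoothing constant are assumptions, since the text does not specify them.

```python
import numpy as np

def kl_divergence(h_est, h_gt, eps=1e-12):
    """K-L divergence D(h_gt || h_est) between two normalized histograms.

    A small eps guards against empty bins; which direction of the divergence
    was used in Table I is not specified, so this is one plausible choice.
    """
    p = h_gt / h_gt.sum()
    q = h_est / h_est.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```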
TABLE II: The PSNR (dB) and SSIM results of GHP with the estimated gradient histogram (GHP-E) and with the ground-truth gradient histogram (GHP-G). Each cell is PSNR/SSIM; columns are images 1-10 and the average.

σ = 20
  GHP-G | 30.67/0.870 27.93/0.813 28.13/0.758 26.69/0.803 30.65/0.805 28.54/0.887 30.14/0.839 31.43/0.893 27.45/0.819 30.93/0.813 | 29.26/0.830
  GHP-E | 30.59/0.866 27.90/0.808 28.15/0.757 26.59/0.795 30.54/0.802 28.39/0.866 30.07/0.834 31.27/0.884 27.31/0.806 30.83/0.811 | 29.16/0.823

σ = 25
  GHP-G | 29.53/0.843 26.82/0.769 27.19/0.722 25.51/0.757 29.68/0.773 27.24/0.854 29.11/0.804 30.33/0.870 26.26/0.779 29.95/0.780 | 28.16/0.795
  GHP-E | 29.40/0.833 26.72/0.761 27.12/0.720 25.41/0.748 29.52/0.769 27.13/0.833 28.99/0.800 30.14/0.858 26.12/0.764 29.75/0.776 | 28.03/0.786

σ = 30
  GHP-G | 28.62/0.817 26.00/0.731 26.42/0.692 24.55/0.715 28.87/0.743 26.22/0.820 28.33/0.774 29.47/0.848 25.33/0.743 29.19/0.752 | 27.30/0.764
  GHP-E | 28.47/0.808 25.87/0.725 26.34/0.690 24.46/0.708 28.61/0.737 26.12/0.804 28.15/0.769 29.23/0.835 25.21/0.730 28.85/0.745 | 27.13/0.755

σ = 35
  GHP-G | 27.91/0.800 25.43/0.699 25.77/0.662 23.93/0.677 28.38/0.723 25.52/0.799 27.79/0.752 28.83/0.840 24.61/0.710 28.76/0.734 | 26.69/0.740
  GHP-E | 27.78/0.799 25.28/0.697 25.71/0.661 23.84/0.676 28.07/0.719 25.44/0.797 27.58/0.750 28.56/0.837 24.52/0.709 28.32/0.727 | 26.51/0.737

σ = 40
  GHP-G | 27.29/0.781 24.90/0.668 25.22/0.640 23.31/0.643 27.83/0.701 24.90/0.772 27.29/0.729 28.26/0.824 24.01/0.681 28.26/0.714 | 26.13/0.715
  GHP-E | 27.01/0.779 24.62/0.663 25.12/0.638 23.16/0.640 27.26/0.693 24.76/0.771 26.93/0.726 27.82/0.821 23.88/0.680 27.50/0.703 | 25.81/0.711
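The PSNR values in Tables II-IV follow the standard definition for 8-bit images, sketched below (SSIM is more involved; see [39]):

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """PSNR in dB between a reference and an estimate (8-bit range assumed)."""
    mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return float(10.0 * np.log10(peak ** 2 / mse))
```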
TABLE III: The PSNR (dB) and SSIM results of GHP, B-GHP, and S-GHP.
GHP
1
2
3
4
5
6
7
8
9
10
Avg.
= 20
B-GHP S-GHP
GHP
= 25
B-GHP S-GHP
GHP
= 30
B-GHP S-GHP
GHP
= 35
B-GHP S-GHP
GHP
= 40
B-GHP S-GHP
30.59
0.866
27.90
0.808
28.15
0.757
26.59
0.795
30.54
0.802
28.39
0.866
30.07
0.834
31.27
0.884
27.31
0.806
30.83
0.811
30.53
0.869
27.89
0.815
28.07
0.753
26.60
0.801
30.63
0.807
28.36
0.880
30.06
0.839
31.15
0.887
27.35
0.817
30.96
0.816
30.60
0.869
27.97
0.814
28.17
0.753
26.72
0.801
30.65
0.807
28.46
0.883
30.22
0.843
31.34
0.895
27.40
0.818
30.98
0.815
29.40
0.833
26.72
0.761
27.12
0.720
25.41
0.748
29.52
0.769
27.13
0.833
28.99
0.800
30.14
0.858
26.12
0.764
29.75
0.776
29.40
0.841
26.79
0.771
27.12
0.718
25.47
0.759
29.66
0.775
27.08
0.848
29.00
0.805
29.86
0.858
26.16
0.777
29.94
0.783
29.47
0.842
26.88
0.771
27.25
0.720
25.53
0.756
29.77
0.774
27.19
0.848
29.12
0.807
30.21
0.872
26.22
0.777
29.93
0.782
28.47
0.808
25.87
0.725
26.34
0.690
24.46
0.708
28.61
0.737
26.12
0.804
28.15
0.769
29.23
0.835
25.21
0.730
28.85
0.745
28.55
0.818
25.98
0.735
26.36
0.688
24.58
0.719
28.86
0.745
26.09
0.819
28.22
0.777
29.14
0.841
25.26
0.742
29.18
0.756
28.60
0.818
26.07
0.734
26.43
0.688
24.67
0.718
28.99
0.751
26.26
0.820
28.39
0.780
29.42
0.853
25.31
0.744
29.15
0.754
27.78
0.799
25.28
0.697
25.71
0.661
23.84
0.676
28.07
0.719
25.44
0.797
27.58
0.750
28.56
0.837
24.52
0.709
28.32
0.727
27.82
0.796
25.42
0.695
25.71
0.657
23.93
0.673
28.29
0.716
25.41
0.793
27.66
0.745
28.59
0.830
24.59
0.707
28.67
0.729
27.85
0.801
25.43
0.698
25.75
0.659
23.97
0.676
28.38
0.723
25.58
0.802
27.81
0.754
28.80
0.843
24.57
0.708
28.74
0.736
27.01
0.779
24.62
0.663
25.12
0.638
23.16
0.640
27.26
0.693
24.76
0.771
26.93
0.726
27.82
0.821
23.88
0.680
27.50
0.703
27.23
0.779
24.86
0.665
25.19
0.636
23.31
0.640
27.77
0.696
24.77
0.768
27.20
0.725
28.16
0.822
23.99
0.680
28.20
0.711
27.22
0.781
24.87
0.666
25.21
0.636
23.34
0.639
27.88
0.704
24.96
0.775
27.30
0.730
28.21
0.829
24.05
0.684
28.15
0.711
29.16
0.823
29.16
0.828
29.25
0.830
28.03
0.786
28.05
0.794
28.16
0.795
27.13
0.755
27.22
0.764
27.33
0.766
26.51
0.737
26.61
0.734
26.69
0.740
25.81
0.711
26.07
0.712
26.12
0.716
Fig. 7: (a) The noisy test image 6 (noise standard deviation σ = 35). From (b) to (e): the zoomed-in denoising results by GHP, B-GHP, and S-GHP, and the ground truth.
D. Subjective Evaluation

Evaluating the perceptual quality of an image is a very challenging problem. Although many image quality assessment (IQA) indices (e.g., SSIM [39] and FSIM [51]) have been developed and can predict well the perceptual quality of images with a single type of distortion, such as noise corruption, Gaussian blur, or compression artifacts, they are still far from satisfactory for faithfully evaluating the subjective quality of denoised images, whose distortions are much more complex. This is also why the images denoised by our methods have better visual quality, yet their SSIM indices are similar to those of the images denoised by other methods.
TABLE IV: The PSNR (dB) and SSIM values of SAPCA-BM3D (BM3D), LSSC, NCSR, and S-GHP. Each cell is PSNR/SSIM; columns are images 1-10 and the average.

σ = 20
  BM3D  | 30.83/0.876 28.07/0.817 28.39/0.755 26.86/0.803 30.88/0.812 28.59/0.888 30.17/0.839 31.58/0.900 27.58/0.821 31.23/0.823 | 29.42/0.833
  LSSC  | 30.69/0.872 27.98/0.815 28.46/0.762 26.75/0.803 30.75/0.809 28.47/0.883 30.18/0.840 31.38/0.894 27.58/0.822 31.04/0.818 | 29.33/0.832
  NCSR  | 30.59/0.869 27.91/0.807 28.11/0.736 26.65/0.782 30.64/0.802 28.49/0.882 30.13/0.833 31.41/0.897 27.34/0.804 30.98/0.813 | 29.23/0.823
  S-GHP | 30.60/0.869 27.97/0.814 28.17/0.753 26.72/0.801 30.65/0.807 28.46/0.883 30.22/0.843 31.34/0.895 27.40/0.818 30.98/0.815 | 29.25/0.830

σ = 25
  BM3D  | 29.66/0.849 26.99/0.773 27.43/0.721 25.68/0.758 29.96/0.780 27.32/0.856 29.14/0.803 30.48/0.879 26.37/0.778 30.28/0.791 | 28.33/0.799
  LSSC  | 29.56/0.846 26.94/0.773 27.52/0.728 25.61/0.758 29.81/0.776 27.26/0.850 29.18/0.807 30.33/0.872 26.40/0.782 30.08/0.787 | 28.27/0.798
  NCSR  | 29.46/0.843 26.87/0.764 27.16/0.702 25.52/0.737 29.68/0.770 27.24/0.851 29.14/0.799 30.35/0.877 26.18/0.764 30.03/0.781 | 28.16/0.789
  S-GHP | 29.47/0.842 26.88/0.771 27.25/0.720 25.53/0.756 29.77/0.774 27.19/0.848 29.12/0.807 30.21/0.872 26.22/0.777 29.93/0.782 | 28.16/0.795

σ = 30
  BM3D  | 28.75/0.825 26.18/0.734 26.66/0.692 24.79/0.715 29.21/0.754 26.35/0.824 28.35/0.771 29.64/0.861 25.44/0.740 29.53/0.763 | 27.49/0.768
  LSSC  | 28.62/0.820 26.14/0.734 26.66/0.696 24.76/0.717 29.04/0.744 26.33/0.825 28.40/0.775 29.54/0.858 25.48/0.748 29.36/0.755 | 27.43/0.767
  NCSR  | 28.58/0.820 26.08/0.727 26.39/0.675 24.64/0.697 28.91/0.742 26.30/0.820 28.38/0.770 29.52/0.860 25.31/0.729 29.30/0.755 | 27.34/0.760
  S-GHP | 28.60/0.818 26.07/0.734 26.43/0.688 24.67/0.718 28.99/0.751 26.26/0.820 28.39/0.780 29.42/0.853 25.31/0.744 29.15/0.754 | 27.33/0.766

σ = 35
  BM3D  | 28.02/0.803 25.54/0.699 26.01/0.667 24.08/0.677 28.58/0.730 25.59/0.794 27.71/0.744 28.94/0.843 24.73/0.707 28.92/0.740 | 26.81/0.740
  LSSC  | 27.91/0.800 25.51/0.700 26.03/0.670 24.06/0.678 28.41/0.718 25.59/0.795 27.81/0.751 28.86/0.840 24.77/0.716 28.75/0.732 | 26.77/0.740
  NCSR  | 27.76/0.793 25.37/0.681 25.64/0.640 23.84/0.640 28.27/0.710 25.49/0.788 27.71/0.738 28.79/0.841 24.47/0.683 28.76/0.728 | 26.61/0.724
  S-GHP | 27.85/0.801 25.43/0.698 25.75/0.659 23.97/0.676 28.38/0.723 25.58/0.802 27.81/0.754 28.80/0.843 24.57/0.708 28.74/0.736 | 26.69/0.740

σ = 40
  BM3D  | 27.41/0.784 25.02/0.668 25.46/0.647 23.50/0.641 28.06/0.709 24.97/0.765 27.18/0.721 28.37/0.828 24.15/0.677 28.42/0.721 | 26.25/0.716
  LSSC  | 27.32/0.781 24.98/0.670 25.47/0.647 23.48/0.643 27.90/0.696 24.98/0.769 27.32/0.729 28.32/0.826 24.19/0.687 28.24/0.712 | 26.22/0.716
  NCSR  | 27.19/0.776 24.87/0.651 25.10/0.621 23.26/0.604 27.76/0.690 24.90/0.761 27.22/0.717 28.24/0.827 23.92/0.655 28.28/0.710 | 26.07/0.701
  S-GHP | 27.22/0.781 24.87/0.666 25.21/0.636 23.34/0.639 27.88/0.704 24.96/0.775 27.30/0.730 28.21/0.829 24.05/0.684 28.15/0.711 | 26.12/0.716
Fig. 8: Denoising results on image 7. (a) Noisy image with AWGN of standard deviation 30; cropped and zoomed-in denoised images by (b) SAPCA-BM3D [11], (c) LSSC [12], (d) NCSR [14], and (e) S-GHP; (f) ground truth. The PSNR and SSIM values are shown in the close-ups.
Fig. 9: Denoising results on image 1. (a) Noisy image with AWGN of standard deviation 30; cropped and zoomed-in denoised images by (b) SAPCA-BM3D [11], (c) LSSC [12], (d) NCSR [14], and (e) S-GHP; (f) ground truth. The PSNR and SSIM values are shown in the close-ups.
Fig. 10: From left to right and from top to bottom: the original image, the PSNR difference map (p_BM3D − p_GHP), and the denoised images by SAPCA-BM3D and S-GHP.
Fig. 11: The distributions of subjective evaluation outputs. (a) S-GHP vs. SAPCA-BM3D; (b) S-GHP vs. LSSC; (c) S-GHP vs. NCSR.
ACKNOWLEDGEMENT
References
[30] D. Zoran and Y. Weiss, "From learning models of natural image patches to whole image restoration," in Proc. Int. Conf. Comput. Vis., pp. 479-486, Nov. 2011.
[31] A. Levin, B. Nadler, F. Durand, and W. T. Freeman, "Patch complexity, finite pixel correlations and optimal denoising," in Proc. Eur. Conf. Comput. Vis., 2012.
[32] J. Jancsary, S. Nowozin, and C. Rother, "Loss-specific training of non-parametric image restoration models: a new state of the art," in Proc. Eur. Conf. Comput. Vis., 2012.
[33] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Commun. Pure Appl. Math., vol. 57, no. 11, pp. 1413-1457, Nov. 2004.
[34] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Publishing House of Electronics Industry, Beijing, 2005.
[35] L. Lin, X. Liu, and S.-C. Zhu, "Layered graph matching with composite cluster sampling," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 8, pp. 1426-1442, Aug. 2010.
[36] L. Lin, X. Liu, S. Peng, H. Chao, Y. Wang, and B. Jiang, "Object categorization with sketch representation and generalized samples," Pattern Recogn., vol. 45, no. 10, pp. 3648-3660, 2012.
[37] S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. H. Romeny, J. B. Zimmerman, and K. Zuiderveld, "Adaptive histogram equalization and its variations," Comput. Vis. Graph. Image Process., vol. 39, no. 3, pp. 355-368, Sep. 1987.
[38] J. K. Patel and C. B. Read, Handbook of the Normal Distribution, New York: Marcel Dekker, 1982.
[39] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, Apr. 2004.
[40] J. Bioucas-Dias and M. Figueiredo, "Multiplicative noise removal using variable splitting and constrained optimization," IEEE Trans. Image Process., vol. 19, no. 7, pp. 1720-1730, Jul. 2010.
[41] M. Afonso, J. Bioucas-Dias, and M. Figueiredo, "Fast image recovery using variable splitting and constrained optimization," IEEE Trans. Image Process., vol. 19, no. 9, pp. 2345-2356, Sep. 2010.
[42] M. Afonso, J. Bioucas-Dias, and M. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Trans. Image Process., vol. 20, no. 3, pp. 681-695, Mar. 2011.
[43] B. Julesz, "Textons, the elements of texture perception, and their interactions," Nature, vol. 290, pp. 91-97, Mar. 1981.
[44] X. Liu and D. Wang, "A spectral histogram model for texton modeling and texture discrimination," Vision Research, vol. 42, no. 23, pp. 2617-2634, Oct. 2002.
[45] J. J. Peissig, M. E. Young, E. A. Wasserman, and I. Biederman, "The role of edges in object recognition by pigeons," Perception, vol. 34, pp. 1353-1374, 2005.
[46] C. L. Zitnick and D. Parikh, "The role of image understanding in contour detection," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 622-629, Jun. 2012.
[47] B. Zhang, J. M. Fadili, and J. L. Starck, "Wavelets, ridgelets, and curvelets for Poisson noise removal," IEEE Trans. Image Process., vol. 17, no. 7, pp. 1093-1108, Jul. 2008.
[48] F. J. Anscombe, "The transformation of Poisson, binomial and negative-binomial data," Biometrika, vol. 35, no. 3/4, pp. 246-254, Dec. 1948.
[49] M. Makitalo and A. Foi, "Optimal inversion of the Anscombe transformation in low-count Poisson image denoising," IEEE Trans. Image Process., vol. 20, no. 1, pp. 99-109, Jan. 2011.
[50] L. Zhang, W. Dong, D. Zhang, and G. Shi, "Two-stage image denoising by principal component analysis with local pixel grouping," Pattern Recogn., vol. 43, no. 4, pp. 1531-1549, Apr. 2010.
[51] L. Zhang, L. Zhang, X. Mou, and D. Zhang, "FSIM: a feature similarity index for image quality assessment," IEEE Trans. Image Process., vol. 20, no. 8, pp. 2378-2386, Aug. 2011.
[52] M. Zontak, I. Mosseri, and M. Irani, "Separating signal from noise using patch recurrence across scales," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2013.
[53] J. A. Snyman, Practical Mathematical Optimization: An Introduction to Basic Optimization Theory and Classical and New Gradient-Based Algorithms, Springer, 2005.