
Masarykova univerzita
Přírodovědecká fakulta

Mathematical models of dissolution


Master's thesis

Brno 2009
Jakub Cupera
Declaration

I hereby declare that this thesis is my original authorial work, which I have worked out on my own. All sources, references and literature used or excerpted during the elaboration of this work are properly cited and listed in complete reference to the due source.

Brno, May 4, 2009



Jakub Cupera
Acknowledgement

I would like to thank my advisor, doc. RNDr. Petr Lansky, CSc., for his friendly and pleasant attitude, enthusiasm and care, and my parents for their love and support.
Abstract

We present basic models of dissolution and their stochastic modifications, including a new model based on the theory of stochastic differential equations. The theory of Fisher information is applied to the stochastic models in order to obtain the optimal times at which to measure experimental dissolution data. Parameters of the studied dissolution models are estimated by the maximum likelihood method, and appropriate Matlab procedures are presented.
Keywords

Deterministic models of dissolution, stochastic models of dissolution, Fisher information, Rao-Cramer lower bound, parameter estimation, maximum likelihood method.
Contents

Introduction 2

Notation 3

1 Models of dissolution 4
1.1 Deterministic models . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.1 Homogenous model . . . . . . . . . . . . . . . . . . . . 6
1.1.2 Weibull model . . . . . . . . . . . . . . . . . . . . . . . 8
1.1.3 Hixson-Crowell model . . . . . . . . . . . . . . . . . . 9
1.2 Stochastic models . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2.1 Gaussian model . . . . . . . . . . . . . . . . . . . . . . 11
1.2.2 Log-normal model . . . . . . . . . . . . . . . . . . . . 14

2 Theoretical background 21
2.1 Fisher information and its lower bound . . . . . . . . . . . . . 21
2.2 Maximum likelihood estimation . . . . . . . . . . . . . . . . . 26

3 Parameter estimation in stochastic models of dissolution 28


3.1 Fisher information in theory of dissolution . . . . . . . . . . . 28
3.1.1 Single-parameter model . . . . . . . . . . . . . . . . . 28
3.1.2 Two-parameter model . . . . . . . . . . . . . . . . . . 31
3.2 Parameters of dissolution models and their maximum likelihood estimation . . . . . 37
3.2.1 Gaussian model . . . . . . . . . . . . . . . . . . . . . . 37
3.2.2 Log-normal model . . . . . . . . . . . . . . . . . . . . 38
3.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.3.1 Stochastic homogenous model . . . . . . . . . . . . . . 40
3.3.2 Stochastic Weibull model . . . . . . . . . . . . . . . . . 43
3.3.3 Stochastic Hixson-Crowell model . . . . . . . . . . . . 47

4 Computational procedures and examples 50


4.1 Simulation of random processes . . . . . . . . . . . . . . . . . 50
4.2 Maximum likelihood estimation . . . . . . . . . . . . . . . . . 51

Summary 59

Bibliography 60

Introduction

Tablet dissolution testing, where the drug release is observed as a function of time, is an essential part of tablet formulation studies. These tests provide information about the dissolution mechanism and ensure the reproducibility of drug release, which is important for tablet quality assurance. However, dissolution tests are time-consuming, and thus optimal planning of the experiment and sophisticated methods for evaluating its results are needed. Identification of the parameters of specific dissolution models is a crucial problem in dissolution studies. The aim of this thesis is to find the optimal times for observation of the empirical dissolution data when the parametric form of the dissolution profile is assumed to be known and its parameters are to be estimated. Our approach is based on stochastic properties of the dissolution process and on the concept of Fisher information.
This text is divided into four chapters. The most important results from the theory of dissolution are introduced in the first chapter, where the most common deterministic models are described and a new stochastic model is proposed. In the second chapter we summarize well-known results from the theory of regular densities, Fisher information and maximum likelihood estimation. In the third chapter, the theory of Fisher information is applied to the stochastic models of dissolution, and maximum likelihood estimation of their parameters is proposed. Finally, a brief introduction to Matlab programs for numerical simulation of random processes and to procedures for maximum likelihood estimation is presented. The proposed theory is applied to Monte Carlo simulated concentration data.
This thesis is supplemented by a CD with Matlab procedures and programs. Their descriptions can be displayed with the command help name-of-procedure.

Notation

Matrices and fields

R^n                      n-dimensional real Euclidean space
y = (y_1, ..., y_n)^T    n-dimensional column vector
R^n_+ = {y ∈ R^n : y_i > 0, i = 1, ..., n}
I_n                      unit matrix of size n
J = (J_ij)_{i,j=1}^n     square matrix of size n
J^{-1}                   inverse matrix of the matrix J
J^T                      transposed matrix to J
|J|                      determinant of the square matrix J
J ≥ 0                    denotes that the matrix J is positive semi-definite

Random vectors

θ = (θ_1, ..., θ_m)      m-dimensional vector parameter
θ̂ = (θ̂_1, ..., θ̂_m)      estimate of the vector parameter θ
X = (X_1, ..., X_q)^T    q-dimensional random vector
x = (x_1, ..., x_q)^T    realisation of the random vector X
f(x; θ)                  probability density of the random vector X with vector parameter θ
EX                       expected value of the random vector X
varX                     covariance matrix of the random vector X, resp. variance of the random variable X
cov(X_i, X_j)            covariance of two random variables X_i, X_j; it holds cov(X_i, X_i) = varX_i
X ∼ L                    random variable X has probability distribution L
N(μ, σ²)                 Gaussian distribution with mean μ and variance σ²
logN(μ, σ²)              log-normal distribution with parameters μ and σ²

Functions

argmax_{y∈Y} f(y) = {y_0 ∈ Y : f(y_0) ≥ f(y) for all y ∈ Y}

argmin_{y∈Y} f(y) = {y_0 ∈ Y : f(y_0) ≤ f(y) for all y ∈ Y}

ḟ(θ) = df(θ)/dθ = f′(θ)   differentiation of f(θ) with respect to θ

1 Models of dissolution
Dissolution is defined as a process of attraction and association of molecules of a solvent with molecules of a solute. The number of these associated particles is given in moles, where 1 mole contains approximately 6.022045 × 10²³ particles. The concentration C is defined as the number of solvent molecules in a unit volume of solute, mathematically C = n/V, where n is the total number of dissolved particles and V is the total volume of solute.
The mathematical models of dissolution can be divided into two basic groups: deterministic and stochastic. Both of them investigate the whole population of particles and describe the time course of the concentration C(t). While the deterministic models work with a given function C(t), the stochastic ones describe the dissolution as a random process. It is natural to assume that the concentration C(t) is a nondecreasing function of time. Furthermore, we assume that the function C(t) is continuous and smooth (with derivatives of all orders), C(0) = 0 and C(t) → C_S for increasing t, where C_S is the limit concentration after the solvent is dissolved or the solute is saturated. For comparison among the dissolution profiles, we normalize the profile C(t) to the form

F(t) = \frac{C(t)}{C_S},    (1.1)

where the function F(t) expresses the dissolved fraction of solvent at the time instant t. Note that F(t) → 1 for increasing t.

Function F(t) satisfies all the conditions to be a cumulative distribution function (cdf) of a random variable, and thus it can be seen as the cdf of a random variable T representing the time until a randomly selected molecule enters solution. Due to the assumptions made on C(t), the function F(t) is continuous and smooth, with corresponding probability density dF(t)/dt of the random variable T.

An alternative way to characterize the dissolution is by defining the fractional dissolution rate

k(t) = \frac{dF(t)/dt}{1 − F(t)}.    (1.2)

The quantity k(t)dt can be seen as the conditional probability that a randomly selected particle of solvent will be dissolved in the interval [t, t + dt), under the condition that this has not happened up to time t.

Fig. 1.1: Different profiles of fractional dissolution rate and corresponding cdfs. (A) Weibull functions (see Section 1.1.2) with scale parameter a = 1 and shape parameters b = 0.9 (red) and b = 1.1 (black), (B) corresponding fractional dissolution rates.

If the form of the fractional dissolution rate k(t) is known, then the dissolution profile can be evaluated from the equation

F(t) = 1 − \exp\left(−\int_0^t k(s)\,ds\right),    (1.3)

which is the solution of differential equation (1.2) with initial condition F(0) = 0. An advantage of the fractional dissolution rate is its sensitivity to certain properties of the dissolution profiles F(t). In Fig. 1.1 we can see that although the shapes of F(t) are hardly distinguishable, the shapes of k(t) are apparently different.
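The correspondence (1.3) between k(t) and F(t) can be verified numerically. The following Python sketch (illustrative only; the procedures accompanying this thesis are written in Matlab) recovers F(t) from the Weibull rate of Section 1.1.2 and compares it with the closed-form profile:

```python
import numpy as np

def profile_from_rate(k, t_grid):
    """Numerically evaluate F(t) = 1 - exp(-int_0^t k(s) ds), equation (1.3),
    from a fractional dissolution rate k via the trapezoidal rule."""
    ks = k(t_grid)
    integral = np.concatenate(
        ([0.0], np.cumsum((ks[1:] + ks[:-1]) / 2 * np.diff(t_grid)))
    )
    return 1 - np.exp(-integral)

# the Weibull rate k(t) = a*b*t^(b-1) should reproduce F(t) = 1 - exp(-a*t^b)
a, b = 1.0, 1.1
t = np.linspace(1e-6, 3.0, 2000)
F_num = profile_from_rate(lambda s: a * b * s ** (b - 1), t)
F_exact = 1 - np.exp(-a * t**b)
print(np.max(np.abs(F_num - F_exact)))  # small discretization error
```

The same routine works for any nonnegative rate k(t), which is how (1.3) is used in practice when only the rate is modeled.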


A crucial problem in dissolution studies is identification of the dissolution profile from measured data. In contrast to the typical problem of statistical inference, we are not able to measure realizations of the random variable T. What is measurable is the concentration C(t), which is related to F(t) by equation (1.1). So, under the assumption that we know or can measure the limit concentration C_S, we can determine empirical values of the dissolved fraction F(t). There are two basic methods used to fit the measured data. The nonparametric methods, like kernel smoothing, are descriptive and can give information about the half dissolution time t_{1/2}, where F(t_{1/2}) = 0.5, or the mean dissolution time ET given by the formula

ET = \int_0^∞ (1 − F(t))\,dt.    (1.4)

The parametric methods fit the measured data to certain functions, whose estimated parameters can give us information about properties of the dissolution process and consequently about the random variable T.

1.1 Deterministic models


Every continuous cdf defined on the interval [0, ∞) potentially characterizes a model of dissolution. The most common models in the theory of dissolution are described in the following sections.

1.1.1 Homogenous model


The basic deterministic model of dissolution, called homogenous or first-order model, is described by the differential equation

\frac{dC(t)}{dt} = a(C_S − C(t)),  C(0) = 0,    (1.5)

where a > 0 is a constant and C_S is the limit concentration achieved after the solvent is dissolved or the solute is saturated. In this model the rate of dissolution, dC(t)/dt, is proportional to the difference between the instantaneous concentration and the final concentration of solvent. This model was introduced by the chemists Noyes and Whitney in [23]. In generalizations of (1.5) the constant a is often considered to depend on temperature, surface of solvent, etc.; for example, see Section 1.1.3. The solution of differential equation (1.5) is

C(t) = C_S(1 − e^{−at}),    (1.6)


thus

F(t) = 1 − e^{−at}.    (1.7)

The fractional dissolution rate defined by equation (1.2) is constant, k(t) ≡ a, for model (1.7). Thus the probability that a selected particle of the solvent, undissolved at time t, will be dissolved in the time interval [t, t + dt) is constant. This property makes the homogenous model specific among the whole spectrum of dissolution models.

As seen from equation (1.7), the dissolution never ends in this model, and thus there is always a small amount of undissolved substance. Practically, as the end of the dissolution we could consider, for example, the time instant when 99% of the solvent is dissolved. Examples of the homogenous model with different values of parameter a are shown in Fig. 1.2.
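For model (1.7) this practical endpoint has a closed form: setting F(t) = 0.99 gives t = −ln(0.01)/a = ln(100)/a. A small illustrative Python sketch (the thesis's own procedures are in Matlab):

```python
import math

def t_end(a, fraction=0.99):
    """Time at which the homogenous profile F(t) = 1 - exp(-a t)
    reaches the given dissolved fraction."""
    return -math.log(1 - fraction) / a

for a in (0.5, 1.0, 2.0):   # the parameter values used in Fig. 1.2
    print(a, t_end(a))      # doubling a halves the dissolution time
```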

Fig. 1.2: Time course of dissolution profile F(t) of homogenous model (1.7) with parameter a = 0.5 (blue), a = 1 (red) and a = 2 (black).


1.1.2 Weibull model


Although the Weibull model is one of the most successful in fitting experimental dissolution curves to theoretical functions (see refs. [3], [11], [32]), it has not been deduced from any physical law. This model was introduced by Langenbucher in [14] and is described by the cdf

F(t) = 1 − e^{−at^b},    (1.8)

where a > 0 is the scale parameter and b > 0 is the shape parameter. The fractional dissolution rate of the Weibull model has the form

k(t) = abt^{b−1}    (1.9)

and is increasing (b > 1), decreasing (b ∈ (0, 1)) or constant (b = 1). How the parameters a and b affect the course of the dissolution profile is illustrated in Fig. 1.3. Note that for b = 1 this model coincides with homogenous model (1.7). Similarly to it, the dissolution never ends in this model and there is always an infinitely small amount of undissolved substance. Practically, as the end of the dissolution we could again consider the time instant when 99% of the solvent is dissolved.
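Inverting (1.8) gives that endpoint in closed form, t = (−ln(0.01)/a)^{1/b}, which shows how the shape parameter b stretches or compresses the tail of the dissolution. A hedged Python sketch:

```python
import math

def weibull_t_end(a, b, fraction=0.99):
    """Time at which the Weibull profile F(t) = 1 - exp(-a t^b)
    reaches the given dissolved fraction."""
    return (-math.log(1 - fraction) / a) ** (1 / b)

print(weibull_t_end(1.0, 1.0))  # b = 1: coincides with the homogenous model
print(weibull_t_end(1.0, 4.0))  # b > 1 reaches 99% much earlier
print(weibull_t_end(1.0, 0.5))  # b < 1 much later
```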
As the Weibull model is only descriptive and has not been deduced from any fundamental physical law, it has been the subject of some criticism. Costa and Lobo summarized these arguments in [3], such as the lack of any kinetic background, or the fact that the model has no single parameter related to the intrinsic dissolution rate of the solvent.

Fig. 1.3: Time course of dissolution profile F(t) of Weibull model (1.8). (A) Parameter b fixed, b = 3, and parameter a = 4 (black), a = 1 (red), a = 0.4 (blue). (B) Parameter a fixed, a = 1, and parameter b = 0.4 (black), b = 1 (red) and b = 4 (blue).


1.1.3 Hixson-Crowell model


From physical principles one can expect that the rate of dissolution depends on the surface of the solvent: the larger the area, the faster the dissolution. This statement can be mathematically expressed as follows:

\frac{dF(t)}{dt} = a_0 S(t),  F(0) = 0,    (1.10)

where S(t) is the surface of the solvent at the time instant t, and a_0 > 0 is a constant. The solution of differential equation (1.10) for a spherical solid dosage form of the solvent is presented in the following text.

For a sphere with radius r, the surface S = 4πr² is related to the volume V = \frac{4}{3}πr³ by the formula S = a_1 V^{2/3}, where a_1 = \sqrt[3]{36π}. Furthermore, the volume V is linearly proportional to the weight w, so we can evaluate the surface area as

S(t) = a_2 w^{2/3}(t),    (1.11)

where w(t) is the remaining weight of the solvent at the time instant t and a_2 = a_1 ϱ^{−2/3} = \sqrt[3]{36π/ϱ²}, where ϱ is the density of the solvent. The dissolved fraction F(t) can be written as a function of the weight w(t),

F(t) = 1 − \frac{w(t)}{w_0},    (1.12)

where w_0 = w(0) is the initial amount of the solvent. Substituting (1.11) and (1.12) into (1.10) gives

\frac{dw(t)}{dt} = −a_3 w^{2/3}(t),    (1.13)

where a_3 = a_0 a_2 w_0, with solution

w_0^{1/3} − w^{1/3}(t) = a_4 t,    (1.14)

where a_4 = \frac{1}{3}a_3 = \frac{1}{3}a_0 a_2 w_0 is constant. Dividing equation (1.14) by w_0^{1/3} and inserting into (1.12) gives us the time course of the dissolved fraction

F(t) = 1 − (1 − at)³,    (1.15)

where a = a_4 w_0^{−1/3} = \frac{1}{3}a_0 \sqrt[3]{36π w_0²/ϱ²} is constant. Using the knowledge of the initial radius r_0 and density ϱ of the solid dosage form of the solvent, we can write w_0 = 4πϱr_0³/3, thus a = \frac{4}{3}πa_0 r_0². In contrast to the previous models, the dissolution ends at the finite time t = 1/a. This model was introduced by Hixson and Crowell in [12] and can be generalized to the form

F(t) = \begin{cases} 1 − (1 − at)^b & \text{for } t ∈ [0, 1/a], \\ 1 & \text{for } t > 1/a, \end{cases}    (1.16)


where a > 0, b > 0 are constants. These models are called the root laws
because of the form of equation (1.14). How the parameters a, b affect the
shape of F (t) can be seen in Fig. 1.4.
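The cube-root law (1.14) can be cross-checked against a direct numerical integration of (1.13); the constants and step size below are illustrative choices. A minimal Python sketch:

```python
import numpy as np

# Euler integration of dw/dt = -a3 * w^(2/3), equation (1.13);
# a3, w0 and the step size dt are illustrative.
a3, w0, dt = 1.0, 1.0, 1e-4
t_grid = np.arange(0.0, 1.0 + dt, dt)
w_num = np.empty_like(t_grid)
w = w0
for i in range(len(t_grid)):
    w_num[i] = w
    w = max(w - dt * a3 * w ** (2 / 3), 0.0)

# exact solution following from the cube-root law: w^(1/3) = w0^(1/3) - (a3/3) t
w_exact = np.maximum(w0 ** (1 / 3) - (a3 / 3) * t_grid, 0.0) ** 3
print(np.max(np.abs(w_num - w_exact)))  # small discretization error
```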

Fig. 1.4: Time course of dissolution profile F(t) of Hixson-Crowell model (1.16) with (A) fixed parameter b = 0.5, parameter a = 0.3 (black), a = 0.6 (red) and a = 1.2 (blue), (B) fixed parameter a = 2, parameter b = 0.5 (black), b = 1 (red) and b = 5 (blue).

The fractional dissolution rate of model (1.16) is

k(t) = \begin{cases} \dfrac{ab}{1 − at} & \text{for } t ∈ [0, 1/a], \\ 0 & \text{for } t > 1/a, \end{cases}    (1.17)

and is an increasing function of time until the end of the dissolution.


Models (1.7), (1.8) and (1.16) are the most common in the theory of
dissolution. For more information see ref. [3],[6],[17].

1.2 Stochastic models


The models presented in Section 1.1 are stochastic from the microscopic point of view, but macroscopically they behave deterministically. It means that under identical conditions the dissolution profile remains unchanged. In this section we present stochastic modifications of the deterministic models. The random component used in the modified models disturbs our assumption of strictly increasing concentration C(t), but it allows us to study stochastic properties of the models. An interesting discussion of locally decreasing concentration and the physical principle of the dissolution process can be found in ref. [15].


1.2.1 Gaussian model


Any deterministic model of dissolution described by a cdf F(t) does not take into account random factors influencing the process of dissolution. In order to obtain a more realistic picture of reality, the stochastic model of dissolution can take the form

η(t) = F(t) + s(t)ξ(t),    (1.18)

where ξ(t) is a random process called white noise, with properties

Eξ(t) = 0,  cov(ξ(t), ξ(s)) = \begin{cases} 0 & s ≠ t, \\ 1 & s = t, \end{cases}    (1.19)

and F(t) is a selected deterministic dissolution profile we wish to randomize. The function s(t) determines the amplitude of the noise at the time instant t. In applied science, white noise is often taken as a mathematical idealization of phenomena involving sudden fluctuations of any size; for a formal mathematical treatment see ref. [25]. The simplest example which can be proposed is to assume that the noise has Gaussian distribution (Gaussian white noise),

η(t) ∼ N(F(t), s²(t)),    (1.20)

with probability density

f(x, t) = \frac{1}{\sqrt{2πs²(t)}} \exp\left(−\frac{(x − F(t))²}{2s²(t)}\right),  x ∈ R.    (1.21)

An example of Gaussian probability densities with mean described by homogenous model (1.7) and constant variance is shown in Fig. 1.5.
Model (1.18) violates our assumption of continuity of the dissolution trajectories. To improve it we should assume the model

η(t) = F(t) + \frac{s(t)}{\sqrt{t}} W(t),    (1.22)

where W(t) is a standard Wiener process given by the following definition.

Definition 1.1. A standard Wiener process (or a standard Brownian motion) is a stochastic process W(t), t ≥ 0, with the following properties:
1. W (0) = 0,

2. W (t) is continuous with probability 1,

3. process W (t) has independent increments,


4. W(t + s) − W(s) ∼ N(0, t) for s ≥ 0, t > 0.

Conditions 1 and 4 of Definition 1.1 give, for s = 0,

W(t) ∼ N(0, t),    (1.23)

thus model (1.22) has probability distribution (1.20). Furthermore, it can be shown (see ref. [31]) that

W(t) = \int_0^t ξ(s)\,ds,    (1.24)

where ξ(s) is the white noise given by (1.19). Sample paths of the random processes (1.22) and (1.24) are shown in Fig. 1.6, and details about their numerical simulation are given in Chapter 4.
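The Wiener process can be simulated directly from its independent Gaussian increments (property 4 of Definition 1.1), and a sample path of model (1.22) is then obtained by scaling. An illustrative Python sketch (the thesis's simulation procedures are in Matlab; the profile and variance below are those of Fig. 1.6):

```python
import numpy as np

rng = np.random.default_rng(0)

def wiener_path(t_grid, rng):
    """Sample a standard Wiener process on t_grid (t_grid[0] = 0)
    from independent increments W(t + dt) - W(t) ~ N(0, dt)."""
    dW = rng.normal(0.0, np.sqrt(np.diff(t_grid)))
    return np.concatenate(([0.0], np.cumsum(dW)))

# sample path of model (1.22) with the profile and variance of Fig. 1.6
t = np.linspace(0.0, 3.0, 1001)
F = 1 - np.exp(-t**2)
s = np.sqrt(0.04 * np.exp(-t**2) * (1 - np.exp(-t**2)))
W = wiener_path(t, rng)
eta = F.copy()
eta[1:] += s[1:] / np.sqrt(t[1:]) * W[1:]   # avoid 0/0 at t = 0
```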
Our aim is to analyze measured data; thus we assume that the variance s²(t) of the data obtained from stochastic model (1.18), resp. (1.22), has the form

s²(t) = σ²(t) + ε(t),    (1.25)

where σ²(t) is the variance of the dissolution and ε(t) ≥ 0 reflects the measurement error. The function σ²(t) is unknown in general. We should assume
Fig. 1.5: Gaussian probability density (1.21) with mean described by homogenous model (1.7), parameter a = 1, and constant variance of the data s²(t) ≡ 0.04.


Fig. 1.6: Sample paths of random processes. (A) Sample paths of the Wiener process W(t), (B) sample paths of random process (1.22) with mean F(t) = 1 − exp(−t²) and variance s²(t) = 0.04 exp(−t²)(1 − exp(−t²)).

that σ²(0) = 0 (nothing is dissolved) and σ²(t) tends to zero for increasing t (everything is dissolved). As an example of such a function we can take

σ²(t) = pF(t)(1 − F(t)),    (1.26)

where F(t) is the dissolution profile and p > 0 is a constant. It can be easily verified that this function satisfies the assumptions mentioned above. Furthermore, it has maximum p/4 at the time instant t_{1/2}, where F(t_{1/2}) = 1/2. The measurement error ε(t) should be proportional to the instantaneous concentration. Due to Weber's law (see ref. [33]) the correctly discriminated stimulus (for us, concentration) is a constant fraction of the stimulus magnitude. So ε(t), which represents the size of the error, is proportional to F(t): ε(t) = qF(t) + r, where q, r ≥ 0 are constants.
A disadvantage of models (1.18) and (1.22) with probability distribution (1.20) is that they permit observations outside the interval [0, 1], and thus their modification may appear useful. Experimental data showing permanent fluctuations in both directions around 100% were presented e.g. in [26] or [34], but it is not realistic that the concentration data take negative values for small t. To improve the model one may require that the noise effect becomes asymmetric at the beginning of the dissolution. This could be achieved by considering noise with a different probability distribution instead of the Gaussian one. Such a model is not proposed here.


1.2.2 Log-normal model


Lansky and Weiss introduced in [15] another stochastic modification of homogenous model (1.7). In parallel with these authors, let us assume that the fractional dissolution rate is corrupted by a white noise defined by (1.19),

κ(t) = k(t) + σξ(t),    (1.27)

where σ > 0 is the amplitude of the noise. From (1.2), after replacing F(t) by η(t) to distinguish between the deterministic and stochastic models, we obtain the stochastic differential equation

dη(t) = k(t)(1 − η(t))dt + σ(1 − η(t))dW(t),  η(0) = 0,    (1.28)

where W(t) is a standard Wiener process given by Definition 1.1, and dW(t) = ξ(t)dt we obtain from (1.24). Applying the substitution U(t) = 1 − η(t), dU(t) = −dη(t), to equation (1.28) gives

\frac{dU(t)}{U(t)} = −k(t)dt − σdW(t),  U(0) = 1,    (1.29)

thus

\int_0^t \frac{dU(s)}{U(s)} = −\int_0^t k(s)\,ds − σW(t).    (1.30)

To evaluate the integral on the left-hand side we have to introduce the Itô calculus.

Definition 1.2 (1-dimensional Itô process). Let W(t) be a Wiener process. A (1-dimensional) Itô process is a stochastic process X(t) of the form

X(t) = X(0) + \int_0^t u(s)\,ds + \int_0^t v(s)\,dW(s),    (1.31)

or equivalently

dX(t) = u(t)dt + v(t)dW(t),

where u(t), v(t) are random processes satisfying

P\left(\int_0^t |u(s)|\,ds < ∞ \text{ for all } t ≥ 0\right) = 1

and

P\left(\int_0^t v(s)²\,ds < ∞ \text{ for all } t ≥ 0\right) = 1.


We are now ready to state the main theorem of Itô calculus.

Theorem 1.1 (1-dimensional Itô formula). Let X(t) be an Itô process defined by (1.31). Let g(t, x) be a function of x ∈ R and t ≥ 0 that is twice continuously differentiable in x and once continuously differentiable in t, and let W(t) be a Wiener process. Denote by g′_t(t, x), g′_x(t, x) and g″_xx(t, x) the first and second partial derivatives of g(t, x) with respect to the variables t and x. Then

Y(t) = g(t, X(t))

is again an Itô process, and

dY(t) = g′_t(t, X(t))dt + g′_x(t, X(t))dX(t) + \frac{1}{2}g″_xx(t, X(t))[dX(t)]²,    (1.32)

where [dX(t)]² = dX(t)dX(t) is computed according to the rules

dt\,dt = dt\,dW(t) = dW(t)\,dt = 0;  dW(t)\,dW(t) = dt.    (1.33)

Proof of the theorem can be found in [25]. To evaluate the integral on the left-hand side of (1.30) we use Itô formula (1.32) for the function

g(t, x) = \ln x,  x > 0,

and obtain

d(\ln U(t)) = \frac{1}{U(t)}dU(t) − \frac{1}{2}\frac{1}{U²(t)}[dU(t)]² = \frac{dU(t)}{U(t)} − \frac{σ²U²(t)}{2U²(t)}dt = \frac{dU(t)}{U(t)} − \frac{1}{2}σ²dt.

Hence

\frac{dU(t)}{U(t)} = d(\ln U(t)) + \frac{1}{2}σ²dt,

so from (1.30) we conclude

\ln\frac{U(t)}{U(0)} = −\int_0^t k(s)\,ds − \frac{1}{2}σ²t − σW(t),

where U(0) = 1, so

U(t) = \exp\left(−\int_0^t k(s)\,ds − \frac{1}{2}σ²t − σW(t)\right),


and thus

η(t) = 1 − \exp\left(−\int_0^t k(s)\,ds − \frac{1}{2}σ²t − σW(t)\right).    (1.34)
We wish to obtain the probability distribution of the random variable η(t). Due to (1.23), the expression in the exponent in (1.34) has Gaussian probability distribution

N\left(−\int_0^t k(s)\,ds − \frac{1}{2}σ²t, \; σ²t\right),

thus the random variable 1 − η(t) has log-normal distribution (see ref. [21]),

1 − η(t) ∼ logN(μ(t), σ²(t)),    (1.35)

with probability density

f(x, t) = \frac{1}{x\sqrt{2πσ²(t)}} \exp\left(−\frac{(\ln x − μ(t))²}{2σ²(t)}\right),  x ∈ (0, ∞),    (1.36)

where the coefficients μ(t), σ²(t) are

μ(t) = −\int_0^t k(s)\,ds − \frac{1}{2}σ²t,
σ²(t) = σ²t.

Fig. 1.7: Sample paths of a random process η(t) with indicated Weibull dissolution profile F(t) given by (1.8) with parameters a = 1, b = 2. (A) Random process η(t) defined by equation (1.34), σ = 0.2; (B) random process η(t) defined by equation (1.42), σ = 0.05.


It holds that

E(1 − η(t)) = 1 − Eη(t),  var(1 − η(t)) = var η(t),

and by applying (1.3) to the expressions for the mean and variance of the random process 1 − η(t) (see ref. [21]) we obtain

Eη(t) = F(t),
var η(t) = (1 − F(t))²(\exp(σ²t) − 1).

As the function f(x, t) for fixed t gives the probability density of the random variable 1 − η(t), the random variable η(t) has probability density f(1 − x, t).
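These two moment formulas are easy to verify by Monte Carlo for the homogenous rate k(t) = a, for which the exponent in (1.34) is Gaussian with known parameters. An illustrative Python sketch (a, σ and t are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo check of the moments of (1.34) for k(t) = a: at a fixed t the
# exponent is N(-a t - sigma^2 t / 2, sigma^2 t).
a, sigma, t, n = 1.0, 0.2, 1.0, 200_000
exponent = rng.normal(-a * t - 0.5 * sigma**2 * t, sigma * np.sqrt(t), size=n)
eta = 1.0 - np.exp(exponent)

F = 1 - np.exp(-a * t)   # deterministic profile (1.7)
print(eta.mean(), F)                                       # mean matches F(t)
print(eta.var(), (1 - F)**2 * (np.exp(sigma**2 * t) - 1))  # variance formula
```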
Model (1.34) suffers from certain defects. In Fig. 1.7 (A) it can be seen that this model completely prevents the concentration from taking values above 100%, but it allows it to drop below zero; thus the model does not satisfy our requirements. In equation (1.28) the random component dW(t) influences the process of dissolution at its beginning only, because the term 1 − η(t) is large for small t. To obtain a more precise model one should assume, for example, the stochastic differential equation

dη(t) = k(t)(1 − η(t))dt + ση(t)dW(t),    (1.37)

or

dη(t) = k(t)(1 − η(t))dt + ση(t)(1 − η(t))dW(t),    (1.38)

Fig. 1.8: Sample paths of dissolution profile η(t), k(t) = 2t. (A) Model (1.37), σ = 0.1; (B) model (1.38), σ = 0.4.


with initial condition η(0) = 0, where the random factor dW(t) does not influence the process of dissolution at its beginning and end. In this case we are not able to find solutions of stochastic differential equations (1.37) and (1.38) in analytic form. Their numerical approximations (see Section 4.1) can be seen in Fig. 1.8.
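Such numerical approximations are commonly produced by the Euler–Maruyama scheme. A hedged Python sketch for SDEs of the form dη = k(t)(1 − η)dt + g(η)dW, with drift and diffusion coefficients chosen to match Fig. 1.8 (the function names are illustrative, not the thesis's Matlab procedures):

```python
import numpy as np

rng = np.random.default_rng(2)

def euler_maruyama(k, g, t_grid, rng):
    """Euler-Maruyama approximation of d(eta) = k(t)(1 - eta)dt + g(eta)dW
    with eta(0) = 0 on the given time grid."""
    eta = np.zeros_like(t_grid)
    for i in range(len(t_grid) - 1):
        dt = t_grid[i + 1] - t_grid[i]
        dW = rng.normal(0.0, np.sqrt(dt))
        eta[i + 1] = eta[i] + k(t_grid[i]) * (1 - eta[i]) * dt + g(eta[i]) * dW
    return eta

t = np.linspace(0.0, 3.0, 3001)
k = lambda s: 2 * s                                              # rate of Fig. 1.8
path_a = euler_maruyama(k, lambda e: 0.1 * e, t, rng)            # noise vanishing at 0
path_b = euler_maruyama(k, lambda e: 0.4 * e * (1 - e), t, rng)  # vanishing at 0 and 1
```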
Let us study another model with a solution in analytic form. Stochastic differential equation (1.28), with solution (1.34), can be rewritten after dividing by (1 − η(t))dt to the form

\frac{\frac{d}{dt}η(t)}{1 − η(t)} = \frac{\frac{d}{dt}F(t)}{1 − F(t)} + σξ(t),  η(0) = 0.

As we have seen, this model does not satisfy our assumptions, and thus let us consider the stochastic differential equation

\frac{\frac{d}{dt}η(t)}{η(t)} = \frac{\frac{d}{dt}F(t)}{F(t)} + σξ(t),  η(0) = 0.    (1.39)

Let ε > 0 be small. First we solve equation (1.39) with initial condition η(ε) = F(ε). This equation can be solved similarly to equation (1.28); after application of the Itô theorem we obtain

\ln\frac{η(t)}{η(ε)} = \ln\frac{F(t)}{F(ε)} − \frac{1}{2}σ²t + σW(t),

which can be rewritten as

η(t) = \exp\left(\ln\frac{η(ε)}{F(ε)} + \ln F(t) − \frac{1}{2}σ²t + σW(t)\right).    (1.40)

Our purpose is to find the limit for ε → 0. As η(ε) is a random process, we have to use a stochastic approach.
Definition 1.3. Let X(t) be a random process. We say that X(t) has limit in the mean X(t_0) for t → t_0 if

\lim_{t→t_0} E(X(t) − X(t_0))² = 0.

We write

\mathrm{l.i.m.}_{t→t_0}\, X(t) = X(t_0).

Theorem 1.2. Let F(t) be a dissolution profile and η(t) a random process with the properties Eη(t) = F(t) and var η(t) = o(F²(t)) for small t. Then

\mathrm{l.i.m.}_{t→0}\, \frac{η(t)}{F(t)} = 1.    (1.41)

Proof. Using Eη(t) = F(t), we have

\lim_{t→0} E\left(\frac{η(t)}{F(t)} − 1\right)² = \lim_{t→0}\left(\frac{Eη²(t)}{F²(t)} − 2\frac{Eη(t)}{F(t)} + 1\right) = \lim_{t→0}\left(\frac{var η(t) + (Eη(t))²}{F²(t)} − 1\right) = \lim_{t→0}\frac{var η(t)}{F²(t)} = 0. □

Hence (1.40) gives for ε → 0 the solution of stochastic differential equation (1.39) in the form

η(t) = \exp\left(\ln F(t) − \frac{1}{2}σ²t + σW(t)\right).    (1.42)

This random process has log-normal probability distribution

η(t) ∼ logN(μ(t), σ²(t))    (1.43)

with coefficients

μ(t) = \ln F(t) − \frac{1}{2}σ²t,
σ²(t) = σ²t;

the mean and variance have the form

Eη(t) = F(t),
var η(t) = F²(t)(\exp(σ²t) − 1).

Here we can see that the random process η(t) fulfills the conditions mentioned in Theorem 1.2. The model described by stochastic differential equation (1.39) with analytical solution (1.42) was deduced from the model described by (1.28), on which additional conditions were imposed. A disadvantage of this model is that its variance, var η(t), tends to infinity for large t. On the other hand, this model prevents the dissolution data from dropping below zero and permits the data to fluctuate around 100%, as we can see in Fig. 1.7. Furthermore, in Fig. 1.8 (A) it can be seen that the sample paths of model (1.37) are similar to the sample paths of solution (1.42) of stochastic differential equation (1.39) depicted in Fig. 1.7 (B).


Model (1.42) satisfies our assumptions, and thus in what follows we deal with the generalized model with log-normal probability distribution (1.43). Its parameters μ(t), σ²(t) we obtain from the equations for the mean and variance of the log-normal distribution (see ref. [21]),

Eη(t) = e^{μ(t) + \frac{1}{2}σ²(t)} = F(t),    (1.44)
var η(t) = e^{2μ(t) + σ²(t)}\left(e^{σ²(t)} − 1\right) = s²(t),    (1.45)

where F(t) is a dissolution profile and s²(t) is the variance of the dissolution without the measurement error. After a short calculation we obtain

μ(t) = \ln F(t) − \frac{1}{2}σ²(t),    (1.46)
σ²(t) = \ln\left(1 + \frac{s²(t)}{F²(t)}\right).    (1.47)
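Equations (1.46)–(1.47) can be checked by sampling: a log-normal variable with these parameters should reproduce the prescribed mean F(t) and variance s²(t). An illustrative Python sketch (the values of F and s² at a fixed t are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

def lognormal_params(F, s2):
    """Parameters (1.46)-(1.47) of a log-normal variable with mean F and variance s2."""
    sig2 = np.log(1 + s2 / F**2)
    mu = np.log(F) - sig2 / 2
    return mu, sig2

F, s2 = 0.6, 0.01
mu, sig2 = lognormal_params(F, s2)
sample = rng.lognormal(mu, np.sqrt(sig2), size=500_000)
print(sample.mean(), sample.var())  # should be close to F = 0.6 and s2 = 0.01
```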

A disadvantage of all the models presented up to now is that they describe the dissolution without measurement error. The modified model can be described by a random process

ζ(t) = η(t) + \sqrt{ε(t)}\,ξ(t),    (1.48)

where \sqrt{ε(t)}ξ(t) represents the measurement error. It seems natural to assume that the measurement error has Gaussian distribution. The probability density of the random process given by (1.48) for fixed t is then described by the convolution of the corresponding densities of the log-normal and Gaussian distributions, and cannot be evaluated analytically.

If we consider the measurement error with Gaussian distribution, observations of negative concentrations at the beginning of the dissolution can appear again. To prevent this we assume the measurement error with log-normal distribution. Random process (1.48) with log-normal error is not log-normally distributed, but it can be reasonably approximated by a new random variable with log-normal distribution (see ref. [10]). Thus we assume that the original random process η(t) has the measurement error already taken into account and that the variance has the form var η(t) = s²(t) = σ²(t) + ε(t).

2 Theoretical background
Theoretical results needed in the text are presented in this chapter. It contains basic theory of regular densities, Fisher information and maximum likelihood estimation, including formulae for its numerical solution. The information was taken mostly from refs. [1], [2], [20], [27], where more details can be found.

2.1 Fisher information and its lower bound


Definition 2.1. Let Θ ⊆ R^n be a space of parameters. We say that the set F = {f(x; θ) : θ = (θ_1, ..., θ_n)^T ∈ Θ, x = (x_1, ..., x_m)^T ∈ R^m} is a system of regular densities if the following conditions are satisfied:

(a) Θ is a nonempty and open set,

(b) the support M = {x ∈ R^m : f(x; θ) > 0} is independent of θ,

(c) the finite partial derivative

\frac{∂f(x; θ)}{∂θ_i},  i = 1, ..., n,

exists for every x ∈ M,

(d) it holds that

\int_M \frac{∂f(x; θ)}{∂θ_i}\,dx = 0,  i = 1, ..., n,

for every θ ∈ Θ,

(e) the finite integral

J_{ij} = \int_M \frac{∂\ln f(x; θ)}{∂θ_i} \frac{∂\ln f(x; θ)}{∂θ_j} f(x; θ)\,dx,  i, j = 1, ..., n,    (2.1)

exists for every θ ∈ Θ and the matrix J = (J_{ij})_{i,j=1}^n is positive definite.


The matrix J is called the Fisher information matrix. If n = 1 (we deal with a single parameter) then J is called the Fisher information.

For example, if m = 1 and θ is a single parameter, then the Fisher information has the form

J = \int_M \frac{\left(\frac{d}{dθ}f(x; θ)\right)²}{f(x; θ)}\,dx.    (2.2)
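Formula (2.2) can be checked numerically; e.g. for the Gaussian density N(θ, σ²) with unknown mean θ, the Fisher information is known to be 1/σ². A hedged Python sketch using a central difference in θ and the trapezoidal rule in x (step sizes and the grid are illustrative choices):

```python
import numpy as np

def fisher_info(f, theta, x_grid, h=1e-5):
    """Single-parameter Fisher information (2.2), J = int (df/dtheta)^2 / f dx,
    via a central difference in theta and the trapezoidal rule in x."""
    df = (f(x_grid, theta + h) - f(x_grid, theta - h)) / (2 * h)
    y = df**2 / f(x_grid, theta)
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x_grid))

# Gaussian density with unknown mean theta and fixed variance sigma^2
sigma = 0.5
gauss = lambda x, th: np.exp(-(x - th)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
x = np.linspace(-10.0, 10.0, 20001)
print(fisher_info(gauss, 0.0, x))  # close to 1 / sigma^2 = 4
```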

Definition 2.2. Let X = (X1 , . . . , Xm )T be a random vector and f (x; ) be


a regular density. Then random vector
 T
T ln f (X; ) ln f (X; )
U() = (U1 (), . . . , Un ()) = ,..., (2.3)
1 n

is called score vector of the density f (x).

Theorem 2.1. Let f(x) = f(x; θ) be a regular density. Then the score vector
U = U(θ) of the density f(x) has mean EU = 0 and variance var U = J.

Proof. For every component U_i of the score vector U it holds
        EU_i = ∫_M (∂ ln f(x)/∂θ_i) f(x) dx = ∫_M ∂f(x)/∂θ_i dx = 0
due to condition (d) of regularity. The second statement follows from the definition
of the Fisher information matrix (2.1) and from var U = E[UU^T].
The following theorem shows a basic property of the Fisher information.

Theorem 2.2. Let X and Y be two independent random vectors with regular
probability densities f_X(x), f_Y(y) ∈ F, and corresponding Fisher information
matrices J^X, J^Y. Then the random vector Z = (X^T, Y^T)^T has the regular probability
density
        f_Z(z) = f_X(x)f_Y(y),  z = (x^T, y^T)^T,    (2.4)
and its Fisher information matrix takes the form J^Z = J^X + J^Y.

Proof. Conditions (a)–(d) of regularity can be easily verified for the joint prob-
ability density (2.4). For every component J^Z_ij of the Fisher information
matrix J^Z it holds

    J^Z_ij = ∫∫_{M×M} (∂f_Z(z; θ)/∂θ_i)(∂f_Z(z; θ)/∂θ_j) (1/f_Z(z; θ)) dz
           = ∫∫_{M×M} (∂[f_X(x)f_Y(y)]/∂θ_i)(∂[f_X(x)f_Y(y)]/∂θ_j) (1/(f_X(x)f_Y(y))) dx dy
           = ∫_M (∂f_X(x)/∂θ_i)(∂f_X(x)/∂θ_j)(1/f_X(x)) dx + ∫_M (∂f_Y(y)/∂θ_i)(∂f_Y(y)/∂θ_j)(1/f_Y(y)) dy
             + ∫_M (∂f_X(x)/∂θ_i) dx ∫_M (∂f_Y(y)/∂θ_j) dy + ∫_M (∂f_X(x)/∂θ_j) dx ∫_M (∂f_Y(y)/∂θ_i) dy
           = J^X_ij + J^Y_ij.

The last equation follows from condition (d) of regularity. Thus J^Z = J^X + J^Y
and the matrix J^Z is positive definite.
    Theorem 2.2 has an important consequence: let X_i for i = 1, . . . , m be
mutually independent random variables with regular probability densities
f_i(x; θ) and corresponding Fisher information matrices J_i. Then the random
vector X = (X_1, . . . , X_m)^T has the Fisher information matrix J = Σ_{i=1}^m J_i.
    The main importance of the Fisher information is given by the following theorem.
Theorem 2.3 (Rao-Cramér). Let F = {f(x; θ) : θ ∈ Θ ⊆ R} be a regular sys-
tem of densities with a single parameter θ. Let S be an unbiased estimator
of a parametric function g(θ), where ES² < ∞ for every θ ∈ Θ. Let the derivative
g′(θ) = dg(θ)/dθ exist for every θ ∈ Θ and let
        (d/dθ) ∫_M S(x)f(x; θ) dx = ∫_M S(x) (d/dθ)f(x; θ) dx.    (2.5)
Then it holds
        E(S − g(θ))² ≥ [g′(θ)]²/J    (2.6)
for every θ ∈ Θ.

Proof. We have
        ES = ∫_M S(x)f(x; θ) dx = g(θ),


hence condition (2.5) implies

        ∫_M S(x) (d/dθ)f(x; θ) dx = g′(θ),    (2.7)

and condition (d) of regularity gives

        ∫_M g(θ) (d/dθ)f(x; θ) dx = 0.    (2.8)

Subtracting equation (2.8) from (2.7) gives

        ∫_M [S(x) − g(θ)] ((df(x; θ)/dθ)/f(x; θ)) f(x; θ) dx = g′(θ),

and from the Cauchy-Schwarz inequality we obtain

        [g′(θ)]² ≤ ∫_M [S(x) − g(θ)]² f(x; θ) dx · ∫_M ((df(x; θ)/dθ)/f(x; θ))² f(x; θ) dx,    (2.9)

which is equivalent to formula (2.6), since the second integral on the right hand
side equals J.


If we insert S = θ̂ and g(θ) = θ into formula (2.6), we obtain
        var θ̂ ≥ 1/J,    (2.10)
where θ̂ is an unbiased estimator of the single parameter θ. Inequality (2.10)
is called the Rao-Cramér inequality and the term 1/J is called the Rao-Cramér
(lower) bound.
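A quick simulation illustrates the bound. For X_1, . . . , X_m i.i.d. N(θ, 1) the Fisher information of the sample is m (by the additivity of Theorem 2.2), and the sample mean is an unbiased estimator with variance exactly 1/m, so it attains the Rao-Cramér bound. The sketch below (Python, standard library only; the constants are illustrative) estimates that variance empirically.

```python
import random

def sample_mean_variance(theta=2.0, m=10, trials=20000, seed=1):
    # Empirical variance of the unbiased estimator "sample mean" of theta
    # for X_i ~ N(theta, 1); the bound (2.10) with J = m equals 1/m.
    rng = random.Random(seed)
    means = [sum(rng.gauss(theta, 1.0) for _ in range(m)) / m
             for _ in range(trials)]
    avg = sum(means) / trials
    return sum((v - avg) ** 2 for v in means) / (trials - 1)

var_hat = sample_mean_variance()
print(var_hat)  # close to the Rao-Cramer bound 1/m = 0.1
```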
A complete knowledge of the distribution f(x; θ) is needed for evaluation of
the Fisher information (2.2). Even if the distribution is available, the calculation
of the Fisher information is often substantially complicated. However, for a
model with a single parameter we can use the lower bound J₂ of the Fisher
information. Inserting S = X and g(θ) = EX into formula (2.6) gives
        J₂ = (1/var X) (dEX/dθ)² ≤ J,    (2.11)
where X is a random variable with probability density f(x; θ). In contrast
to the Fisher information, the value of the lower bound is based only on the first
two moments of the random variable X.


The Fisher information (2.2) can be equal to its lower bound (2.11) under
certain conditions. Equality in the Cauchy-Schwarz inequality (2.9) appears if
S(x) − g(θ) ≡ 0 or if there exists a function K(θ) independent of x such that
        (df(x; θ)/dθ)/f(x; θ) = K(θ)[S(x) − g(θ)],  x ∈ M.
Let us assume that S(x) − g(θ) ≠ 0. Then we have
        d ln f(x; θ)/dθ = K(θ)S(x) − g(θ)K(θ).    (2.12)
Denote by Q(θ) and R(θ) functions satisfying
        Q′(θ) = K(θ),  R′(θ) = g(θ)K(θ).    (2.13)
Then the solution of (2.12) is
        ln f(x; θ) = Q(θ)S(x) − R(θ) + H(x),
where H(x) is independent of θ. Denote u(x) = e^{H(x)} and v(θ) = e^{−R(θ)}. Then
        f(x; θ) = u(x)v(θ)e^{Q(θ)S(x)}.    (2.14)
This density belongs to the exponential family with one parameter. Only
in this case can the Rao-Cramér inequality (2.11) become an equality. Note that
the functions Q(θ) and R(θ) must fulfil the conditions mentioned above.
Theorem 2.3 can be generalized to the case of a parameter vector θ.

Theorem 2.4. Let F = {f(x; θ) : θ ∈ Θ ⊆ R^n} be a regular system of den-
sities. Let S = S(X) = (S_1, . . . , S_k)^T be an unbiased estimator of the
parametric function g(θ) = (g_1(θ), . . . , g_k(θ))^T, where ES_i² < ∞ for every
i = 1, . . . , k and every θ ∈ Θ. Let the partial derivatives
        g′_ij(θ) = ∂g_i(θ)/∂θ_j,  i = 1, . . . , k;  j = 1, . . . , n,
exist and let
        ∫_M S_i(x) (∂f(x; θ)/∂θ_j) dx = g′_ij(θ),  i = 1, . . . , k;  j = 1, . . . , n.
Denote H = (g′_ij(θ))_{i=1,...,k; j=1,...,n}. Then it holds
        var S ≥ HJ⁻¹H^T.    (2.15)


Proof of the theorem can be found in refs. [2], [29]. The inequality A ≥ B for
two symmetric matrices A and B means that the difference A − B is a positive-
semidefinite matrix. If n = k > 1 and S = θ̂ = (θ̂_1, . . . , θ̂_n)^T is an unbiased
estimate of g(θ) = (θ_1, . . . , θ_n)^T, then H = I_n, where I_n is the identity matrix
of size n, and formula (2.15) takes the form
        var θ̂ ≥ J⁻¹,    (2.16)
where var θ̂ is the covariance matrix of the unbiased estimate θ̂ = (θ̂_1, . . . , θ̂_n)^T.
The matrix inequality (2.15), resp. (2.16), is again called the Rao-Cramér inequality.

2.2 Maximum likelihood estimation


Definition 2.3. Let X = (X_1, . . . , X_m)^T be a random sample with joint regu-
lar probability density f(x, θ), where x ∈ R^m, the parameter vector θ ∈ Θ ⊆ R^n
and Θ is convex. The function
        L(θ; x) = f(x, θ),
as a function of θ ∈ Θ for fixed x, is called the likelihood function, and the function
        l(θ; x) = ln L(θ; x) = ln f(x, θ)
is called the log-likelihood function.

Definition 2.4. Let X = (X_1, . . . , X_m)^T be a random sample with joint
regular probability density f(x, θ), where x ∈ R^m, θ ∈ Θ ⊆ R^n and Θ
is convex. An estimate θ̂_MLE of the parameter vector θ is called the maximum
likelihood estimate if it maximizes the likelihood function L(θ; x) for given
X = x, i.e. it holds
        L(θ̂_MLE; X) ≥ L(θ; X)    (2.17)
for every θ ∈ Θ.

    Note that the logarithm is a monotonically increasing function, thus for the max-
imum likelihood estimate θ̂_MLE it holds
        l(θ̂_MLE; X) ≥ l(θ; X)    (2.18)
for every θ ∈ Θ.


The density f(x, θ) is assumed to be regular, thus the existence of the
first partial derivatives with respect to every component of the parameter
vector θ is ensured. The maximum likelihood estimate θ̂_MLE of the parameter vector
θ can therefore be obtained as a solution of the system of equations
        ∂L(θ, X)/∂θ_i = 0,  i = 1, . . . , n,    (2.19)
or
        ∂l(θ, X)/∂θ_i = 0,  i = 1, . . . , n.    (2.20)
System (2.20) can be rewritten in the form
        U(θ) = 0    (2.21)
due to Definition 2.2 of the score vector U(θ). It can be shown (see ref. [1]) that
if the equation
        ∫_{R^m} ∂²f(x, θ)/∂θ_i∂θ_j dx = 0,  i, j = 1, . . . , n,    (2.22)
is satisfied for every θ ∈ Θ, then for U′(θ) (the matrix of second partial deriva-
tives of l(θ; X) with respect to every component of the parameter vector θ) it holds
        EU′(θ) = −J,
where J is the Fisher information matrix of the probability density f(x, θ). This
matrix is positive definite due to Definition 2.1, thus −J is negative definite.
Hence, the function l(θ, X) is concave on the convex set Θ, and the solution θ̂_MLE of
system (2.20) exists, is unique and maximizes the likelihood functions L(θ; X)
and l(θ; X), see ref. [8].

Iterative methods
The likelihood equations U(θ) = 0 are generally nonlinear with respect to
the unknown parameter θ, so we solve them with iterative methods. The
most common methods are the Newton-Raphson method
        θ̂_k = θ̂_{k−1} − [U′(θ̂_{k−1})]⁻¹ U(θ̂_{k−1}),  k = 1, 2, . . . ,    (2.23)
and Fisher's method of scoring
        θ̂_k = θ̂_{k−1} + [J(θ̂_{k−1})]⁻¹ U(θ̂_{k−1}),  k = 1, 2, . . . ,    (2.24)
where the matrix U′(θ) is replaced by its mean EU′(θ) = −J(θ). Both of
these methods require an initial approximation θ̂_0.

3 Parameter estimation in stochastic models of dissolution

The theory of the Fisher information is applied to the stochastic models of disso-
lution in this chapter. A method for maximum likelihood estimation of their
parameters is proposed and several examples of the Fisher information, resp.
the Rao-Cramér bounds, are presented.

3.1 Fisher information in theory of dissolution


In this Section we present methods for evaluation of the Fisher information
for the stochastic models of dissolution presented in Section 1.2. It can be easily
verified that their probability densities (1.21) and (1.36) are regular.

3.1.1 Single-parameter model


Formula (2.10), describing the Rao-Cramér inequality, suggests that the larger
the Fisher information is, the better an estimate of the parameter can be achieved. In
other words, the most precise estimation of the parameter can be obtained
from the data measured at the time instant with the highest Fisher informa-
tion. That time instant we call the optimal time and denote it topt.

Gaussian model
At first we study the stochastic model described by equation (1.22) with Gaussian
probability distribution N(F(t), s²(t)) of the dissolved fraction data, where
F(t) = F(t; a) is the dissolution profile and s²(t) = s²(t; a) is a given variance.
Both contain only a single unknown parameter a characterizing the scale
of dissolution. The probability density f(x, t) = f(x, t; a) is given by (1.21) and


its differentiation with respect to the parameter a has the form

    df(x, t)/da = e^{−(x−F(t))²/(2s²(t))} (d/da)(2πs²(t))^{−1/2} + f(x, t) (d/da)(−(x − F(t))²/(2s²(t)))
                = f(x, t) (−(1/2)(ds²(t)/da)/s²(t) − (d/da)((x − F(t))²/(2s²(t)))),

hence

    (df(x, t)/da)/f(x, t) = −(1/2)(ds²(t)/da)/s²(t)
                            + (4(x − F(t))(dF(t)/da)s²(t) + 2(ds²(t)/da)(x − F(t))²)/(4s⁴(t))
                          = Ax² + Bx + C,

where

    A = (ds²(t)/da)/(2s⁴(t)),
    B = (dF(t)/da)/s²(t) − F(t)(ds²(t)/da)/s⁴(t),
    C = F²(t)(ds²(t)/da)/(2s⁴(t)) − F(t)(dF(t)/da)/s²(t) − (1/2)(ds²(t)/da)/s²(t),

and thus

    (df(x, t)/da)²/f(x, t) = (A²x⁴ + 2ABx³ + (2AC + B²)x² + 2BCx + C²) f(x, t).    (3.1)

Inserting (3.1) into (2.2) gives, for θ = a and f(x, t) given by (1.21), the equation

    J(t) = A² ∫ x⁴ f(x, t) dx + 2AB ∫ x³ f(x, t) dx
           + (2AC + B²) ∫ x² f(x, t) dx + 2BC ∫ x f(x, t) dx + C²
         = A² (F⁴(t) + 6s²(t)F²(t) + 3s⁴(t)) + 2AB (F³(t) + 3s²(t)F(t))
           + (2AC + B²)(F²(t) + s²(t)) + 2BC F(t) + C²,    (3.2)

where the last equation follows from the evaluation of the moments of the Gaussian distri-
bution with mean F(t) and variance s²(t), see ref. [22]. After a tedious
calculation we obtain the formula describing the Fisher information,

    J(t) = (1/2) ((ds²(t)/da)/s²(t))² + (dF(t)/da)²/s²(t).    (3.3)
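Formula (3.3) is easy to evaluate numerically even when the derivatives with respect to a are inconvenient analytically. The following Python sketch (not from the thesis, whose procedures are in Matlab; constants are illustrative) implements (3.3) with central finite differences and checks it on the homogenous profile F(t) = 1 − e^{−at} of (1.7) with constant variance s²(t) = r, for which direct substitution gives J(t) = t²e^{−2at}/r.

```python
import math

def fisher_information_gaussian(F, s2, a, t, h=1e-6):
    # Formula (3.3): J(t) = (1/2)*((ds2/da)/s2)^2 + (dF/da)^2 / s2,
    # with the a-derivatives taken by central differences.
    dFda = (F(t, a + h) - F(t, a - h)) / (2.0 * h)
    ds2da = (s2(t, a + h) - s2(t, a - h)) / (2.0 * h)
    v = s2(t, a)
    return 0.5 * (ds2da / v) ** 2 + dFda ** 2 / v

# Homogenous profile (1.7) with constant variance r (assumed values).
r = 0.015
F = lambda t, a: 1.0 - math.exp(-a * t)
s2 = lambda t, a: r

a, t = 1.0, 0.5
closed_form = t ** 2 * math.exp(-2.0 * a * t) / r   # (dF/da)^2 / r
print(fisher_information_gaussian(F, s2, a, t), closed_form)
```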


After inserting specific functions F(t) and s²(t) into (3.3) we can find the
optimal time topt, where
        topt = argmax_{t∈(0,∞)} J(t).    (3.4)
We have seen that the computation of the Fisher information J(t) is complicated;
it is therefore not written out in the following examples, but the approach is similar.

Log-normal model
Let us investigate the stochastic model of dissolution described by formula (1.42)
with log-normal distribution logN(μ(t), σ²(t)) of the dissolved fraction data.
The parametric functions μ(t) = μ(t; a) and σ²(t) = σ²(t; a) are defined by (1.46)
and (1.47), and contain only a single unknown parameter a characterizing the
scale of dissolution. Inserting the probability density (1.36) into formula (2.2),
with θ = a, gives with a similar approach as in the previous case
        J(t) = (1/2) ((dσ²(t)/da)/σ²(t))² + (dμ(t)/da)²/σ²(t).    (3.5)
After inserting specific functions F(t) and s²(t) into (1.46), (1.47) and conse-
quently into (3.5) we can find the optimal time topt as a solution of (3.4). One
can see that formula (3.3) with the parameters of the Gaussian distribution (1.20)
is similar to formula (3.5) with the parameters of the log-normal distribution (1.43)
given by (1.46) and (1.47).

Multiple measurements
If we can measure at m ≥ 2 time instants given by the vector t = (t_1, . . . , t_m)^T,
where t_i > 0 for i = 1, . . . , m, and the measurements can be considered
mutually independent, then the Fisher information J(t) has, due to
Theorem 2.2, the form
        J(t) = Σ_{i=1}^m J(t_i).    (3.6)
Thus if the optimal time topt exists, and an index i ∈ {1, . . . , m} such that
t_i ≠ topt exists, then it holds
        J(t) = Σ_{i=1}^m J(t_i) < Σ_{i=1}^m J(topt) = m J(topt).
It means that in this case it is better to measure m data at the optimal
time topt than to measure the data at m different time instants spread over an
interval, as is usually done in dissolution experiments, see refs. [19], [26], [34].
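This comparison can be made concrete with a small Python sketch (illustrative constants, not thesis data). For the homogenous Gaussian model with constant variance r, substitution into (3.3) gives J(t) = t²e^{−2at}/r with topt = 1/a; the sketch evaluates (3.6) for a spread schedule and for m repeated measurements at topt.

```python
import math

def J(t, a=1.0, r=0.015):
    # Fisher information of the homogenous Gaussian model with constant
    # variance r, obtained from (3.3): J(t) = t^2 * exp(-2*a*t) / r.
    return t ** 2 * math.exp(-2.0 * a * t) / r

a = 1.0
t_opt = 1.0 / a                               # argmax of J(t)
schedule = [0.25, 0.5, 1.0, 2.0, 4.0]         # times spread over an interval
m = len(schedule)

info_spread = sum(J(t, a) for t in schedule)  # formula (3.6)
info_optimal = m * J(t_opt, a)                # m measurements, all at t_opt
print(info_spread, info_optimal)              # info_spread < info_optimal
```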


Lower bound of the Fisher information
The lower bound (2.11) of the Fisher information is invariant with respect to the
probability distribution, hence it exists and is the same for the stochastic models
C(t) given by (1.22) and (1.35). Both models satisfy the equations
        EC(t) = F(t) = F(t; a),  var C(t) = s²(t) = s²(t; a),
where a is the single parameter of dissolution. Hence, from (2.11), by applying
θ = a, we obtain
        J₂(t) = (dF(t)/da)²/s²(t).    (3.7)
After inserting specific functions F(t) and s²(t) into (3.7) we get an approx-
imation of the optimal time topt as the time instant of the maximum of J₂(t).
    The lower bound J₂(t) of the Fisher information coincides with the Fisher infor-
mation (3.3) if (d/da)s²(t) ≡ 0, which means that the variance s²(t) is independent
of the single parameter a. In this case the density (1.21) has the exponential family
form (2.14) and satisfies the required conditions (2.13).

3.1.2 Two-parameter model


A different approach has to be used if the model contains two unknown
parameters. In this case we deal with the Fisher information matrix J(t) given
by definition (2.1). We assume that the data can be measured at m ≥ 1 time
instants given by the vector t = (t_1, . . . , t_m)^T, t_i > 0 for i = 1, . . . , m, and the
measurements are mutually independent.
    Formula (2.16) suggests that the parameter vector θ = (a, b)^T has an es-
timate with minimal variance if the data are measured at such time in-
stants when the diagonal components of the inverse Fisher information ma-
trix J⁻¹(t) reach their minima. These time instants we call optimal times
and denote them topt.a for parameter a, resp. topt.b for parameter b. The
diagonal elements of J⁻¹(t) we again call Rao-Cramér bounds, in reference
to the definition of the Rao-Cramér bound for a single-parameter model given by
(2.10). In contrast to the single-parameter case, we need to find them directly.
    Firstly we need to investigate the Fisher information matrix in the case that
we can make only a single measurement at a single time instant (m = 1). An
estimate of both parameters based on such a measurement may not exist, but
the results of the following Sections are needed for further calculations.


Single measurement: Gaussian model
Let us consider the Gaussian stochastic model (1.22) with mean F(t) = F(t; a, b)
and variance s²(t) = s²(t; a, b) containing two parameters a and b. At first
we calculate the Fisher information matrix J(t), where each of its components
J_ij(t) is given by formula (2.1), θ = (a, b)^T, and the probability density f(x; θ) is
defined by (1.21). Denoting F′_a(t) = ∂F(t)/∂a and s²′_a(t) = ∂s²(t)/∂a (and
analogously for b), a similar approach as in the one-parameter case gives the matrix

    J(t) = ( (1/2)(s²′_a(t)/s²(t))² + (F′_a(t))²/s²(t)            (1/2) s²′_a(t)s²′_b(t)/s⁴(t) + F′_a(t)F′_b(t)/s²(t)
             (1/2) s²′_a(t)s²′_b(t)/s⁴(t) + F′_a(t)F′_b(t)/s²(t)  (1/2)(s²′_b(t)/s²(t))² + (F′_b(t))²/s²(t) ).    (3.8)

The determinant of the matrix is

    |J(t)| = [s²′_a(t)F′_b(t) − s²′_b(t)F′_a(t)]² / (2s⁶(t)) ≥ 0.    (3.9)

If the determinant is nonzero, the inverse matrix J⁻¹(t) exists and has the form

    J⁻¹(t) = (1/|J(t)|) ( (1/2)(s²′_b(t)/s²(t))² + (F′_b(t))²/s²(t)              −(1/2) s²′_a(t)s²′_b(t)/s⁴(t) − F′_a(t)F′_b(t)/s²(t)
                          −(1/2) s²′_a(t)s²′_b(t)/s⁴(t) − F′_a(t)F′_b(t)/s²(t)    (1/2)(s²′_a(t)/s²(t))² + (F′_a(t))²/s²(t) ).    (3.10)

From inequality (2.16) we obtain the two optimal times topt.a and topt.b as the
time instants of the minima of the diagonal elements of (3.10),

    topt.a = argmin_{t∈(0,∞)} J⁻¹_11(t),  topt.b = argmin_{t∈(0,∞)} J⁻¹_22(t).    (3.11)

At the time instant topt.a we should obtain the best estimate of parameter a,
but not the best estimate of parameter b, and vice versa.
    The determinant (3.9) of the Fisher information matrix (3.8) can be zero, espe-
cially if the variance s²(t) is parameter independent. The case of a singular
Fisher information matrix represents a significant complication for the the-
ory of the Rao-Cramér lower bound and is usually handled by resorting to
the pseudoinverse of the Fisher matrix, see ref. [29]. It has been shown
there that the Rao-Cramér lower bound does not exist for estimation prob-
lems with a singular Fisher information matrix, and thus we cannot find the
optimal times.
    The dissolution profile F(t) is given, thus the singularity of the Fisher information
matrix (3.8) depends on the form of the function s²(t). For example, the deter-
minant is zero if the variance of the measured data has the form
        s²(t) = Σ_{i=0}^N α_i F^i(t),    (3.12)


where α_i ∈ R for i = 0, . . . , N. It holds
        s²′_a(t) = F′_a(t)P(t),  s²′_b(t) = F′_b(t)P(t),
where
        P(t) = Σ_{i=1}^N i α_i F^{i−1}(t),    (3.13)
thus
        s²′_a(t)F′_b(t) − s²′_b(t)F′_a(t) ≡ 0,    (3.14)
which gives |J(t)| ≡ 0 for the variance s²(t) given by (3.12). Hence, we are not
able to find the optimal times especially if the variance takes the form
        s²(t) = pF(t)(1 − F(t)) + qF(t) + r,    (3.15)
as proposed in Section 1.2.1.
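The singularity can be verified numerically. The following Python sketch (illustrative parameter values; the Weibull profile F(t) = 1 − e^{−at^b} is (1.8)) assembles the matrix (3.8) for a variance of the form (3.15) using finite-difference partials in a and b; the determinant comes out numerically zero, as predicted by (3.14).

```python
import math

def F(t, a, b):
    # Weibull dissolution profile (1.8): F(t) = 1 - exp(-a*t^b).
    return 1.0 - math.exp(-a * t ** b)

def s2(t, a, b, p=0.1, q=0.02, r=0.01):
    # Variance of the form (3.15): p*F*(1-F) + q*F + r.
    f = F(t, a, b)
    return p * f * (1.0 - f) + q * f + r

def fisher_matrix(t, a, b, h=1e-6):
    # Matrix (3.8), with the partial derivatives in a and b approximated
    # by central differences.
    Fa = (F(t, a + h, b) - F(t, a - h, b)) / (2.0 * h)
    Fb = (F(t, a, b + h) - F(t, a, b - h)) / (2.0 * h)
    s2a = (s2(t, a + h, b) - s2(t, a - h, b)) / (2.0 * h)
    s2b = (s2(t, a, b + h) - s2(t, a, b - h)) / (2.0 * h)
    v = s2(t, a, b)
    J11 = 0.5 * (s2a / v) ** 2 + Fa ** 2 / v
    J22 = 0.5 * (s2b / v) ** 2 + Fb ** 2 / v
    J12 = 0.5 * s2a * s2b / v ** 2 + Fa * Fb / v
    return J11, J12, J22

J11, J12, J22 = fisher_matrix(t=1.5, a=1.0, b=2.0)
det = J11 * J22 - J12 ** 2
print(det)  # numerically zero: the matrix (3.8) is singular
```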

Single measurement: Log-normal model


For the log-normal stochastic model (1.43) with parameters μ(t) = μ(t; a, b) and
σ²(t) = σ²(t; a, b) defined by (1.46) and (1.47), the approach is similar. The Fisher
information matrix for the parameter vector θ = (a, b)^T and probability density
f(x; θ) given by (1.36) has the form

    J(t) = ( (1/2)(σ²′_a(t)/σ²(t))² + (μ′_a(t))²/σ²(t)            (1/2) σ²′_a(t)σ²′_b(t)/σ⁴(t) + μ′_a(t)μ′_b(t)/σ²(t)
             (1/2) σ²′_a(t)σ²′_b(t)/σ⁴(t) + μ′_a(t)μ′_b(t)/σ²(t)  (1/2)(σ²′_b(t)/σ²(t))² + (μ′_b(t))²/σ²(t) )    (3.16)

with determinant

    |J(t)| = [σ²′_a(t)μ′_b(t) − σ²′_b(t)μ′_a(t)]² / (2σ⁶(t)) ≥ 0.    (3.17)

If the determinant is nonzero, the inverse Fisher information matrix has the form

    J⁻¹(t) = (1/|J(t)|) ( (1/2)(σ²′_b(t)/σ²(t))² + (μ′_b(t))²/σ²(t)              −(1/2) σ²′_a(t)σ²′_b(t)/σ⁴(t) − μ′_a(t)μ′_b(t)/σ²(t)
                          −(1/2) σ²′_a(t)σ²′_b(t)/σ⁴(t) − μ′_a(t)μ′_b(t)/σ²(t)    (1/2)(σ²′_a(t)/σ²(t))² + (μ′_a(t))²/σ²(t) ).    (3.18)

The optimal times topt.a and topt.b correspond to the argument minima of the
diagonal functions of the matrix J⁻¹(t) as given by (3.11). Again, if the Fisher
information matrix is singular, we cannot find the Rao-Cramér bounds nor


the optimal times. Similarly as in the previous case it can be shown that the
determinant (3.17) is zero if
        σ²(t) = Σ_{i=0}^N α_i μ^i(t),    (3.19)
where α_i ∈ R. The functions μ(t) and σ²(t) depend on the functions F(t) and
s²(t), hence the value of the determinant (3.17) of the two-parametric log-normal
model depends on their form. We show that the determinant is zero if the variance
s²(t) has the form (3.12). Differentiation of (1.46) and (1.47) with respect to the
parameters a and b gives

    σ²′_a(t) = s²′_a(t)P(t) − F′_a(t)Q(t),    σ²′_b(t) = s²′_b(t)P(t) − F′_b(t)Q(t),
    μ′_a(t) = F′_a(t)R(t) − (1/2)s²′_a(t)P(t),  μ′_b(t) = F′_b(t)R(t) − (1/2)s²′_b(t)P(t),

where
        P(t) = 1/(F²(t) + s²(t)),  Q(t) = 2s²(t)/(F(t)(F²(t) + s²(t))),
        R(t) = (1/2)Q(t) + 1/F(t).

It holds

    σ²′_a(t)μ′_b(t) = F′_b(t)s²′_a(t)P(t)R(t) − F′_a(t)F′_b(t)Q(t)R(t)
                      − (1/2)s²′_a(t)s²′_b(t)P²(t) + (1/2)F′_a(t)s²′_b(t)P(t)Q(t),
    σ²′_b(t)μ′_a(t) = F′_a(t)s²′_b(t)P(t)R(t) − F′_a(t)F′_b(t)Q(t)R(t)
                      − (1/2)s²′_a(t)s²′_b(t)P²(t) + (1/2)F′_b(t)s²′_a(t)P(t)Q(t),

and

    σ²′_a(t)μ′_b(t) − σ²′_b(t)μ′_a(t) = [F′_b(t)s²′_a(t) − F′_a(t)s²′_b(t)] (P(t)R(t) − (1/2)P(t)Q(t)).    (3.20)

For a variance s²(t) of the form (3.12), relation (3.14) holds, hence |J(t)| ≡ 0 for
the Fisher information matrix (3.16).


Multiple measurements
If the measurements can be taken at m ≥ 2 time instants given by the vector
t = (t_1, . . . , t_m)^T, t_i > 0 for i = 1, . . . , m, and the dissolution data are mu-
tually independent, then the Fisher information matrix of the parameter vector
θ = (a, b)^T takes, due to Theorem 2.2, the form
        J(t) = Σ_{i=1}^m J(t_i),    (3.21)
where J(t_i) is the Fisher information matrix of a single measurement studied
previously. If the determinant of the matrix (3.21) is nonzero, then we can find
the inverse matrix J⁻¹(t) and obtain the optimal times as the arguments of the
minima of its diagonal elements,
        topt.a = argmin_{t∈R^m_+} J⁻¹_11(t),  topt.b = argmin_{t∈R^m_+} J⁻¹_22(t).    (3.22)
    For example, let us have m = 2 independent measurements, t = (t_1, t_2)^T.
Let J_jk(t) be the component of the Fisher information matrix J(t) in the j-th
column and k-th row, j, k ∈ {1, 2}. Then (3.21) has the form
        J(t) = ( J_11(t_1) + J_11(t_2)    J_12(t_1) + J_12(t_2)
                 J_12(t_1) + J_12(t_2)    J_22(t_1) + J_22(t_2) )    (3.23)
and for its determinant it holds
        |J(t)| = (J_11(t_1) + J_11(t_2))(J_22(t_1) + J_22(t_2)) − (J_12(t_1) + J_12(t_2))²
               = [J_11(t_1)J_22(t_1) − J²_12(t_1)] + [J_11(t_2)J_22(t_2) − J²_12(t_2)]
                 + J_11(t_1)J_22(t_2) + J_11(t_2)J_22(t_1) − 2J_12(t_1)J_12(t_2)
               = |J(t_1)| + |J(t_2)|
                 + J_11(t_1)J_22(t_2) + J_11(t_2)J_22(t_1) − 2J_12(t_1)J_12(t_2).    (3.24)
If the determinant is not zero, then the inverse Fisher information matrix
has the form
        J⁻¹(t) = (1/|J(t)|) ( J_22(t_1) + J_22(t_2)      −J_12(t_1) − J_12(t_2)
                              −J_12(t_1) − J_12(t_2)      J_11(t_1) + J_11(t_2) ).    (3.25)
It can be easily verified that for its diagonal elements it holds
        J⁻¹_ii(t_1, t_2) = J⁻¹_ii(t_2, t_1),  i = 1, 2,

thus if an optimal time (t⁽¹⁾_opt, t⁽²⁾_opt)^T of any of the parameters exists, then
(t⁽²⁾_opt, t⁽¹⁾_opt)^T is an optimal time too.
    If the Fisher information matrix J(t) for a single measurement is singular,
then the Fisher information matrix (3.23) does not necessarily need to be
singular too. We show it for the Gaussian model (1.22) with variance s²(t) given
by (3.12). For the log-normal model (1.43) the approach is similar. The components
of the Fisher information matrix J(t) have the form
        J_11(t) = (1/2)(F′_a(t)P(t)/s²(t))² + (F′_a(t))²/s²(t),
        J_22(t) = (1/2)(F′_b(t)P(t)/s²(t))² + (F′_b(t))²/s²(t),
        J_12(t) = (1/2)F′_a(t)F′_b(t)P²(t)/s⁴(t) + F′_a(t)F′_b(t)/s²(t),
where F(t) is a two-parametric dissolution profile, s²(t) is given by (3.12)
and P(t) is given by (3.13). In the previous Sections we showed that the Fisher
information matrix J(t) of a single measurement has determinant equal to zero
for the Gaussian model (1.22) with variance (3.12). Hence, from (3.24) we obtain
after a short calculation
        |J(t)| = ([F′_a(t_1)F′_b(t_2) − F′_a(t_2)F′_b(t_1)]² / (4s⁴(t_1)s⁴(t_2))) (P²(t_1) + 2s²(t_1))(P²(t_2) + 2s²(t_2)).
Here we can see that the Fisher information matrix (3.23) of the Gaussian model
(1.22) with variance (3.12) is singular if
        F′_a(t_1)F′_b(t_2) − F′_a(t_2)F′_b(t_1) ≡ 0.    (3.26)
Furthermore, it can be seen that the matrix (3.23) is singular for t_1 = t_2. In
light of these facts it seems that the singularity of the Fisher information matrix
points to our inability to estimate two parameters from measurements
made at a single time instant. As we have seen, adding another time instant
can solve the problem.
    It can be easily verified that (3.26) does not occur for the two-parametric
dissolution profiles F(t) studied in the first chapter.
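This repair by a second time instant can be checked directly. The Python sketch below (illustrative Weibull profile (1.8) and a variance of the polynomial form (3.12); the constants are not from the thesis) sums two single-measurement matrices (3.8) as in (3.23): the determinant is numerically zero for t_1 = t_2 but strictly positive for two distinct times.

```python
import math

def F(t, a, b):
    # Weibull dissolution profile (1.8).
    return 1.0 - math.exp(-a * t ** b)

def s2(t, a, b, p=0.1, q=0.02, r=0.01):
    # Variance of polynomial form (3.12): p*F*(1-F) + q*F + r.
    f = F(t, a, b)
    return p * f * (1.0 - f) + q * f + r

def fisher_matrix(t, a, b, h=1e-6):
    # Single-measurement matrix (3.8), partials by central differences.
    Fa = (F(t, a + h, b) - F(t, a - h, b)) / (2.0 * h)
    Fb = (F(t, a, b + h) - F(t, a, b - h)) / (2.0 * h)
    s2a = (s2(t, a + h, b) - s2(t, a - h, b)) / (2.0 * h)
    s2b = (s2(t, a, b + h) - s2(t, a, b - h)) / (2.0 * h)
    v = s2(t, a, b)
    return (0.5 * (s2a / v) ** 2 + Fa ** 2 / v,
            0.5 * s2a * s2b / v ** 2 + Fa * Fb / v,
            0.5 * (s2b / v) ** 2 + Fb ** 2 / v)

def det_two_measurements(t1, t2, a=1.0, b=2.0):
    # Determinant of the summed matrix (3.23).
    A11, A12, A22 = fisher_matrix(t1, a, b)
    B11, B12, B22 = fisher_matrix(t2, a, b)
    return (A11 + B11) * (A22 + B22) - (A12 + B12) ** 2

print(det_two_measurements(0.8, 0.8))  # ~0: singular for t1 = t2
print(det_two_measurements(0.8, 1.6))  # > 0: two distinct times suffice
```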


3.2 Parameters of dissolution models and their maximum likelihood estimation
The probability distribution of the dissolution data is assumed to be known, thus
we can apply the maximum likelihood method for estimation of the parameters.
The vector C = (C_1, . . . , C_m)^T contains m ≥ 1 mutually independent dissolved
fraction data measured at the time instants given by the vector t = (t_1, . . . , t_m)^T;
we do not exclude t_i = t_j for i ≠ j. We assume that the parametric forms
of the dissolution profile F(t) = F(t; θ) and the variance s²(t) = s²(t; θ) are
known. The parameter vector can be, for example, θ = a, resp. θ = (a, b)^T,
for the models investigated in Section 3.1. In this Section, the method of maximum
likelihood estimation is applied to the stochastic models of dissolution with
Gaussian and log-normal probability distributions studied in Section 1.2. It
can be easily verified that their probability densities (1.21) and (1.36) satisfy
equation (2.22).

3.2.1 Gaussian model


Let us assume the dissolution is described by the stochastic model (1.22) with
Gaussian distribution N(F(t), s²(t)), where F(t) = F(t; θ) and s²(t) = s²(t; θ)
depend on the vector parameter θ ∈ Θ ⊆ R^n. The log-likelihood function
l(θ) = l(θ, C) has the form
        l(θ) = ln Π_{i=1}^m f_i(θ) = Σ_{i=1}^m ln f_i(θ),    (3.27)
where m is the number of observations and
        f_i(θ) = (2πs²(t_i))^{−1/2} exp(−(C_i − F(t_i))²/(2s²(t_i))).    (3.28)
It holds
        ln f_i(θ) = −(1/2) ln(2πs²(t_i)) − (C_i − F(t_i))²/(2s²(t_i)),
hence
        ∂ ln f_i(θ)/∂θ_j = −(1/2)(∂s²(t_i)/∂θ_j)/s²(t_i)
            + (2(∂F(t_i)/∂θ_j)s²(t_i)(C_i − F(t_i)) + (∂s²(t_i)/∂θ_j)(C_i − F(t_i))²)/(2s⁴(t_i))
            = (1/2)((∂s²(t_i)/∂θ_j)/s²(t_i)) ((C_i − F(t_i))²/s²(t_i) − 1)
              + (∂F(t_i)/∂θ_j)(C_i − F(t_i))/s²(t_i)


for j = 1, . . . , n. Thus, the system of likelihood equations (2.20) takes the form

        (1/2) Σ_{i=1}^m ((∂s²(t_i)/∂θ_j)/s²(t_i)) ((C_i − F(t_i))²/s²(t_i) − 1)
        + Σ_{i=1}^m (∂F(t_i)/∂θ_j)(C_i − F(t_i))/s²(t_i) = 0    (3.29)

for j = 1, . . . , n, where n is the number of parameters. The term on the left hand
side of (3.29) is equal to the component U_j of the score vector U = (U_1, . . . , U_n)^T.
The Newton-Raphson (2.23) or Fisher's scoring iterative method (2.24) can now be
applied. The Fisher information matrix used in (2.24) has, due to Theorem 2.2,
the form
        J(θ) = Σ_{i=1}^m J(t_i; θ),  t_i ∈ t,    (3.30)
where J(t_i; θ) = J(t_i), i = 1, . . . , m, are the Fisher information matrices of the pa-
rameter vector θ in the case of a single measurement made at the time instant
t_i, and m is the number of observations.
    Note that if the variance s²(t) is constant, then the solution of the maximum likeli-
hood equations (3.29) coincides with the solution obtained by the least-squares
estimation method. To be more specific, the solution of the least-squares minimiza-
tion problem
        θ̂ = argmin_θ Σ_{i=1}^m (C_i − F(t_i))²    (3.31)
leads to the system of equations
        Σ_{i=1}^m (∂F(t_i)/∂θ_j)(C_i − F(t_i)) = 0,  j = 1, . . . , n,    (3.32)
where m is the number of observations and n is the number of parameters. If we
insert a constant variance s²(t) ≡ c > 0 into equation (3.29) we obtain (3.32).
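The equivalence can be demonstrated on simulated data. The Python sketch below (illustrative constants and simulated noise, not thesis data) fits the single parameter a of the homogenous profile F(t) = 1 − e^{−at} from (1.7) by solving the score equation (3.32) with a simple bisection; with a constant variance the same a also solves the likelihood equations (3.29).

```python
import math, random

def F(t, a):
    # Homogenous dissolution profile (1.7): F(t) = 1 - exp(-a*t).
    return 1.0 - math.exp(-a * t)

def score(a, times, data):
    # Left-hand side of (3.32) for the single parameter a:
    # sum_i (C_i - F(t_i; a)) * dF(t_i; a)/da, with dF/da = t*exp(-a*t).
    return sum((c - F(t, a)) * t * math.exp(-a * t)
               for t, c in zip(times, data))

def fit_a(times, data, lo=0.1, hi=5.0, iters=60):
    # Solve score(a) = 0 by bisection; the score is positive for
    # too-small a and negative for too-large a on this bracket.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if score(lo, times, data) * score(mid, times, data) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Simulated dissolved-fraction data: true a = 1.2, small Gaussian noise.
rng = random.Random(0)
times = [0.25, 0.5, 1.0, 1.5, 2.0, 3.0]
data = [F(t, 1.2) + rng.gauss(0.0, 0.01) for t in times]
a_hat = fit_a(times, data)
print(a_hat)  # close to the true value 1.2
```

Bisection is used here only for robustness of the illustration; the Newton-Raphson or scoring iterations of Section 2.2 apply in exactly the same way.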

3.2.2 Log-normal model


In contrast to the previous part, we assume that the dissolution is described
by a stochastic model with log-normal distribution logN(μ(t), σ²(t)), where
μ(t) = μ(t; θ) and σ²(t) = σ²(t; θ) are known functions described by equa-
tions (1.46) and (1.47); θ ∈ Θ ⊆ R^n is the vector of parameters. The log-
likelihood function l(θ) = l(θ, C) has the form (3.27), where
        f_i(θ) = (C_i √(2πσ²(t_i)))^{−1} exp(−(ln C_i − μ(t_i))²/(2σ²(t_i))).    (3.33)


The system of likelihood equations is obtained similarly as in the previous
case and has the form

        (1/2) Σ_{i=1}^m ((∂σ²(t_i)/∂θ_j)/σ²(t_i)) ((ln C_i − μ(t_i))²/σ²(t_i) − 1)
        + Σ_{i=1}^m (∂μ(t_i)/∂θ_j)(ln C_i − μ(t_i))/σ²(t_i) = 0    (3.34)

for j = 1, . . . , n, where m is the number of observations and n is the number of
parameters. The term on the left hand side of (3.34) is equal to the component
U_j of the score vector U = (U_1, . . . , U_n)^T. It can be inserted into (2.23) or (2.24)
to obtain a numeric solution of the likelihood equations. The Fisher information
matrix used in (2.24) has, due to Theorem 2.2, the form (3.30).


3.3 Examples
3.3.1 Stochastic homogenous model
As an example of a stochastic model with a single parameter θ = a we take the
stochastic homogenous model with mean specified by (1.7). The form of its
variance s²(t) substantially influences the time course of the Fisher information,
thus we introduce several examples to illustrate this fact. The analytical form of
the Fisher information (3.5) for the log-normal model is complicated in all cases
and is not given here.

Example 1
The simplest example we can take is the stochastic homogenous model with
constant variance,
        s²(t) ≡ r,  r > 0.    (3.35)
The Fisher information of the Gaussian model (1.22) takes, after substitution of (1.7)
and (3.35) into formula (3.3), the form
        J(t) = J₂(t) = t²e^{−2at}/r.    (3.36)
For the variance given by (3.35), the Fisher information (3.36) satisfies J(0) = 0 and
lim_{t→∞} J(t) = 0. The optimal time for this model is obtained from (3.4) and has the
analytic form topt = 1/a. Time courses of the Fisher information J(t) given by
(3.3) and (3.5) for the Gaussian and log-normal models are shown in Fig. 3.1.
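The analytic optimum topt = 1/a can be confirmed by a direct grid search over (3.36) (a small Python sketch; a = 2 is an arbitrary illustrative value):

```python
import math

def J(t, a=2.0, r=0.015):
    # Fisher information (3.36): J(t) = t^2 * exp(-2*a*t) / r.
    return t ** 2 * math.exp(-2.0 * a * t) / r

a = 2.0
grid = [i * 1e-4 for i in range(1, 50001)]   # t in (0, 5]
t_opt = max(grid, key=J)
print(t_opt)  # approximately 1/a = 0.5
```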

Example 2
The stochastic homogenous model with nonconstant variance
        s²(t) = pe^{−at}(1 − e^{−at}),  p > 0,    (3.37)
is another example we consider. It holds s²(0) = 0 and lim_{t→∞} s²(t) = 0.
The Fisher information of the Gaussian model (1.22) and its lower bound (2.11) have
the forms
        J(t) = (1/2) t² ((1 − 2e^{−at})/(1 − e^{−at}))² + t²e^{−at}/(p(1 − e^{−at})),    (3.38)
        J₂(t) = t²e^{−at}/(p(1 − e^{−at})).    (3.39)


These functions are plotted together with the Fisher information (3.5) of the log-
normal model in Fig. 3.1. There it can be seen that the functions J(t) tend to
infinity for increasing t in this example. This is caused by the formal continuation
of dissolution in model (1.7) for large t while the variance s²(t) tends to zero. In
this case the Fisher information J(t) gives no optimal time according to (3.4).
We have a natural requirement that no information about the dissolution
process can be obtained when it is finished. The example of the Fisher information
with variance (3.37) contradicts that assumption, hence the model cannot be
accepted.

Example 3
In the last example we take into account the measurement error. Variance
(3.37) of the stochastic homogenous model can be, for example, modified to
the form
        s²(t) = pe^{−at}(1 − e^{−at}) + q(1 − e^{−at}),  p > 0, q > 0.    (3.40)
The analytic form of the Fisher information is complicated and is not given here.
The lower bound of the Fisher information has the form
        J₂(t) = t²e^{−2at}/((1 − e^{−at})(pe^{−at} + q)).    (3.41)
An example of this function, together with the time courses of the Fisher information
(3.3), resp. (3.5), of the Gaussian, resp. log-normal model, is given in Fig. 3.1.
It holds J(0) = 0 and lim_{t→∞} J(t) = 0, thus our requirements on the Fisher
information are satisfied for the variance given by (3.40). All the optimal times
can be found with an appropriate numeric method.
    If we take into account variance (3.37) with a constant measurement error,
        s²(t) = pe^{−at}(1 − e^{−at}) + r,    (3.42)
where p > 0, r > 0 are constants, then the course of the Fisher information is
very similar to the one with variance (3.40) depicted in Fig. 3.1. We can see
that a variance which does not tend to zero for increasing t is crucial for the
behavior of the Fisher information of the stochastic homogenous model.


[Figure 3.1 here: panels (A)–(D).]

Fig. 3.1: Dependence of the Fisher information on different forms of the variance s²(t)
for the stochastic homogenous model with mean (1.7), a = 1. (A) s²(t) given by (3.35),
r = 0.015 (black), s²(t) given by (3.37), p = 0.1 (red), and s²(t) given by (3.40), p = 0.1
and q = 0.01. (B) Fisher information of the homogenous Gaussian model (3.36) (black) and
of the log-normal model (3.5) (red) corresponding to variance (3.35). The optimal times are
topt = 1 for the Gaussian model and topt = 0.935 for the log-normal model. (C) Fisher
information of the Gaussian model (3.38) (black), of the log-normal model (3.5) (red) and
the lower bound (3.39) (blue) corresponding to variance (3.37). The lower bound J₂(t) has
its maximum at t = 1.593. (D) Fisher information of the Gaussian model (3.3) (black), of
the log-normal model (red) and the lower bound (3.41) (blue) corresponding to variance
(3.40). The optimal times are topt = 1.266 for the Gaussian model and topt = 1.163 for the
log-normal model. The lower bound has its maximum at t = 1.186.


3.3.2 Stochastic Weibull model


As an example of a model with two parameters we take the stochastic Weibull
model with mean described by (1.8). In the following examples we see how its
variance s²(t) influences the form of the Rao-Cramér bounds for both parame-
ters in the case of a single measurement. We have seen that for variance (3.15)
the Fisher information matrix is singular, and thus we have to find a dif-
ferent function that satisfies our requirements. Examples of such functions,
presented in the following, are shown in Fig. 3.2. The analytical form of the inverse
Fisher information matrices described in Section 3.1.2 is complicated in all
examples and is not given here.


Fig. 3.2: Different functions of variance s^2(t). (A) s^2(t) corresponding to the Weibull
model (1.8) with parameters a = 1, b = 2, of the form (3.43), p = 0.1 (black), variance (3.44),
p = 0.1, r = 0.02 (blue), and variance (3.45), p = 0.1, q = 0.02 (red). (B) Variance
corresponding to the Hixson-Crowell model (1.16), a = 0.5, b = 2, of the form (3.35), r = 0.02
(black), variance (3.46), p = 0.1 (blue), and variance (3.47), p = 0.1, r = 0.015 (red).

Example 1
As the first example of the stochastic Weibull model with a variance that gives
a nonzero determinant of the Fisher information matrices described in Section
3.1.2 we take the function

    s^2(t) = p t^b (1 - F(t)) = p t^b e^{-at^b},    p > 0,    (3.43)

where F(t) is the dissolution profile of the Weibull model described by formula
(1.8) and a, b are its parameters. The factor t^b in (3.43) ensures that the
determinants of the Fisher information matrices of the Gaussian and log-normal


stochastic models are nonzero. It can be easily verified that (3.43) satisfies
our assumptions about the variance of the dissolution given in Section 1.2.1.
Inserting F(t) and s^2(t) into (3.10) and (3.18) gives the inverse Fisher
information matrices of the Gaussian and log-normal model. Time courses of the
diagonal functions J^{-1}_{11}(t) and J^{-1}_{22}(t) are shown in Fig. 3.3.
There the discontinuity of the function J^{-1}_{22}(t) at the time instant t = 1
can be seen, which is due to the presence of a logarithm in the denominator of
the function. While the one-parameter model had increasing Fisher information,
i.e. a decreasing Rao-Cramer bound, if its variance tended to zero for increasing t,
this does not hold for the two-parameter model.
For the model with Gaussian distribution, the optimal time to measure for
parameter a can be evaluated from equation (3.11) and has the analytic form
t_opt.a = a^{-1/b}. The parameter b has the optimal time to measure t_opt.b = 0,
which implies that for the best estimate of parameter b we have to measure when
nothing, or only a very small amount of the solute, is dissolved. This is
analogous to the previous examples of the single-parameter model, where the
models without measurement errors had the optimal time at infinity, i.e. when
everything is dissolved.
For the model with log-normal distribution, the optimal time for the parameter a
is close to the optimal time of the Gaussian model, as one can see in
Fig. 3.3. In contrast to the Gaussian model, the optimal time for estimation of


Fig. 3.3: Rao-Cramer bounds for the stochastic Weibull model with mean (1.8) and variance
(3.43), a = 1, b = 2, p = 0.1. Rao-Cramer bound for (A) parameter a and (B) parameter
b of the Gaussian (black) and log-normal (red) stochastic models. The optimal times are
t_opt.a = 1, t_opt.b = 0 (Gaussian model) and t_opt.a = 1.042, t_opt.b = 0.259 (log-normal
model).


the parameter b is positive. In the example depicted in Fig. 3.3 (B) the optimal
time is t_opt.b = 0.259. For the given parameters it holds that F(t_opt.b) = 0.065,
which can be interpreted as follows: the best time to measure is when only 6.5%
of the solute is dissolved.

Example 2
As the second example we take the stochastic Weibull model with variance (3.43)
with an added constant measurement error,

    s^2(t) = p t^b e^{-at^b} + r,    p > 0, r > 0,    (3.44)

where a, b are the parameters of the Weibull model (1.8). Inserting F(t) and
s^2(t) into (3.10) and (3.18) gives the inverse Fisher information matrices of
the Gaussian and log-normal model. Time courses of their diagonal functions
J^{-1}_{11}(t) and J^{-1}_{22}(t) are shown in Fig. 3.4. A discontinuity of the
Rao-Cramer bound of the parameter b at the time instant t = 1 can be seen there,
due to the presence of a logarithm in the denominator of the function
J^{-1}_{22}(t). The optimal times to measure can be obtained from (3.11). For
the model with Gaussian distribution, adding a constant measurement error to
variance (3.43) has no effect on the value of the optimal time t_opt.a, but the
optimal time t_opt.b is no longer zero.


Fig. 3.4: Rao-Cramer bounds for the stochastic Weibull model with mean (1.8) and variance
(3.44), a = 1, b = 2, p = 0.1, r = 0.02. Rao-Cramer bound for (A) parameter a and (B)
parameter b of the Gaussian (black) and log-normal (red) stochastic models. The optimal
times are t_opt.a = 1, t_opt.b = 0.382 (Gaussian model) and t_opt.a = 1.077, t_opt.b = 0.533
(log-normal model).


Example 3
As the last example we take the stochastic Weibull model with variance

    s^2(t) = p t^b e^{-at^b} + q (1 - e^{-at^b}),    p > 0, q > 0,    (3.45)

where a and b are the parameters of the Weibull model (1.8). Rao-Cramer bounds
of the Gaussian and the log-normal stochastic models are shown in Fig. 3.5. At
the time instant t = 1 the discontinuity of J^{-1}_{22}(t) appears again. The
Gaussian model has the optimal time t_opt.b = 0.056, where F(t_opt.b) = 0.003.
This can be interpreted as follows: the best time to measure is when 0.3% of the
solute is dissolved. For the log-normal model it holds that t_opt.b = 0.25 and
F(t_opt.b) = 0.06. As one can see, in this example with variance (3.45) the
optimal times for the parameter b correspond to very small values of the
dissolved fraction. As we have seen in the previous example, this can be
resolved by adding a constant to the variance s^2(t).


Fig. 3.5: Rao-Cramer bounds for the stochastic Weibull model with mean (1.8) and variance
(3.45), a = 1, b = 2, p = 0.1, q = 0.02. Rao-Cramer bound for (A) parameter a and (B)
parameter b of the Gaussian (black) and log-normal (red) stochastic models. The optimal
times are t_opt.a = 0.719, t_opt.b = 0.056 (Gaussian model) and t_opt.a = 0.862, t_opt.b = 0.25
(log-normal model).


3.3.3 Stochastic Hixson-Crowell model

In this Section we investigate properties of the Rao-Cramer bounds of the
Gaussian model (1.22) with mean described by the dissolution profile F(t) of the
Hixson-Crowell model (1.16) and a variance s^2(t). The measurements are assumed
to be taken at two different time instants, t = (t_1, t_2)^T. As we have seen in
the theoretical part of this chapter, this allows us to use those functions of
variance s^2(t) for which the Fisher information matrix J(t) is singular. The
analytical form of the inverse Fisher information matrix J^{-1}(t) is complicated
in all examples and is not given here.

Example 1
The simplest example of the stochastic Hixson-Crowell model is the one with a
constant variance (3.35). The inverse diagonal functions of the Fisher
information matrix of the Gaussian model (1.22), 1/J^{-1}_{11}(t) and
1/J^{-1}_{22}(t), are shown in Fig. 3.6. We plot the inverses for better
visualization, as the diagonal functions themselves have a large number of
discontinuities. We can see that although variance (3.35) gives no optimal time
for a single measurement, due to the singularity of the Fisher information
matrix, for measurements at two different time instants it gives optimal times
for both parameters.
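As a cross-check of this two-instant setting, the sketch below brute-forces the optimal pair (t_1, t_2). It assumes that with the parameter-free constant variance s^2 = r the Gaussian Fisher information matrix reduces to the outer-product form J = (1/r) [∇F(t_1)∇F(t_1)^T + ∇F(t_2)∇F(t_2)^T], where ∇F is the gradient of the Hixson-Crowell profile F(t) = 1 - (1 - at)^b with respect to (a, b). This is an illustrative reconstruction, not the thesis code.

```python
import numpy as np

a, b, r = 0.5, 2.0, 0.02

def grad_F(t):
    """Gradient of F(t) = 1 - (1 - a t)^b w.r.t. (a, b), valid for t in (0, 1/a)."""
    u = 1.0 - a * t
    return np.array([b * t * u**(b - 1),   # dF/da
                     -u**b * np.log(u)])   # dF/db

ts = np.linspace(0.05, 1.95, 77)
best = {0: (np.inf, None), 1: (np.inf, None)}   # key 0 -> bound for a, 1 -> for b
for i, t1 in enumerate(ts):
    for t2 in ts[i + 1:]:
        g1, g2 = grad_F(t1), grad_F(t2)
        J = (np.outer(g1, g1) + np.outer(g2, g2)) / r
        if np.linalg.det(J) < 1e-9:
            continue                         # skip near-singular pairs
        Jinv = np.linalg.inv(J)
        for k in range(2):
            if Jinv[k, k] < best[k][0]:
                best[k] = (Jinv[k, k], (t1, t2))

print("min Rao-Cramer bound for a:", best[0])
print("min Rao-Cramer bound for b:", best[1])
```

Under this assumed model the grid search lands close to the optimal pairs reported in Fig. 3.6.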


Fig. 3.6: Inverse Rao-Cramer bounds for the Gaussian model (1.22) with mean described
by (1.16) of the Hixson-Crowell model and variance given by (3.35), a = 0.5, b = 2, r = 0.02.
(A) Inverse Rao-Cramer bound of the scale parameter a and (B) inverse Rao-Cramer bound of
the shape parameter b. The optimal times are t_opt.a = (0.398, 1.6)^T, t_opt.b = (0.467, 1.647)^T.


Example 2
As the second example we take the stochastic Hixson-Crowell model with a
variance without measurement error,

    s^2(t) = p (1 - at)^b [1 - (1 - at)^b]  for t in [0, a^{-1}],
    s^2(t) = 0                              for t > a^{-1},          p > 0.    (3.46)

The inverse diagonal functions of the Fisher information matrix of the Gaussian
model (1.22), 1/J^{-1}_{11}(t) and 1/J^{-1}_{22}(t), are shown in Fig. 3.7. We
can see that the functions reach their maxima if at least one of the times to
measure is equal to 2, i.e. when 100% is dissolved. This is analogous to the
single-parameter case, where the stochastic homogenous model had the optimal
time at infinity, i.e. when everything is dissolved, for a variance tending to
zero with increasing t.


Fig. 3.7: Inverse Rao-Cramer bounds for the Gaussian model (1.22) with mean described
by (1.16) of the Hixson-Crowell model and variance given by (3.46), a = 0.5, b = 2, p = 0.1.
(A) Inverse Rao-Cramer bound of the scale parameter a and (B) inverse Rao-Cramer bound
of the shape parameter b. The optimal times are t_opt.a = (1.235, 2)^T, t_opt.b = (1.284, 2)^T.


Example 3
As the last example we take

    s^2(t) = p (1 - at)^b [1 - (1 - at)^b] + r  for t in [0, a^{-1}],
    s^2(t) = 0                                  for t > a^{-1},        p > 0, r > 0.    (3.47)

The inverse diagonal elements of the Fisher information matrix of the Gaussian
model (1.22), 1/J^{-1}_{11}(t) and 1/J^{-1}_{22}(t), are shown in Fig. 3.8. We
can see that the behavior of the Fisher information of the stochastic
Hixson-Crowell model depends crucially on a variance which does not tend to zero
for t → 1/a.


Fig. 3.8: Inverse Rao-Cramer bounds for the Gaussian model (1.22) with mean described by
(1.16) of the Hixson-Crowell model and variance given by (3.47), a = 0.5, b = 2, p = 0.1,
r = 0.015. (A) Inverse Rao-Cramer bound of the scale parameter a and (B) inverse Rao-Cramer
bound of the shape parameter b. The optimal times are t_opt.a = (0.431, 1.697)^T,
t_opt.b = (0.533, 1.744)^T.

4 Computational procedures and examples

4.1 Simulation of random processes

Sample paths of random processes were used to illustrate the theory studied
in the first chapter. In this section we describe our approach to their numeric
approximation.

Simulation of a Wiener process

Let W(t) be a standard Wiener process given by Definition 1.1. We are looking
for its approximation W_n(t) at n + 1 equidistant time instants t_i = i*Δt,
where i = 0, ..., n, and Δt is the time step of the approximation. A commonly
employed procedure for generating an approximation W_n(t_i) is via the recursive
equation

    W_n(t_i) = W_n(t_{i-1}) + N_i sqrt(Δt),    W_n(0) = 0,    (4.1)

where {N_i}_{i=1}^{n} is a sequence of (simulated) independent, identically
distributed Gaussian random variables with E N_k = 0 and var N_k = 1. For more
theoretical details see refs. [28], [31]. It can be easily verified that

    E W_n(t_i) = 0,    var W_n(t_i) = t_i.

We use the described approach for simulation of the Wiener process in the
function wiener(t), where t is a vector of time instants and the N_i are
pseudorandom variables obtained from the Matlab library program randn. The
function wiener returns a vector of functional values of the simulated Wiener
process W_n(t) at the time instants t_i. For example, a sample path of a Wiener
process on the time interval from t_0 = 0 to t_n = 5 with step Δt = 0.1 can be
plotted with the command


>> t=0:0.1:5;
>> plot(t, wiener(t));

Function wiener(t) has been used in the Matlab programs stochastic.m and
stochastic2.m, which return plots of sample paths of the random processes (1.22),
(1.34), (1.42) and of the Wiener process W(t). For more info type help stochastic,
resp. help stochastic2, into the Matlab command line.
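For readers without Matlab, a minimal Python counterpart of the recursion (4.1) could look as follows (the function name wiener and the seeding are our own choices); the final check verifies E W_n(t_n) = 0 and var W_n(t_n) = t_n empirically over many paths.

```python
import numpy as np

def wiener(t, rng):
    """Simulate W_n at the instants t via W_n(t_i) = W_n(t_{i-1}) + N_i*sqrt(dt)."""
    dt = np.diff(np.asarray(t, dtype=float))
    steps = rng.standard_normal(len(dt)) * np.sqrt(dt)
    return np.concatenate(([0.0], np.cumsum(steps)))   # W_n(0) = 0

rng = np.random.default_rng(0)
t = np.arange(0.0, 5.0 + 1e-9, 0.1)        # t_0 = 0 to t_n = 5 with step 0.1
W = wiener(t, rng)

# Empirical check of E W_n(t_n) = 0 and var W_n(t_n) = t_n = 5:
finals = np.array([wiener(t, rng)[-1] for _ in range(2000)])
print(finals.mean(), finals.var())
```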

Numerical solution of stochastic differential equations

A similar approach as in the previous case has been used for the numerical
solution of stochastic differential equations. Let ξ(t) be a random process
satisfying the stochastic differential equation

    dξ(t) = μ(t, ξ(t)) dt + σ(t, ξ(t)) dW(t),    ξ(0) = ξ_0,

where μ and σ are given functions and W(t) is a standard Wiener process given
by Definition 1.1. With similar notation as in the previous case we can obtain
a numerical approximation ξ_n(t) at n + 1 equidistant time instants t_i = i*Δt,
i = 0, ..., n, from the recursive equation (see refs. [5], [13], [30])

    ξ_n(t_i) = ξ_n(t_{i-1}) + μ(t_{i-1}, ξ_n(t_{i-1})) Δt + σ(t_{i-1}, ξ_n(t_{i-1})) N_i sqrt(Δt),    (4.2)

where {N_i}_{i=1}^{n} is a sequence of (simulated) independent, identically
distributed Gaussian random variables with E N_k = 0, var N_k = 1, and t_i = i*Δt
is the i-th time instant. This approach has been used in the Matlab program
sde.m, which plots sample paths of the random processes given by the stochastic
differential equations (1.37) and (1.38). For more info type help sde into the
Matlab command line.
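A Python sketch of the Euler scheme (4.2) follows. The drift and diffusion used here, mu(t,x) = a(1 - x) and sigma(t,x) = 0.05*sqrt(x(1 - x)), are illustrative stand-ins for a dissolution-type process staying near [0, 1]; they are not equations (1.37)-(1.38) from the thesis.

```python
import numpy as np

def euler_maruyama(mu, sigma, xi0, t, rng):
    """Approximate a solution of d xi = mu dt + sigma dW via recursion (4.2)."""
    xi = np.empty_like(t)
    xi[0] = xi0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        N = rng.standard_normal()
        xi[i] = (xi[i - 1]
                 + mu(t[i - 1], xi[i - 1]) * dt
                 + sigma(t[i - 1], xi[i - 1]) * N * np.sqrt(dt))
    return xi

a = 1.0
mu = lambda t, x: a * (1.0 - x)                            # illustrative drift
sigma = lambda t, x: 0.05 * np.sqrt(max(x * (1.0 - x), 0.0))  # illustrative diffusion

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 501)
xi = euler_maruyama(mu, sigma, 0.0, t, rng)
```

With this small diffusion the path stays close to the deterministic profile 1 - e^{-at}.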

4.2 Maximum likelihood estimation

In this Section we present some examples of maximum likelihood estimation
applied to Monte-Carlo simulated data. We show the error behavior of the
parameter estimates. The results are based on dissolved fraction data measured
at a single time instant for the single-parameter model, resp. at two time
instants for the two-parameter model.

Example 1
In the first example we investigate the error behavior of the maximum likelihood
estimate of the single parameter a of the stochastic homogenous model with mean
described by (1.7). The dissolved fraction data are Monte-Carlo simulated in
Matlab with the function generate_homogenous_data(m,t,a,type), where m is the
required number of observations at the time points given by the vector t, a is
the scale parameter of model (1.7) and type determines the probability
distribution of the simulated data. This parameter can take the value 'g' for
Gaussian distribution or 'l' for log-normal distribution. The variance s^2(t) is
given by (3.40) with parameters p = 0.02 and q = 0.0005, which were chosen so
that the standard deviation s(t) is around 5% of the instantaneous
concentration. The output of the function is a matrix [t,C], where the second
column C contains the simulated fractions of concentration data at the time
instants given by the first column t of the matrix. For example:
>> data = generate_homogenous_data(3, [1.2], 1, 'g')
data =
1.2000 0.7567
1.2000 0.7835
1.2000 0.5912

>> data = generate_homogenous_data(1, [0.8 1.2 1.6], 1, 'l')


data =
0.8000 0.4523
1.2000 0.7349
1.6000 0.7722
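A Python analogue of generate_homogenous_data is sketched below. The variance form, s^2(t) = p e^{-at}(1 - e^{-at}) + q, is our assumed reading of (3.40), and the log-normal branch uses the standard moment-matching parameterization so that both distributions share the same mean and variance.

```python
import numpy as np

def generate_homogenous_data(m, t, a, kind, rng, p=0.02, q=0.0005):
    """Simulate m dissolved-fraction observations at each time in t."""
    t = np.repeat(np.asarray(t, dtype=float), m)
    mean = 1.0 - np.exp(-a * t)                                # profile (1.7)
    var = p * np.exp(-a * t) * (1.0 - np.exp(-a * t)) + q      # assumed (3.40)
    if kind == "g":                                            # Gaussian data
        C = rng.normal(mean, np.sqrt(var))
    else:                                                      # log-normal data
        mu = np.log(mean**2 / np.sqrt(var + mean**2))          # moment matching
        s2 = np.log(1.0 + var / mean**2)
        C = rng.lognormal(mu, np.sqrt(s2))
    return np.column_stack((t, C))

rng = np.random.default_rng(2)
data = generate_homogenous_data(3, [1.2], 1.0, "g", rng)
print(data)
```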

From the simulated data we can obtain maximum likelihood estimates of the
parameter a. The Matlab function sp_gmle(data, a0) uses the approach given by
(3.29) and assumes Gaussian distribution of the data. The function
sp_lmle(data, a0) uses the approach given by (3.34) and assumes log-normal
distribution of the data. For the numeric solution of the given equations we
use the score method (2.24), because it needs no differentiation of any of the
functions. The matrix data=[t, C] is the output of the function
generate_homogenous_data and the variable a0 is an initial approximation of the
parameter a. This initial approximation can be obtained from the function
sp_fit(data), which fits the sample average C̄ = (1/m) Σ_{i=1}^{m} C_i of the
data simulated at a single time instant t with the homogenous dissolution
profile (1.7),

    1 - exp(-at) = C̄,

and returns the initial approximation â_0 obtained from the equation

    â_0 = -(1/t) ln(1 - (1/m) Σ_{i=1}^{m} C_i),


where C_i, i = 1, ..., m, are the dissolved fraction data Monte-Carlo simulated
at the time instant t.
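The initial approximation â_0 described above can be sketched in a few lines of Python; the noiseless check below confirms that data generated exactly from the profile recover a.

```python
import math

def sp_fit(t, C):
    """Initial estimate a_0 from 1 - exp(-a t) = Cbar at a single time t."""
    Cbar = sum(C) / len(C)
    return -math.log(1.0 - Cbar) / t

# Noiseless check: with C_i exactly on the profile, a is recovered.
a, t = 1.0, 0.4
C = [1.0 - math.exp(-a * t)] * 4
print(sp_fit(t, C))   # approximately 1.0
```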
Now let us select a = 1 in arbitrary time units. We investigate the sample
variance of an estimate â based on m = 4 dissolved fraction data simulated at a
single time instant t_i, for i = 1, ..., 11, given by the vector

    t = (0.4, 0.6, 0.8, 1, 1.2, 1.4, 1.6, 1.8, 2, 2.2, 2.4)^T.    (4.3)
For example, at the time instant t_1 = 0.4 we obtain a random sample from the
Gaussian distribution:
>> data = generate_homogenous_data(4, 0.4, 1, 'g')
data =
0.4000 0.3900
0.4000 0.3397
0.4000 0.2875
0.4000 0.2116
First we need an initial approximation of the parameter. We use the approach
described above:
>> a_0 = sp_fit(data)
a_0 =
    0.9175
At each of the time instants given by (4.3) we can now calculate the estimate â
with the functions sp_gmle and sp_lmle. For example, for the data given above we
obtain
>> a_hat = [sp_gmle(data, a_0), sp_lmle(data, a_0)]
a_hat =
    0.9174    0.9110
In the next step we calculate the sample variance var(â) = (â - a)^2. For
example, from the data given above we obtain
>> var_a_hat = ([1, 1] - a_hat).^2
var_a_hat =
    0.0068    0.0079
This estimation procedure is done for every t_i given by the vector t, and for
each time instant we obtain a vector of two sample variances, whose components
correspond to the two estimation methods. These vectors vary in dependence on
the simulated data. To gain a more realistic picture, we take the average of
these sample variances. The Matlab function sp_point_estimate(r,m,t_i,a,type)
simulates r times m dissolved fraction data at the time instant t_i and returns
the average vector of sample variances of the estimate â. They are plotted in Fig. 4.1.
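The averaging scheme can be sketched as follows. For brevity the closed-form fit â_0 stands in for the maximum likelihood estimates that sp_gmle and sp_lmle would return, and the Gaussian variance uses the same assumed form of (3.40) as before.

```python
import numpy as np

def avg_sq_error(r, m, t_i, a, rng, p=0.02, q=0.0005):
    """Average squared error of r repeated estimates, each from m data points."""
    mean = 1.0 - np.exp(-a * t_i)
    var = p * np.exp(-a * t_i) * (1.0 - np.exp(-a * t_i)) + q   # assumed (3.40)
    errs = []
    for _ in range(r):
        C = rng.normal(mean, np.sqrt(var), size=m)
        # closed-form fit as a stand-in for the MLE:
        a_hat = -np.log(np.clip(1.0 - C.mean(), 1e-12, None)) / t_i
        errs.append((a_hat - a) ** 2)
    return float(np.mean(errs))

rng = np.random.default_rng(3)
print(avg_sq_error(2000, 4, 1.2, 1.0, rng))
```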


t t

Fig. 4.1: Average sample variances of r = 2000 estimates of the single parameter a of the
homogenous model (1.7), based on m = 4 data generated at the single time instants t_i given
by the vector t in (4.3). The average sample variances are marked with the symbol o and
connected with lines. The dissolved fraction data have mean (1.7), a = 1, and variance (3.40),
p = 0.02, q = 0.0005. In panel (A) the dissolved fraction data were simulated from the
Gaussian distribution, in panel (B) from the log-normal distribution. Gaussian (black) and
log-normal (red) maximum likelihood estimation has been used.

Example 2
In the second example we investigate the error behavior of the maximum
likelihood estimate of the parameter vector (a, b)^T of the stochastic Weibull
model with mean described by (1.8). The dissolved fraction data are Monte-Carlo
simulated in Matlab with the function generate_weibull_data(m,t,a,b,type),
where m is the required number of observations at the time instants given by the
vector t, a and b are the parameters of model (1.8) and type determines the
probability distribution of the simulated data. This parameter can take the
value 'g' for Gaussian distribution or 'l' for log-normal distribution. The
variance s^2(t) is given by the function

    s^2(t) = 0.02 e^{-at^b} (1 - e^{-at^b}) + 0.0005,    (4.4)

where its parameters were chosen so that the standard deviation s(t) is around
3% of the instantaneous concentration. The output of the function is a matrix
[t,C], where the second column C contains the simulated fractions of
concentration data at the time instants given by the first column t of the
matrix.
From the data we can obtain maximum likelihood estimates of the parameters
a and b. The Matlab function mp_gmle(data,a0,b0) uses the approach


given by (3.29) and assumes Gaussian distribution of the data. The function
mp_lmle(data,a0,b0) uses the approach given by (3.34) and assumes log-normal
distribution of the data. For the numeric solution of the given equations we
again use the score method (2.24). The matrix data=[t C] is the output of the
function generate_weibull_data and the variables a0 and b0 are initial
approximations of the parameters a and b. These initial approximations can be
obtained from the function mp_fit(data), which fits the sample averages
C̄_j = (1/m) Σ_{i=1}^{m} C_ij of the data simulated at the time instants t_j,
j = 1, 2, with the Weibull dissolution profile (1.8), i.e.

    1 - exp(-a t_1^b) = C̄_1,    1 - exp(-a t_2^b) = C̄_2,

and returns the initial approximations â_0 and b̂_0 obtained from the equations

    â_0 = -ln(1 - C̄_2) [ ln(1 - C̄_2) / ln(1 - C̄_1) ]^{ln t_2 / (ln t_1 - ln t_2)},

    b̂_0 = ln[ ln(1 - C̄_2) / ln(1 - C̄_1) ] / (ln t_2 - ln t_1).
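These closed forms can be sketched in Python; for â_0 we use the algebraically equivalent form â_0 = -ln(1 - C̄_2) / t_2^{b̂_0}. The noiseless check below confirms that averages generated exactly from the Weibull profile recover the parameters.

```python
import math

def mp_fit(t1, C1, t2, C2):
    """Initial estimates (a_0, b_0) from sample averages at two times t1 < t2."""
    L1, L2 = -math.log(1.0 - C1), -math.log(1.0 - C2)   # L_j = a * t_j^b
    b0 = math.log(L2 / L1) / (math.log(t2) - math.log(t1))
    a0 = L2 / t2**b0                                    # equivalent closed form
    return a0, b0

# Noiseless check with a = 1, b = 2:
a, b, t1, t2 = 1.0, 2.0, 0.3, 0.8
C1 = 1.0 - math.exp(-a * t1**b)
C2 = 1.0 - math.exp(-a * t2**b)
print(mp_fit(t1, C1, t2, C2))   # recovers approximately (1.0, 2.0)
```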
Let us select a = 1, b = 2 in arbitrary time units. We investigate the sample
variances of the estimates â, b̂ based on m = 4 dissolved fraction data simulated
at two time instants t_i, t_i + 0.5, for i = 1, ..., 9, given by the vector

    t = (0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 1.1)^T.    (4.5)

For example, at the time instants t_1 = 0.3 and t_1 + 0.5 = 0.8 we obtain a
random sample from the Gaussian distribution:
>> data = generate_weibull_data(4, [0.3 0.8], 1, 2, 'g')
data =
0.3000 0.0924
0.3000 0.1178
0.3000 0.0970
0.3000 0.0670
0.8000 0.3936
0.8000 0.5604
0.8000 0.4801
0.8000 0.4704
First we need initial approximations of the parameters. We use the approach
described above:


>>theta_0=mp_fit(data)
theta_0 =
0.9926
1.9212

At each of the time instants we can now obtain estimates â, b̂ of the parameters
a and b from the functions mp_gmle and mp_lmle. For example, for the data given
above we obtain

>>[a_g_hat b_g_hat]=mp_gmle(data, theta_0(1), theta_0(2));


>>[a_l_hat b_l_hat]=mp_lmle(data, theta_0(1), theta_0(2));
>>theta_hat=[a_g_hat, b_g_hat; a_l_hat, b_l_hat]
theta_hat =
1.0091 1.9986
0.9813 1.8099
 
In the next step we calculate the sample variances var(â) = (â - a)^2 and
var(b̂) = (b̂ - b)^2. For example, from the data given above we obtain

>>var_theta_hat= ( [1, 2; 1, 2 ] - theta_hat ).^2


var_theta_hat =
0.0001 0.0000
0.0004 0.0361

This estimation procedure is done for every t_i given by the vector t, and for
each time instant we obtain a matrix of sample variances with two columns, the
first for parameter a and the second for b, where the rows of the matrix
correspond to the two estimation methods. These values vary in dependence on the
simulated data. To gain a more realistic picture, we take the average of these
sample variances. The Matlab function mp_point_estimate(r,m,t_1,t_2,a,b,type)
simulates r times m dissolved fraction data at the time instants t_1 and t_2 and
returns the average matrix of sample variances of the estimates â and b̂. They
are plotted in Fig. 4.2 and Fig. 4.3.



Fig. 4.2: Average sample variances of r = 2000 estimates of the parameters a and b of the
Weibull model (1.8), based on m = 4 data generated at the time instants t_i and t_i + 0.5,
where t_i is given by the vector t in (4.5). The average sample variances are marked with
the symbol o and connected with lines. The dissolved fraction data are simulated from the
Gaussian distribution with mean given by (1.8), a = 1, b = 2, and variance (4.4). Gaussian
(black) and log-normal (red) maximum likelihood estimation has been used. (A) Average
sample variances of the estimate â and (B) average sample variances of the estimate b̂.


Fig. 4.3: Average sample variances of r = 2000 estimates of the parameters a and b of the
Weibull model (1.8), based on m = 4 data generated at the time instants t_i and t_i + 0.5,
where t_i is given by the vector t in (4.5). The average sample variances are marked with
the symbol o and connected with lines. The dissolved fraction data are simulated from the
log-normal distribution with mean given by (1.8), a = 1, b = 2, and variance (4.4). Gaussian
(black) and log-normal (red) maximum likelihood estimation has been used. (A) Average
sample variances of the estimate â and (B) average sample variances of the estimate b̂.

Summary

We have presented basic models of dissolution and their stochastic
modifications, including a new model based on the theory of stochastic
differential equations. The theory of the Fisher information was applied to the
stochastic models in order to obtain the optimal times at which to measure the
experimental dissolution data. The parameters of the studied dissolution models
were estimated by the maximum likelihood method, and Matlab procedures were
presented in the last chapter.

Bibliography

[1] Anděl J. (1978) Matematická statistika, SNTL, Praha.

[2] Anděl J. (2005) Základy matematické statistiky, MATFYZPRESS, Praha.

[3] Costa P., Lobo J. M. S. (2001) Modeling and comparison of dissolution


profiles, Eur. J. Pharm. Sci. 13: 123-133.

[4] Cramer, H. (1946) Mathematical Methods of Statistics, Princeton Uni-


versity Press, Princeton.

[5] Higham D.J. (2001) An algorithmic introduction to numerical simulation of
stochastic differential equations, SIAM Rev. 43(3): 525-546.

[6] Dokoumetzidis A., Macheras P. (2006) A century of dissolution research:
From Noyes and Whitney to the Biopharmaceutics Classification System,
Int. J. Pharm. 321: 1-11.

[7] Dokoumetzidis A., Macheras P., Papadopoulou V. (2006) Analysis of


dissolution data using modified version of Noyes-Whitney equation and
the Weibull function, Int. J. Pharm. 321: 1-11.

[8] Došlý O. (2005) Základy konvexní analýzy a optimalizace v R^n,
Masarykova univerzita, Brno.

[9] Evans L.C.: An Introduction to Stochastic Differential Equations, version 1.2.
URL: <math.berkeley.edu/~evans/SDE.course.pdf>

[10] Fenton L.F. (1960) The sum of lognormal probability distributions in


scatter transmission systems, IRE Trans. Commun. Syst. CS-8:57-67.

[11] Goldsmith J. A., Randall N., Ross S. D. (1978) On methods of expressing


dissolution rate data, J. Pharm. Sci. 30: 347-349.

[12] Hixson A. W., Crowell J.H. (1931) Dependence of reaction velocity upon
surface and agitation, Ind. Eng. Chem. 23: 923-931.

[13] Kloeden P.E., Platen E. (1999) Numerical solution of Stochastic Differ-


ential Equations, Springer, Berlin.

[14] Langenbucher F. (1972) Linearization of dissolution rate curves by the


Weibull distribution, J. Pharm. Pharmacol. 24: 979-981.


[15] Lansky P., Lanska V., Weiss M. (2004) A stochastic differential equation
model for drug dissolution and its parameters, J. Control. Release 100:
267-274.

[16] Lansky P., Pokora O., Rospars J. P. (2007) Stimulus-Response Curves in


Sensory Neurons: How to Find the Stimulus Measurable with the High-
est Precision, In Advances in Brain, Vision, and Artificial Intelligence,
Lecture Notes in Computer Science, Berlin / Heidelberg : Springer
(2007):338-349.

[17] Lansky P., Weiss M. (2003) Classification of dissolution profiles in terms


of fractional dissolution rate and a novel measure of heterogeneity,
J. Pharm. Sci. 8: 1632-1647.

[18] Lansky P., Weiss, M. (2003) Does the dose solubility ratio affect the
mean dissolution time of drugs?, Pharm. Res. 8: 1632-1647.

[19] Lee C.J. et al. (1999) Analysis of drug dissolution data, Statistics in
Medicine 18: 799-814.

[20] Lehmann E.L., Casella G. (1998) Theory of Point Estimation, Springer,
New York.

[21] Log-normal distribution.
URL: <http://en.wikipedia.org/wiki/Log-normal_distribution>

[22] Normal distribution.
URL: <http://en.wikipedia.org/wiki/Normal_distribution>

[23] Noyes A.A., Whitney W.R. (1897) Über die Auflösungsgeschwindigkeit von
festen Stoffen in ihren eigenen Lösungen, Z. Phys. Chem. 23: 689-692.

[24] O'Hara T., Dunne A., Butler J., Devane J. (1998) A review of methods used
to compare dissolution profile data, Pharm. Sci. & Tech. Today 5: 214-223.

[25] Øksendal B. (1998) Stochastic Differential Equations: An Introduction
with Applications, Springer, Berlin.

[26] Polli J. E., Rekhi G. S., Augsburger L. L., Shah V. P. (1997) Meth-
ods to compare dissolution profiles and a rationale for wide dissolution
specification for metoprolol tablets, J. Pharm. Sci. 6: 690-700.


[27] Rao C.R. (1962) Linear Statistical Inference and its Applications, Wiley,
New York.

[28] Ross S.M. (1992) Stochastic processes, Wiley, New York.

[29] Stoica P. (2001) Parameter estimation problems with singular information
matrices, Signal Processing 49: 87-90.

[30] Tuckwell H.C., Lansky P. (1997) On the simulation of biological diffusion
processes, Comput. Biol. Med. 27(1): 1-7.

[31] Vanden-Eijnden E.: Wiener Process.
URL: <http://www.cims.nyu.edu/~eve2/chap4.pdf>

[32] Vudathala G. K., Rogers J.A. (1992) Dissolution of fludrocortisone from


phospholipid coprecipitates, J. Pharm. Sci. 82: 282-286.

[33] Weber-Fechner law.
URL: <http://en.wikipedia.org/wiki/Weber-Fechner_law>

[34] Zeev E. (1997) On the variability of dissolution data, Pharm. Res. 10:
1355-1362.
