
 A machine is thought intelligent if it can achieve human-level performance in some cognitive task.
 To build an intelligent machine, we have to capture, organize and use human expert knowledge in some problem area.
 Artificial intelligence is a science that has defined its goal as making machines do things that would require intelligence if done by humans.
 The error on the validation set is monitored during the training process; the error normally decreases until the network begins to overfit the data, as sketched below.
 The error on the test set is used to compare different models.
 Monotonic selection means the value of the output can be estimated directly from a corresponding membership grade.
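The validation-monitoring note above is easiest to see as a training loop that stops once the validation error stops improving. This is only a minimal sketch: train_one_epoch and validation_error are caller-supplied callables assumed here for illustration, not functions from the notes.

```python
def train_with_early_stopping(train_one_epoch, validation_error,
                              max_epochs=1000, patience=5):
    # train_one_epoch() and validation_error() are assumed callables supplied
    # by the caller; only the monitoring logic from the note is illustrated.
    best_error = float("inf")
    epochs_since_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch()                      # update weights on the training set
        error = validation_error()             # error monitored on the validation set
        if error < best_error:                 # error normally keeps decreasing ...
            best_error = error
            epochs_since_improvement = 0
        else:                                  # ... until the network begins to overfit
            epochs_since_improvement += 1
            if epochs_since_improvement >= patience:
                break                          # stop training here
    return best_error
```

The test-set error stays out of this loop entirely and is only used afterwards to compare different trained models.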
 Intelligence is the ability to learn and understand,
to solve problems and to make decisions.
 Machine learning involves adaptive mechanisms that enable computers to learn from experience.
 A neural network can be defined as a model of
reasoning based on the human brain.
 Step and sign functions are hard limiters and are used in decision-making for classification and pattern recognition.
 Sigmoid function is used in back-propagation
networks.
 Linear function is used for linear approximation.
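As a quick reference, the four activation functions above can be written out directly. This is a small sketch; the function names are generic, not taken from any particular library.

```python
import numpy as np

def step(x):        # hard limiter: 1 if x >= 0, else 0
    return np.where(x >= 0, 1, 0)

def sign(x):        # hard limiter: +1 if x >= 0, else -1
    return np.where(x >= 0, 1, -1)

def sigmoid(x):     # smooth and differentiable; used in back-propagation networks
    return 1.0 / (1.0 + np.exp(-x))

def linear(x):      # identity; used for linear approximation
    return x
```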
 Linearly separable functions are functions that can be separated by a single hyperplane (see the perceptron sketch below).
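To make the separability point concrete, here is a toy sketch (not from the notes) of a single perceptron with a step activation: it converges on AND, whose classes one hyperplane can separate, while no choice of weights and threshold can ever reproduce XOR.

```python
import numpy as np

def train_perceptron(inputs, targets, lr=0.1, epochs=100):
    # Single neuron with a step (hard limiter) activation and threshold theta.
    w = np.zeros(inputs.shape[1])
    theta = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = 1 if np.dot(w, x) - theta >= 0 else 0
            w = w + lr * (t - y) * x          # perceptron learning rule
            theta = theta - lr * (t - y)
    return w, theta

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
w, theta = train_perceptron(X, np.array([0, 0, 0, 1]))   # AND: linearly separable
print([int(np.dot(w, x) - theta >= 0) for x in X])       # -> [0, 0, 0, 1]
# XOR targets [0, 1, 1, 0] are not linearly separable, so the same
# procedure never settles on weights that reproduce them.
```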
 A multilayer perceptron is a feedforward neural network. The input signals are propagated in a forward direction, layer by layer.
 Neurons in the hidden layer detect the features.
 With one hidden layer we can represent any continuous function of the input signals, and with two hidden layers even discontinuous functions can be represented.
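A minimal sketch of the forward pass through such a network, with one hidden layer of sigmoid neurons; the layer sizes and random weights are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    # Signals are propagated forward, layer by layer.
    hidden = sigmoid(x @ w_hidden + b_hidden)   # hidden neurons extract features
    return sigmoid(hidden @ w_out + b_out)      # output layer combines the features

# Illustrative sizes: 3 inputs, 4 hidden neurons, 1 output.
rng = np.random.default_rng(0)
x = rng.random(3)
y = mlp_forward(x,
                rng.normal(size=(3, 4)), rng.normal(size=4),
                rng.normal(size=(4, 1)), rng.normal(size=1))
```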
 A hidden layer 'hides' its desired output, so it cannot be observed directly.
 The desired output of the hidden layer is
determined by the layer itself.
 Practical applications use only three layers because each additional layer increases the computational burden exponentially.
 Initial weights and thresholds in back-propagation are set to random numbers uniformly distributed in the range [-2.4/Fi, +2.4/Fi], where Fi is the total number of inputs of neuron i (see the initialization sketch below).
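A short sketch of that initialization rule for one layer; the function name and signature are illustrative, not from the notes.

```python
import numpy as np

def init_layer(n_inputs, n_neurons, rng=None):
    # Each neuron in the layer receives n_inputs connections (Fi = n_inputs),
    # so weights and thresholds are drawn uniformly from [-2.4/Fi, +2.4/Fi].
    rng = rng or np.random.default_rng()
    limit = 2.4 / n_inputs
    weights = rng.uniform(-limit, limit, size=(n_inputs, n_neurons))
    thresholds = rng.uniform(-limit, limit, size=n_neurons)
    return weights, thresholds
```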
 Biological neurons do not work backward to adjust the strength of their interconnections, and thus back-propagation cannot be viewed as a process that emulates brain-like learning.
 A multilayer network, in general, learns much faster when the hyperbolic tangent (tanh) activation function is used, when a momentum term is included, and when an adjustable learning rate is used, as sketched below.
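A hedged sketch of those two speed-ups. The scaled tanh uses a = 1.716 and b = 0.667, which are commonly quoted constants; the momentum update is the generalized delta rule, where a fraction beta of the previous weight change is added to the current gradient step. Names and default values are illustrative.

```python
import numpy as np

def tanh_activation(x, a=1.716, b=0.667):
    # Hyperbolic tangent activation; a and b are commonly used scaling constants.
    return a * np.tanh(b * x)

def update_with_momentum(weights, gradient, prev_delta, lr=0.1, beta=0.95):
    # Generalized delta rule: add a fraction beta of the previous weight change
    # to the current gradient step, which typically speeds up convergence.
    delta = beta * prev_delta - lr * gradient
    return weights + delta, delta
```

One common way to make the learning rate adjustable is to increase it while the error keeps decreasing from epoch to epoch and to reduce it when the error starts to grow.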
