
Jacob Peterson

Ms. Anne Torgersen

English 12 - Period 8

February 12th, 2018

AACC #3

Annotations

● The field of artificial intelligence (AI) seeks to infer general principles of intelligence by analyzing human processes, and then to encode them in software.

● The study of neural networks is motivated by the observation that the biological brain is the only device capable of intelligent behavior.

● A neural network can be designed to approach such a problem by defining each unit to represent the truth value of a specific hypothesis.

● A network can represent hypotheses as nodes and constraints among hypotheses as weight values between pairs of nodes.

● Neural networks have been applied to many problems of this kind, often providing a better solution than other heuristic techniques.

● One of the most attractive features of neural network models is their capacity to learn.

● Since the response properties of the network are determined by connectivity and weight values, a learning procedure can be expressed mathematically as a function that determines the amount by which each weight changes in terms of the activities of the neurons.

● Typically, a network that acquires information (learns) is trained to learn the relationship between a set of input and output variables that have been independently measured across many examples (see the sketch after this list).
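To make the last two annotations more concrete, here is a minimal sketch (my own illustration in Python, not code from the article) of a learning procedure in which each weight change is written as a function of the neurons' activities, trained on independently measured input/output examples. The delta-rule update, the variable names, and the toy data are all assumptions on my part; the article does not name a specific learning rule.

# A sketch of a weight-update ("delta") rule: each weight change is a
# function of the unit's input activities and the error on that example.
# The rule, names, and data below are illustrative assumptions, not the
# article's own method.
import random

def train(examples, n_inputs, rate=0.1, epochs=200):
    """examples: list of (inputs, target) pairs; returns learned weights."""
    weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    for _ in range(epochs):
        for inputs, target in examples:
            # Unit activity: weighted sum of the input activities.
            activity = sum(w * x for w, x in zip(weights, inputs))
            error = target - activity
            # Weight change expressed in terms of the activities:
            # delta_w_i = rate * error * x_i
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights

# Hypothetical usage: learn the relationship y = x1 + 2*x2 from examples.
data = [((1.0, 0.0), 1.0), ((0.0, 1.0), 2.0), ((1.0, 1.0), 3.0)]
print(train(data, n_inputs=2))  # approaches [1.0, 2.0]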

Abstract

The article contains a great deal of technical information on how and why certain neural networks in artificial intelligence work. It shows a diagram with numbered dots (known as 'nodes') and arrows pointing throughout the diagram (known as 'links' or 'connections'). It goes into great detail explaining the diagram and its parts and the correspondence between this diagram and the fundamentals of a human brain. Hard-wired neural networks are a topic of interest in this article as well. The article explains that hard-wired neural networks are important for statically executed processes, such as getting from point 'A' to point 'B' using dynamic inputs. In contrast, learning neural networks work quite differently: they take in a dynamic set of inputs and consistently, progressively learn, developing new neural paths within their node networks. The article then addresses the possible use of neural network analyzers to gain insight into the progression and stage of a neural network.
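As a rough illustration of the 'hard-wired' idea described above, the following sketch (my own, in Python; not from the article) fixes a tiny network of nodes and weighted links in advance and simply propagates whatever inputs it receives from point 'A' (the input nodes) to point 'B' (the output node). The node names, weights, and logistic activation are assumed for the example only.

# A hard-wired network: nodes and weighted links are fixed ahead of time,
# so the same wiring maps changing inputs to outputs without any learning.
# Node names, weights, and the logistic activation are made up for the demo.
import math

WEIGHTS = {("x1", "h"): 0.8, ("x2", "h"): -0.4, ("h", "y"): 1.2}

def activation(z):
    """Squash a node's summed input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x1, x2):
    """Propagate the inputs through the fixed links from 'A' to 'B'."""
    h = activation(WEIGHTS[("x1", "h")] * x1 + WEIGHTS[("x2", "h")] * x2)
    return activation(WEIGHTS[("h", "y")] * h)

print(forward(1.0, 0.0))  # same wiring, different inputs give different outputs
print(forward(0.0, 1.0))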

Contextual Connection

I never knew just how vastly complex a neural network can be. I learned technical details about different types of neural networks, which I found very interesting. I wish the author had included more information about the potential dangers of learning neural networks getting out of hand. The author did include information about software that analyzes a neural network for any data that engineers may need, but I wish the author had expanded on the output and use of that information to control the network if necessary.

Source

Munro, Paul. "Neural Networks." Computer Sciences, edited by K. Lee Lerner and Brenda Wilmoth Lerner, 2nd ed., Macmillan Reference USA, 2013. Science in Context, http://link.galegroup.com/apps/doc/CV2642250174/SCIC?u=pioneer&xid=b278274b. Accessed 20 Feb. 2018.
