
1.
How many bits of information can be modeled by the vector of hidden activities (at a specific time) of a Recurrent Neural Network (RNN) with 16 logistic hidden units?

16

2

4
This should not be selected

>16

1/1 points

2.
Thisquestionisaboutspeechrecognition.To
accuratelyrecognizewhatphonemeisbeingspoken
ataparticulartime,oneneedstoknowthesound
datafrom100msbeforethattimeto100msafterthat
time,i.e.atotalof200msofsounddata.Whichof
thefollowingsetupshaveaccesstoenoughsound
datatorecognizewhatphonemewasbeingspoken
100msintothepast?
ARecurrentNeuralNetwork(RNN)with200msofinput
Correto

AfeedforwardNeuralNetworkwith30msofinput
Noselecionadoestcorreto

ARecurrentNeuralNetwork(RNN)with30msofinput
Correto

RNNcanreliablymemorizelongtermhistoryduetoitstemporalconnectionsbetween
hiddenstates.

AfeedforwardNeuralNetworkwith200msofinput

Correto

AfeedforwardNeuralNetworkcanbeusedonlyincaseitisgivenalltheinputitneeds
fortherecognitiontaskbecauseitdoesn'tstoretheinputhistory.

0/1
pontos

3.
The figure below shows a Recurrent Neural Network (RNN) with one input unit x, one logistic hidden unit h, and one linear output unit y. The RNN is unrolled in time for T = 0, 1, and 2.

The network parameters are: Wxh = 0.5, Whh = 1.0, Why = 0.7, hbias = 1.0, and ybias = 0.0. Remember, σ(k) = 1 / (1 + exp(-k)).

If the input x takes the values 9, 4, 2 at time steps 0, 1, 2 respectively, what is the value of the hidden state h at T = 2? Give your answer with at least two digits after the decimal point.

Enter your answer here
Incorrect answer
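Since the answer was marked incorrect, it may help to trace the forward pass explicitly. This is a minimal sketch in Python, assuming the parameters read as Wxh = 0.5, Whh = 1.0, Why = 0.7, hbias = 1.0, ybias = 0.0:

```python
import math

def sigmoid(k):
    return 1.0 / (1.0 + math.exp(-k))

# Parameters as stated in the question
W_xh, W_hh, W_hy = 0.5, 1.0, 0.7
h_bias, y_bias = 1.0, 0.0

h = 0.0                                 # no hidden state before T = 0
for x in [9, 4, 2]:                     # inputs at T = 0, 1, 2
    z = W_xh * x + W_hh * h + h_bias    # total input to the hidden unit
    h = sigmoid(z)                      # logistic hidden unit
    y = W_hy * h + y_bias               # linear output unit

print(round(h, 2))                      # hidden state at T = 2
```

With these parameter values the hidden state saturates near 1, giving h at T = 2 of about 0.95.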

0/1 points

4.
The figure below shows a Recurrent Neural Network (RNN) with one input unit x, one logistic hidden unit h, and one linear output unit y. The RNN is unrolled in time for T = 0, 1, and 2.

The network parameters are: Wxh = -0.1, Whh = 0.5, Why = 0.25, hbias = 0.4, and ybias = 0.0.

If the input x takes the values 18, 9, -8 at time steps 0, 1, 2 respectively, the hidden unit values will be 0.2, 0.4, 0.8 and the output unit values will be 0.05, 0.1, 0.2 (you can check these values as an exercise). A variable z is defined as the total input to the hidden unit before the logistic nonlinearity.
nonlinearity.
If we are using the squared loss, with targets t0, t1, t2, then the sequence of calculations required to compute the total error E is as follows:

z0 = Wxh x0 + hbias
z1 = Wxh x1 + Whh h0 + hbias
z2 = Wxh x2 + Whh h1 + hbias

h0 = σ(z0)
h1 = σ(z1)
h2 = σ(z2)

y0 = Why h0 + ybias
y1 = Why h1 + ybias
y2 = Why h2 + ybias

E0 = (1/2)(t0 - y0)^2
E1 = (1/2)(t1 - y1)^2
E2 = (1/2)(t2 - y2)^2

E = E0 + E1 + E2

If the target output values are t0 = 0.1, t1 = -0.1, t2 = -0.2 and the squared error loss is used, what is the value of the error derivative just before the hidden unit nonlinearity at T = 1 (i.e. ∂E/∂z1)? Write your answer up to at least the fourth decimal place.

Enter your answer here
Incorrect answer
We need to calculate ∂E/∂z1. We can do this by backpropagation:

∂E/∂z1 = ∂E1/∂z1 + ∂E2/∂z1
       = (∂E1/∂y1)(∂y1/∂h1)(∂h1/∂z1) + (∂E2/∂y2)(∂y2/∂h2)(∂h2/∂z2)(∂z2/∂h1)(∂h1/∂z1)

The first equality holds because only the errors at time steps 1 and 2 depend on z1.
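As a check, the chain rule above can be evaluated numerically. The sketch below assumes the target values are t0 = 0.1, t1 = -0.1, t2 = -0.2; if instead all targets are positive (t1 = 0.1, t2 = 0.2), then y1 = t1 and y2 = t2 and the derivative is exactly zero.

```python
# Values as given in the question
W_hy, W_hh = 0.25, 0.5
h = [0.2, 0.4, 0.8]      # hidden unit values at T = 0, 1, 2
y = [0.05, 0.1, 0.2]     # output unit values
t = [0.1, -0.1, -0.2]    # target outputs (assumed signs)

# dE_k/dy_k for squared loss E_k = (1/2)(t_k - y_k)^2
dE1_dy1 = y[1] - t[1]
dE2_dy2 = y[2] - t[2]

# Logistic derivative: dh/dz = h(1 - h)
dh1_dz1 = h[1] * (1 - h[1])
dh2_dz2 = h[2] * (1 - h[2])

# Direct path through E1, plus the path through h2 into E2
dE_dz1 = (dE1_dy1 * W_hy * dh1_dz1
          + dE2_dy2 * W_hy * dh2_dz2 * W_hh * dh1_dz1)

print(round(dE_dz1, 5))
```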

1/1 points

5.
Consider a Recurrent Neural Network with one input unit, one logistic hidden unit, and one linear output unit. This RNN is for modeling sequences of length 4 only, and the output unit exists only at the last time step, i.e. T = 3. This diagram shows the RNN unrolled in time:

Suppose that the model has learned the following parameter values:

w_xh = 1
w_hh = 2
w_hy = 1

All biases are 0.

For one specific training case, the input is 1 at T = 0 and 0 at T = 1, T = 2, and T = 3. The target output (at T = 3) is 0.5, and we're using the squared error loss function.

We're interested in the gradient for w_xh, i.e. ∂E/∂w_xh. Because it's only at T = 0 that the input is nonzero, and it's only at T = 3 that there's an output, the error needs to be backpropagated from T = 3 to T = 0, and that's the kind of situation where RNNs often get either vanishing gradients or exploding gradients. Which of those two occurs in this situation?

You can either do the calculations and find the answer that way, or you can find the answer with more thinking and less math, by thinking about the slope ∂y/∂z of the logistic function, and what role it plays in the backpropagation process.
Exploding gradient

Vanishing gradient
Correct
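The "more thinking, less math" argument can also be checked numerically. A minimal sketch, assuming the usual unrolled computation z_t = w_xh x_t + w_hh h_{t-1}, h_t = σ(z_t), y = w_hy h_3: each backward step through time multiplies the error signal by w_hh σ'(z_t) = 2 h_t (1 - h_t), and since h stays near 0.8 that factor is roughly 0.3 < 1, so the gradient shrinks as it travels back to T = 0.

```python
import math

def sigmoid(k):
    return 1.0 / (1.0 + math.exp(-k))

w_xh, w_hh, w_hy = 1.0, 2.0, 1.0
xs = [1.0, 0.0, 0.0, 0.0]    # input is 1 at T = 0 and 0 afterwards
target = 0.5

# Forward pass
hs = []
h_prev = 0.0
for x in xs:
    z = w_xh * x + w_hh * h_prev
    h_prev = sigmoid(z)
    hs.append(h_prev)
y = w_hy * hs[-1]

# Backward pass: delta = dE/dz_t, starting at T = 3
delta = (y - target) * w_hy * hs[3] * (1 - hs[3])
for t in (2, 1, 0):
    # each step multiplies by w_hh * sigma'(z_t) = 2 * h_t * (1 - h_t),
    # which is well below 1 here, so |delta| shrinks at every step
    delta *= w_hh * hs[t] * (1 - hs[t])

# Only T = 0 has a nonzero input, so dE/dw_xh = delta_0 * x_0
dE_dwxh = delta * xs[0]
print(dE_dwxh)
```

Each backward step scales the signal by about a third, so after three steps the gradient for w_xh is already small: the vanishing case.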

1/1 points

6.
Consider the following Recurrent Neural Network (RNN):

As you can see, the RNN has two input units, two hidden units, and one output unit.

For this question, every arrow denotes the effect of a variable at time t on a variable at time t + 1.

Which feedforward Neural Network is equivalent to this network unrolled in time?

Correct
Correto