There are some operations that have costly running times,
but the worst possible average running time of the same operation over a sequence of several operations is much less costly. In these situations, we say that the amortized worst-case running time of the operation is the latter value. This is a stronger claim, and a more accurate analysis of an operation, than just finding its worst-case running time for a single operation.

Consider the clearable table data structure, which supports two operations:

1) Add an entry.
2) Clear the entire table.

An add operation always takes O(1) time. A clear operation takes O(k) time, where k is the number of items currently in the table. Starting with an empty table and running n operations, it is possible that a single clear operation takes O(n) time, since the table could have up to n - 1 items in it.

Now, let's do the amortized analysis. Our sequence of operations can be viewed as several adds followed by a clear, repeated several times. If there are k clear operations out of n total operations, then there are n - k add operations. BUT, the total running time of all the clears is at most the total number of elements that ever get cleared, which is at most n - k, since each added element can be cleared only once. Thus, the total running time of all n operations is at most (n - k) + (n - k) = 2(n - k) <= 2n, and the average time for an operation on a clearable table is less than 2n/n = 2. Thus, even though the worst-case time for a single clear operation is O(n), the amortized worst-case time of the add and clear operations over n operations is O(1).

Amortization Techniques

There are a couple of different techniques that aid in amortized analysis: the accounting method and the potential function method. Even though the names of the methods are different, they are quite similar. In each you track simple operations with money or with a potential function. In the accounting method, you start with a certain amount of money and each simple operation costs one dollar. You must show that you don't spend all of your money after performing n consecutive operations.
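As a concrete check of the clearable-table argument, here is a minimal sketch in Java. The class name and the `steps` counter are illustrative, not a standard API; the counter tallies one "dollar" of work per simple step so we can confirm that n operations never cost more than 2n steps.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative clearable table that counts simple steps,
// so the amortized bound can be checked empirically.
class ClearableTable<T> {
    private final List<T> items = new ArrayList<>();
    long steps = 0; // total simple steps performed so far

    void add(T x) {
        items.add(x);
        steps += 1; // an add always costs one step: O(1)
    }

    void clear() {
        steps += items.size(); // clearing k items costs k steps: O(k)
        items.clear();
    }
}

public class ClearableTableDemo {
    public static void main(String[] args) {
        ClearableTable<Integer> t = new ClearableTable<>();
        int n = 0; // total operations performed
        // Several adds followed by a clear, repeated several times:
        for (int round = 0; round < 100; round++) {
            for (int i = 0; i < 9; i++) { t.add(i); n++; }
            t.clear(); n++;
        }
        // Each round: 9 add steps + 9 clear steps = 18 steps for 10 ops,
        // so total steps stay below 2n and the average is below 2.
        System.out.println(t.steps + " steps for " + n + " ops");
    }
}
```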
Regardless of which analogy you use, the key in amortized analysis is to determine the worst-case running time of a certain number of consecutive operations. As an exercise, compute the amortized worst-case running time of n consecutive variable-popping stack operations. A variable-popping stack has two operations:

1) push(x), which pushes the element x onto the stack.
2) pop(k), which pops the top k elements off the stack.

The running time of the pop(k) operation is O(k), since each element must be popped off individually.

Amortization: Extendable Array Implementation

Java provides a dynamic Vector class. In essence, a vector is an array that automatically grows when necessary. Although this seems like a simple operation, in reality a whole new array has to be allocated any time the size of the array is changed. Thus, adding an element to an array that forces the array to extend itself may be a very costly operation: O(n), where n is the current number of elements in the array. Now the question is: can we extend an array in such a manner that the amortized worst-case running time of a sequence of array operations is O(1)? The answer, as you might imagine, is yes. Here is how: every time the array needs to be resized, double the size of the array.

Given an array with n items in it, the worst case is that the array is full. If this is so, consider any n consecutive add operations on the array. No matter what, we will double the size of the array at most once. When this occurs, we'll have to allocate the new space and copy over each element one by one. (Hopefully a full explanation of this was given in CS2.) Regardless, the total number of steps necessary for the add operation that triggers the resize is approximately n. For each of the other n - 1 adds, we only use one simple step. Adding these up, we get n + (n - 1) simple steps, which is essentially 2n, meaning that the amortized worst-case running time per operation for an extendable array is O(2n/n) = O(1).
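The doubling strategy above can be sketched as follows. This is a hand-rolled illustration, not how `java.util.Vector` is actually implemented; the `steps` counter is an assumption added so we can watch the total work stay near 2n.

```java
// Sketch of a doubling extendable array. We count one "step" per
// element copied during a resize and one per element written, to
// see that n adds take roughly 2n steps in total.
public class ExtendableArray {
    private Object[] data = new Object[1];
    private int size = 0;
    long steps = 0;

    void add(Object x) {
        if (size == data.length) {
            // Array is full: allocate a new array of twice the size
            // and copy every existing element over, one by one.
            Object[] bigger = new Object[2 * data.length];
            for (int i = 0; i < size; i++) {
                bigger[i] = data[i];
                steps++; // one step per copied element
            }
            data = bigger;
        }
        data[size++] = x;
        steps++; // one step for the write itself
    }

    public static void main(String[] args) {
        ExtendableArray a = new ExtendableArray();
        int n = 1024;
        for (int i = 0; i < n; i++) a.add(i);
        // Copies: 1 + 2 + 4 + ... + 512 = 1023; writes: 1024.
        // Total 2047 steps, which is just under 2n.
        System.out.println(a.steps + " steps for " + n + " adds");
    }
}
```

Starting from capacity 1, the resize costs form a geometric series, which is why the total stays within a constant factor of n.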
Here is one way to justify this: every time you do an add, imagine paying $2. When it's a simple add, $1 goes into a bank. Whenever you have to expand the array, you'll have enough money in the bank to pay for copying every element over.

Amortized Analysis of a Binary Counter

Consider a binary counter of n bits that counts from 0 to 2^n - 1. For example, here is the counter for n = 4:

0000
0001, 1 bit flipped
0010, 2 bits flipped
0011, 1 bit flipped
0100, 3 bits flipped
0101, 1 bit flipped
0110, 2 bits flipped
0111, 1 bit flipped
1000, 4 bits flipped
1001, 1 bit flipped
1010, 2 bits flipped
1011, 1 bit flipped
1100, 3 bits flipped
1101, 1 bit flipped
1110, 2 bits flipped
1111, 1 bit flipped

We might be interested in the total number of bit flips this counter performs. The worst case for a single increment is flipping all n bits; this occurs when we flip from 0111...1 to 1000...0. But, over the course of the whole count, we can show that the number of bit flips per increment averages just under 2.

From the example above, we see that we make 2^n - 1 counter changes. Of these, 2^(n-1) involve 1 flip (this is every other increment), 2^(n-2) involve 2 flips (this is every fourth increment), etc. In fact, we can construct a chart like so:

Number of bit flips    Number of times this occurs
1                      2^(n-1)
2                      2^(n-2)
3                      2^(n-3)
...                    ...
k                      2^(n-k)
...                    ...
n                      1

To count the total number of bit flips, we simply multiply the two numbers in each row of this table and then add all of those products up. Let S = this sum:

S = 1*2^(n-1) + 2*2^(n-2) + 3*2^(n-3) + ... + n*2^0

Now, multiply this equation by 2 to yield:

2S = 1*2^n + 2*2^(n-1) + 3*2^(n-2) + ... + n*2^1

Put this equation right above the other and subtract the first from it, like so:

2S = 1*2^n + 2*2^(n-1) + 3*2^(n-2) + ... +     n*2^1
 S =         1*2^(n-1) + 2*2^(n-2) + ... + (n-1)*2^1 + n*2^0
------------------------------------------------------------
 S = 1*2^n + 1*2^(n-1) + 1*2^(n-2) + ... + 1*2^1 - n

 S = 2^(n+1) - 2 - n

utilizing the formula for a finite geometric sequence (2^1 + 2^2 + ... + 2^n = 2^(n+1) - 2).
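The closed form S = 2^(n+1) - 2 - n can be double-checked by simulating the counter. This sketch (class and method names are my own, not from the notes) counts flips per increment directly: incrementing flips every trailing 1 to 0 and then flips one 0 to 1.

```java
// Count the bit flips of an n-bit binary counter by simulation and
// compare against the closed form S = 2^(n+1) - 2 - n derived above.
public class BinaryCounterFlips {
    // Bits that change when incrementing value v: every trailing 1
    // flips to 0, and then a single 0 flips to 1.
    static int flipsOnIncrement(int v) {
        int flips = 1;          // the 0 that becomes 1
        while ((v & 1) == 1) {  // each trailing 1 that becomes 0
            flips++;
            v >>= 1;
        }
        return flips;
    }

    static long totalFlips(int n) {
        long total = 0;
        for (int v = 0; v < (1 << n) - 1; v++) { // 2^n - 1 increments
            total += flipsOnIncrement(v);
        }
        return total;
    }

    public static void main(String[] args) {
        int n = 4;
        long s = totalFlips(n);
        long closedForm = (1L << (n + 1)) - 2 - n;
        // For n = 4: 8*1 + 4*2 + 2*3 + 1*4 = 26 = 2^5 - 2 - 4.
        System.out.println(s + " flips, formula gives " + closedForm);
    }
}
```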
Thus, to get the average number of bit flips per increment, take S and divide by the total number of increments, 2^n - 1. We can easily show that the result of this division is

S / (2^n - 1) = (2^(n+1) - 2 - n) / (2^n - 1) = 2 - n/(2^n - 1),

a value that approaches 2 from below as n gets large.
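A quick numerical check of this limit (a throwaway sketch, not part of the notes) evaluates the average for a few values of n and confirms it creeps up toward 2 while staying below it:

```java
// Evaluate the average flips per increment, S/(2^n - 1), for a few n
// and observe it approaching 2 from below.
public class AverageFlips {
    static double average(int n) {
        double increments = (1L << n) - 1;          // 2^n - 1
        double s = (1L << (n + 1)) - 2 - n;         // total flips
        return s / increments;                      // = 2 - n/(2^n - 1)
    }

    public static void main(String[] args) {
        for (int n = 2; n <= 20; n += 6) {
            System.out.printf("n=%2d average=%.6f%n", n, average(n));
        }
    }
}
```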