
Assignment 1

Problem 1.2
Peter Szemraj
1. Results

F(x) = (1 - cos(x)) / x^2

lim (x → 0) F(x) = 1/2

F(x) = 2*sin(x/2)^2 / x^2

First, I proved that the limit of F(x) as x approaches zero is 1/2. I did this by taking the derivative of the numerator and the denominator twice (L'Hôpital's rule); the handwritten proof is attached. Then I proved that the original function, which uses cosine, is equivalent to the rewritten form that uses sine. I proved this mainly using the half-angle identity, and this proof is also handwritten and attached. From there, I defined x as a logspace vector as instructed, iterated through it in a for loop to calculate each y (that is, F(x)) value, and stored the results in a vector. I then graphed the logspace vector against each respective set of y-values obtained from my code.
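For reference (the full handwritten proof is attached rather than reproduced here), the key substitution step follows from the half-angle identity cos(x) = 1 - 2*sin(x/2)^2:

1 - cos(x) = 2*sin(x/2)^2
F(x) = (1 - cos(x)) / x^2 = 2*sin(x/2)^2 / x^2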

2. Discussion:
I think my algorithm for solving the problem is the best because it is the simplest. It does not require any complex knowledge or manipulation; it only requires a straightforward substitution based on the half-angle formula, which leaves as little as possible open to error. I also think that using a for loop is an extremely simple way to evaluate the function across the whole set of x values.
The point of graphing these two different equations is that one of them only partially illustrates that the limit of F(x) as x approaches zero is 1/2, while the other consistently shows a flat line at y = 1/2. The way we tried to simulate the limit is by plugging extremely small x values into each function and checking whether the resulting y values were close to the expected value of 1/2. This was done using the logspace function to generate a set of x values ranging from 10^-15 to 10^-5, so a log scale was used for the x-axis. Each of the y values should be close to 1/2, since that is the value expected near zero, and there is not enough variation in x to produce a drastic difference in the y-values. From looking at the graph, we can see that this is not the case for the cosine function.
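As a quick illustration of this behavior, the values below are what IEEE double precision typically produces when each form is evaluated at a single very small x (the exact output can vary slightly with the math library, so treat this as a sketch rather than part of the attached results):

x = 1e-10;                  % a single very small test value
(1 - cos(x)) / x^2          % cos(x) rounds to exactly 1, so this prints 0
2*sin(x/2)^2 / x^2          % prints approximately 0.5000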
I believe that the error in the cosine function occurs because of subtractive cancellation. Subtractive cancellation occurs when two nearly identical numbers are subtracted from one another: the difference is so small that, when it is represented in the computer, it can be rounded all the way to zero. I believe this is what happens with our cosine function. When x is very small, say 0.000000001 (10^-9), cos(x) is extremely close to 1; for example, cos(10^-9) is roughly 0.9999999999999999995 (that is, 1 - 5*10^-19), which is closer to 1 than double precision can resolve. So when we compute the numerator as 1 - cos(x), the result is a very small number that is rounded to zero because of storage limitations. While this is normally mitigated by using double precision, I am assuming that the value is so small that it is still lost to subtractive cancellation (and therefore round-off error).
In fact, if we take a look at the graph, we can see that the cosine function starts failing extremely hard at around 10^-8, and I believe this is due to the subtractive cancellation. If we attempt to compute cos(10^-8) in MATLAB, we receive the value 1, so the numerator becomes 1 - 1, which is zero. This is why, for every x value smaller than about 10^-8, the cosine function shows a y value of zero. This is incorrect, and it means our attempt at finding the limit at zero by plugging in very small x values does not work. To find out exactly why this occurs, I computed cos(10^-8) in Wolfram Alpha and received 0.999999999999999950000000000000000416. There are sixteen 9s after the decimal point, followed by a 5. Because the double-precision mantissa has 53 bits (2^53), which corresponds to roughly sixteen significant decimal digits, the cosine value is rounded to 1, and we get subtractive cancellation. This is also why the cosine function starts showing zero as the value around this number.
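A minimal check of this rounding behavior in MATLAB, assuming standard IEEE double precision (the boundary could shift by one rounding unit with a different math library):

eps(1)             % 2.2204e-16: spacing between adjacent doubles near 1
cos(1e-8) == 1     % true: the exact value 1 - 5e-17 is closer to 1 than to the next double below it
1 - cos(1e-8)      % 0, so the numerator of the cosine form vanishes entirely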
The explanation for why the cosine function shows y values greater than the actual maximum of the function (around the order of 10^-8 to 10^-7) is tougher, but I believe it is also due to the fact that extremely small quantities can only be represented in binary to a limited precision. The mantissa can only hold a limited number of digits, so the tiny numerator 1 - cos(x) is forced onto the nearest representable value, which can be noticeably larger or smaller than the true difference. I believe this round-off error causes the strange deviations in the graph, such as the point around (9.5*10^-7, 0.8), for example.
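To make this overshoot mechanism concrete, here is a minimal sketch at a hypothetical sample point near where the deviation appears; the exact output depends on how the math library rounds cos, but on a typical IEEE double system the ratio lands well above 0.5:

x = 1.1e-8;          % hypothetical sample point, chosen only for illustration
num = 1 - cos(x);    % exact difference is about 6.05e-17, but it snaps to ~1.11e-16
num / x^2            % roughly 0.9 instead of 0.5 on typical IEEE double hardware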

As can easily be observed on the graph, the sine function gives an extremely accurate value of F(x) across all of the x values chosen. As was proven, the sine form is mathematically equivalent to the cosine form, but it mitigates the error because it involves no subtraction, so there can be no subtractive cancellation. Instead, in floating-point multiplication and division, the exponents are added or subtracted and the mantissas are multiplied, which leaves much less room for round-off error. This is why the sine function gives a consistent result for F(x) throughout the domain of x values used.
Error analysis:
I described most of these errors above while answering the specific discussion questions, but I will summarize them again here. All of the functions I am referring to are written at the top of the first page. The cosine function is prone to two errors. First and foremost, it is prone to subtractive cancellation, which occurs when two extremely similar numbers are subtracted in floating-point representation. In this case, the floating-point representation rounds cos(10^-8), and cos of anything smaller, to 1. This causes the two numbers to cancel, and the function appears to be zero for x values of 10^-8 and smaller. The second error is due to the fact that only a finite set of values can be represented in binary. When the difference 1 - cos(x) is very small, it cannot be represented very accurately, and this round-off error can make the function either greater or smaller than it is supposed to be. I believe this is why points such as (9.5*10^-7, 0.8) are observed. The sine function gets around these errors because its operations involve only multiplication and division, which is much cleaner, as discussed above. The sine function itself is an embodiment of how to improve the error in this scenario.

%problem 2
%Peter Szemraj
%CHBE 305

% 100 logarithmically spaced x values from 1e-15 to 1e-5
x = logspace(-15,-5,100);
old_way = zeros(1,100);      % F(x) from the original cosine form
revised_way = zeros(1,100);  % F(x) from the sine substitution

%compute using original equation, (1 - cos(x))/x^2
for i = 1:100
    old_way(i) = (1-cos(x(i)))/(x(i)^2);
end

%revised way found using substitution, 2*sin(x/2)^2/x^2
for i = 1:100
    revised_way(i) = (2*sin(x(i)/2)^2)/(x(i)^2);
end

figure(1)
hold on
title('Accuracy of sine vs cosine equations')
plot(x,old_way,'r-o')
plot(x,revised_way,'k-+')
xlabel('X values (log scale)')
ylabel('Y values')
legend('f(x) using cosine','f(x) using sine')
set(gca, 'XScale', 'log')

Published with MATLAB R2013a
