
Measuring the acceleration due to gravity

I set out to measure g, the acceleration due to gravity.

Near the Earth's surface, g is effectively constant, so an object falling under gravity alone accelerates at a constant rate; counterforces such as air resistance reduce this. Acceleration is a vector quantity, and g acts vertically downwards. To measure g, I therefore used free fall, which is the motion of an object undergoing acceleration of g (in simple terms, an object being dropped). The formula for uniform acceleration (which in this experiment is g) follows:

a = (v - u) / t

where
u = initial velocity
v = final velocity
a = acceleration
t = time taken
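As a sketch, this calculation can be written in code. The numbers below are illustrative stand-ins, not my recorded values:

```python
def acceleration(u: float, v: float, t: float) -> float:
    """Uniform acceleration from initial velocity u (m/s), final
    velocity v (m/s) and the time t (s) taken between the two points."""
    return (v - u) / t

# Illustrative values only: a card entering the top gate at 0.5 m/s
# and leaving the bottom gate 0.3 s later at 3.44 m/s.
g_estimate = acceleration(0.5, 3.44, 0.3)
print(round(g_estimate, 2))  # -> 9.8
```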


From this formula, I knew that I could calculate g from the initial velocity of an object at point A, its velocity at point B, and the time taken to fall from A to B.

Equipment list
- 2 light gates
- Plastic tube (1 m)
- Single segment card (30 mm)
- Sellotape
- Measuring tape
- 2 clamp stands
- Cardboard tube
- EasySense data logger

Procedure
I clamped the plastic tube with clamp stands at both ends so that it stood vertically; the top clamp sat on a table, as shown in the diagram of the experiment. I used Sellotape to join the single segment card to the cardboard tube, so that the card faced horizontally outwards. Then, using two clamps on the two stands, I attached two light gates, positioned so that the gap between the sensors was parallel to the plastic tube, around 20 mm away. The distance between the two light gates was measured with a tape measure at two corresponding points. There was over 10 cm of vertical distance below the lower light gate, so that the segment card could fall fully through the gate (and thus not disrupt the sensing). Both light gates were then plugged into the one data logger. I set up the recording

software on the data logger to record the time taken for the card to travel from A (the higher light gate) to B (the lower light gate), as well as the velocity at A and at B. I could then proceed with the experiment.

Holding the card slightly above point A, I angled it so it would fall straight through A and, as the gates were parallel, through B. I then dropped the card. The time (in seconds, to 3 d.p.) and the velocities at A and B (in m/s, to 3 d.p.) were copied by hand from the data logger into a table after each drop. I dropped the card 20 times and recorded these results, taking special care to avoid passing through the light gates when carrying the card back up above A for the next drop; this involved rotating the cardboard tube.

After I had compiled 20 results, I moved on to another distance. Changing the distance involved moving one of the light gates vertically: I slipped one of the light gate clamps up or down its stand and/or used a base of textbooks to raise whichever clamp stand (and thus light gate) needed raising. The distances themselves were chosen systematically, to avoid measuring two distances very close together while leaving a large gap between other sets. After recording 6 sets of results at 6 different distances, I finished the experiment and packed everything away.

I hypothesised that the longer the distance of the free fall, the more accurate the calculated value of g would be. My reasoning was that uncertainty from the measuring instruments would affect the calculations less: errors, both systematic and random, would distort the results less, so the calculated acceleration would be closer to the accepted value of 9.81 m/s2. This is the documented figure I obtained via NASA's ADS, a reliable source: http://adsabs.harvard.edu/abs/1967RSPTA.261..211C.
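The reasoning behind this hypothesis can be sketched numerically: a fixed absolute uncertainty is a smaller fraction of a larger reading. The times below are illustrative, not my recorded data:

```python
# A fixed absolute reading uncertainty (0.005 is the half-interval
# figure used in this report) has a smaller *relative* effect on a
# longer fall time, so longer free falls should give more accurate g.
resolution = 0.005
for t in (0.05, 0.15, 0.35):  # short, medium, long free fall (s)
    percent = 100 * resolution / t
    print(f"t = {t:.2f} s -> {percent:.1f}% uncertainty")
```

The printed percentages fall as t grows, matching the trend I expected to see in the results.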

Assessment of uncertainty and systematic error

I recorded each velocity and time to 3 d.p., so I treated each value as having an uncertainty of 0.005. Using these values I calculated the highest possible value for each result (with the help of Excel formulae) and subtracted the original value from it, leaving a value for the uncertainty. The uncertainty, as the tables show, decreases as the free-fall length increases: the actual inaccuracy of the equipment stays the same, but it has a bigger effect on the smaller values. The equipment I used can be deemed very accurate, as the uncertainty shows. The percentage uncertainty stays very low throughout the experiment, under 1% in all results, ranging from a mean average of 0.538% at 20 cm to 0.208% at 95 cm. Although the uncertainty is small, I feel I could have reduced it even further by recording results to more decimal places than the 3 d.p. I used; this would cut down the random error in my measurements. This uncertainty was caused by limitations of my experimental procedure.

One large systematic error I identified was air resistance. Air resistance lowered all of my results for the acceleration, as it acted as a counterforce against the velocity the tube was gaining each time. However, I could not have prevented this systematic error without using a vacuum, which would be far too impractical and expensive.

I calibrated my tape measure by measuring a metre ruler with it. As I knew the metre ruler's length, I could see how accurately the tape measured it, and thus work out whether it was introducing a systematic error. It was accurate.
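The "highest possible value" method described above can be sketched as follows. The 0.005 half-interval is the figure used in this report; the readings themselves are illustrative:

```python
HALF_INTERVAL = 0.005  # reading uncertainty assumed in the report

def acceleration(u, v, t):
    return (v - u) / t

def acceleration_uncertainty(u, v, t):
    """Worst-case uncertainty: push every reading in the direction
    that makes the calculated acceleration as large as possible,
    then subtract the nominal value (the Excel method in the text)."""
    nominal = acceleration(u, v, t)
    highest = acceleration(u - HALF_INTERVAL,
                           v + HALF_INTERVAL,
                           t - HALF_INTERVAL)
    return highest - nominal

# Illustrative readings only, not values from my tables.
u, v, t = 0.512, 3.448, 0.300
print(round(acceleration_uncertainty(u, v, t), 3))
```

Because u is lowered, v is raised and t is shortened together, this gives the largest acceleration the readings could plausibly represent, so the difference from the nominal value bounds the uncertainty.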

I also reduced a systematic error considerably halfway through the experiment. Originally, I used the distance and the time from A to B to calculate g. The value of g came out far too high, as if the distance I measured were too short; this was because I was not dropping the card at A with exactly zero velocity (which is practically impossible). In response, I changed what I measured: instead of distance, I measured the velocity at each point. Using the formula on page 1, I could then derive the acceleration between the two points. This proved far more effective, accurate and reliable: not only was a systematic error eliminated, but because the velocity at A no longer mattered, the results had a better standard deviation this time round (see tables).

I also calibrated the EasySense data logger. This was appropriate, as an uncalibrated logger would have skewed the results with a systematic error. The data logger had no response time, so there was no margin for error there.

One improvement I tried was to tilt the light gates vertically. I suspected a systematic error: perhaps I had set the light gates up wrongly, so that the data logger recorded values for the acceleration too high. I tried a variety of angles, but it seemed to make no difference to the results.

One major limitation of my experiment was how the tube was dropped. Especially at the larger distances, I had to drop the tube precisely to avoid it hitting either the table it was based on or either of the sensors. If it hit anything, its velocity would instantly decrease, resulting in a flawed value for g. I tried to be as accurate as possible when dropping the tube, but occasionally it would hit something. I would watch and listen to each drop so I could tell when this happened, and then not note the result.
A solution another individual used was to steer the tube through the light gates with metre rulers. I stayed away from this solution because, although it made the experiment easier, it would introduce a systematic error (the rulers would rub against the tube and slow it down). Even so, this part of the procedure remained a major limitation, especially as in my method there was still some friction between the two tubes, adding unknown uncertainty to the results.


[Figure 1: How the value of measured gravity changes with length of free fall. Acceleration measured (m/s2) is plotted on the y-axis (9.000 to 10.600) against length of free fall (cm) on the x-axis (0 to 120), with a linear trend line through the acceleration values.]

[Figure 2: Measuring acceleration due to gravity using a line of best fit. Velocity at B (final) minus velocity at A (initial) (m/s) is plotted on the y-axis (0.000 to 3.500) against time (s) on the x-axis (0.000 to 0.350). The fitted line has the equation y = 9.778x + 0.0208.]

Trends and evaluation

All sets of data in my experiment give a value for gravity close to the accepted value (9.81 m/s2), as can be seen in Figure 1. However, the values for each set are quite scattered: the mean of the highest set is 10.096 m/s2, 0.315 higher than the lowest set's mean of 9.781 m/s2 at 67 cm. That said, the standard deviation within each set is similar (see tables), and as expected, the longer free falls yield a better standard deviation: the 20 cm set has a relatively large deviation of 0.305 m/s2, while the longest free fall, 95 cm, gives precise values for g, with a standard deviation of 0.087 m/s2. This is clearly visible in the error bars of Figure 1, where the vertical bars shrink from left to right.

The average value for gravity from all my raw data is 9.875 m/s2, which is significantly higher than 9.81 m/s2. Many variables could affect my values for gravity, such as air resistance, friction from the tube and a non-vertical drop, but these would all decrease the acceleration. This leads me to believe there was a systematic error due to a measuring fault: faulty light gates or a faulty data logger. Figure 2 supports this. The line of best fit I plotted with Excel does not pass through (0, 0), as it should. The graph's equation is y = 9.778x + 0.0208: the 9.778 is the gradient, i.e. the acceleration (see the equation on page 1), while the 0.0208 is an offset in v - u, meaning the recorded value is approximately 0.02 m/s higher than its actual value. This systematic error shifts the results from a very reasonable 9.778 m/s2 to a significantly too high 9.875 m/s2. An average of 9.778 would be almost perfect, as the measured value of gravity should be slightly lower than the theoretical one to allow for counterforces such as friction.
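The Excel trend line in Figure 2 is an ordinary least-squares fit of (v - u) against t. A minimal pure-Python version of the same calculation, using made-up data points rather than my results, looks like this:

```python
def least_squares(xs, ys):
    """Ordinary least-squares fit y = m*x + c.
    Here x is time t and y is (v - u), so the gradient m is the
    acceleration, and a non-zero intercept c signals a systematic
    offset in the recorded velocities."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    c = mean_y - m * mean_x
    return m, c

# Illustrative points lying exactly on y = 9.78*x + 0.02.
times = [0.05, 0.10, 0.20, 0.30]
dvs = [9.78 * t + 0.02 for t in times]
gradient, intercept = least_squares(times, dvs)
print(gradient, intercept)  # ~9.78 and ~0.02
```

A gradient near 9.8 with an intercept near zero is what a clean experiment should produce; the 0.0208 intercept in Figure 2 is what pointed me to the systematic error.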

Although the data seems inconsistent, there is a weak correlation, shown by the line of best fit in Figure 1: the average measured acceleration decreases as the free-fall distance increases. I believe this is because the longer free falls lead to more accurate results. If there was a systematic error in the experiment, the data with the longer measured times (i.e. the longer free falls) would be less affected by it, as the error would be a smaller percentage of the measurement used to calculate gravity. Even so, the values are clearly very variable, as Figure 1's error bars show, and my equipment and experimental method played a part in this, leading to more uncertainty in the results.

I used my data logger to measure the velocity of the card at points A and B, which meant I could use my initial formula to calculate the acceleration of the card. In that respect, my experiment was valid. I controlled variables, increasing internal validity: I made sure as little as possible was affecting the acceleration of the card. Not using rulers to guide the card helped the validity of the results, as did measuring velocity rather than time and distance (which reduces error, as the card does not need to pass through A at exactly 0 m/s).

I identified one anomaly in my results, highlighted in red in my table for 20 cm. Its acceleration, 8.566, seemed far too low, especially compared to the other results in that set, which had a low standard deviation of 0.296; 8.566 lay far outside this spread. I discounted this result from other calculations, such as the mean. It must have been caused by a random error; a possible cause is that I noted down the final velocity wrongly from the data logger.

I believe the most important cause of uncertainty was the data logger. The length of the free fall was somewhat irrelevant, as I did not use displacement (s) to compute the acceleration; the time and velocities, however, were both recorded with the data logger, and because they were recorded to 3 d.p., the percentage error of the acceleration was on average 0.35%.

Overall, I thought the experiment was successful to a degree. A systematic error meant my results were too high, and the uncertainty could have been reduced further. Despite this, it was a success in some respects: the results were relatively reliable in my opinion, and they became more reliable and precise the larger the free fall, as I hypothesised.
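The anomaly screening described above (comparing a reading against the mean and standard deviation of its set) can be sketched as follows; the readings are illustrative stand-ins for the 20 cm set, apart from the anomalous 8.566 itself:

```python
from statistics import mean, pstdev

def flag_anomalies(values, k=3.0):
    """Return the values lying more than k standard deviations
    from the mean of their set (candidates for exclusion)."""
    m = mean(values)
    sd = pstdev(values)
    return [v for v in values if abs(v - m) > k * sd]

# Illustrative set: most readings near 9.9, one clearly low value.
readings = [9.85, 9.92, 9.88, 9.95, 9.90, 8.566]
print(flag_anomalies(readings, k=2.0))  # -> [8.566]
```

Any flagged value would then be excluded from the mean and other summary statistics, exactly as the 8.566 result was.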

