Sources of Experimental Error


Experimental Error

            Errors – or uncertainties in experimental data – can arise in numerous ways and are inherent in every experiment in some form or another. The simple act of making a measurement introduces error through the uncertainty in the last digit read. Experimental error is not a mistake or accident made by the experimenter, but rather an aspect of the procedure itself that, even if the procedure is performed correctly, will cause the results to be "off."

Recognizing sources of experimental error is very important not only in analyzing results, but also in designing a procedure.  If sources of experimental error are anticipated beforehand, modifications can often be made to try to minimize these sources of error, and thus obtain more accurate results.  

Blunders (mistakes).

        Blunders (or mistakes) are goofs or accidents that happen during the lab. Examples include spilling some of the solid on the balance pan instead of the weighing dish, misreading the instrument, not filling your pipette all the way to the "zero" mark before emptying it, or forgetting to put a stopper on your sample overnight. BLUNDERS ARE NOT CONSIDERED SOURCES OF EXPERIMENTAL ERROR. The data collected when a blunder occurs are invalid and should be discarded, and such mistakes should not be cited as sources of experimental error.

               

Human error.

            Human error is often confused with blunders, but it is rather different – though one person's human error is another's blunder, no doubt. It arises when the experimenter performs the experiment truly to the best of his or her ability but is let down by inexperience. Such errors lessen with practice. They also do not help in the quantitative assessment of error.
 

Instrumental limitations.

            Uncertainties are inherent in any measuring instrument. A ruler, even one as well made as is technologically possible, has calibration marks of finite width; a 25.0 cm3 pipette of grade B accuracy delivers this volume to within 0.06 cm3 if used correctly. A digital balance showing two decimal places can, by its very nature, only weigh to within 0.005 g, and even then only if it rounds the figures to those two places.

            Calibrations are made under certain conditions, which have to be reproduced if the calibrations are to be true within the specified limits. Volumetric apparatus is usually calibrated for 20 °C, for example; the laboratory is usually at some other temperature.

            Analogue devices such as thermometers or burettes often require the observer to interpolate between graduations on the scale. Some people will be better at this than others.

            Quantities smaller than the smallest markings on the instrument are really beyond the scope of what that instrument is designed to measure. Such readings carry a huge percentage uncertainty and usually lead to inaccurate results.
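
As a rough, hypothetical illustration (the ruler and readings below are invented, not drawn from any particular instrument), the percentage uncertainty grows sharply as the measured quantity approaches the instrument's smallest division:

    # A minimal sketch, assuming the reading uncertainty is half of the
    # instrument's smallest division (a common rule of thumb).
    def percent_uncertainty(reading, smallest_division):
        absolute_uncertainty = smallest_division / 2
        return 100 * absolute_uncertainty / reading

    # Hypothetical ruler marked in 0.1 cm divisions.
    print(percent_uncertainty(25.0, 0.1))   # large object: 0.2 %
    print(percent_uncertainty(0.3, 0.1))    # tiny object: roughly 17 %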

            These limitations exist; whether they are dominant errors is another matter.
 
Difficulty in Measuring

Some measurements involve a degree of human judgment and thus introduce error.  For example:

  • A person pushes "stop" on a stopwatch when a car crosses a line. The delay in the person's reaction time introduces error into this measurement (see the sketch after this list).  
  • The radius of a balloon is measured with a ruler.  Because the balloon is curved, its radius is hard to measure with a flat ruler. 
  • A person's height is measured by having them stand against a measuring tape on a wall.  The difficulty here is that a line perfectly parallel to the floor must be projected from the top of the person's head to the corresponding mark on the wall.  If the line angles up or down, the mark will not "hit the wall" at the correct spot and the measurement will be off.
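
To put a rough number on the stopwatch example, the sketch below assumes a reaction delay of about 0.2 seconds (a typical figure, used here purely for illustration) and shows how the percentage error it causes depends on the length of the timed interval:

    # A minimal sketch: percentage error from an assumed 0.2 s reaction delay.
    REACTION_DELAY_S = 0.2  # assumed value, for illustration only

    for true_time_s in (1.0, 10.0, 60.0):
        percent_error = 100 * REACTION_DELAY_S / true_time_s
        print(f"{true_time_s:5.1f} s event: about {percent_error:.1f} % error")

The fixed delay matters less and less as the timed event gets longer.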

False Assumptions

Again, these are not really "wrong" methods, but pieces of the method that might not be 100% correct.  Often they are formulas that are used, or measurements taken of a sample that supposedly represents the whole.  Examples include:
  • Using the formula for the volume of a sphere to calculate the volume of a balloon.  The balloon may be pretty much a sphere, but it is not a perfect sphere (see the sketch after this list).
  • Measuring the thickness of a penny at one point and assuming that it has the same thickness throughout.
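
As a hedged illustration of the balloon example (all dimensions below are invented), comparing the perfect-sphere formula with an ellipsoid model of a slightly squashed balloon shows how much the false assumption can shift the result:

    import math

    # Hypothetical balloon: round in two directions, squashed in the third.
    a = b = 10.0  # cm, semi-axes in the "round" directions
    c = 9.0       # cm, semi-axis in the squashed direction

    v_sphere = (4 / 3) * math.pi * a ** 3          # assumes a perfect sphere
    v_ellipsoid = (4 / 3) * math.pi * a * b * c    # closer model of the real shape

    percent_error = 100 * (v_sphere - v_ellipsoid) / v_ellipsoid
    print(f"Sphere assumption overestimates the volume by about {percent_error:.0f} %")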

Observing the system may cause errors.

            If you have a hot liquid and you need to measure its temperature, you will dip a thermometer into it. This will inevitably cool the liquid slightly. The amount of cooling is unlikely to be a source of major error, but it is there nevertheless.
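
A minimal sketch of the thermometer example, using invented numbers (100 g of water at 80 °C and a 20 g glass thermometer at 20 °C, with textbook heat capacities), estimates how much the liquid cools when the two reach the same temperature:

    # Assumed values, for illustration only.
    m_liquid, c_liquid, T_liquid = 100.0, 4.18, 80.0  # g, J/(g K), deg C
    m_thermo, c_thermo, T_thermo = 20.0, 0.84, 20.0   # g, J/(g K), deg C

    # Final temperature when the liquid and thermometer reach equilibrium.
    T_final = (m_liquid * c_liquid * T_liquid + m_thermo * c_thermo * T_thermo) / (
        m_liquid * c_liquid + m_thermo * c_thermo
    )
    print(f"Liquid cools from {T_liquid} to about {T_final:.1f} deg C")

With these assumed numbers the reading is pulled down by roughly 2 °C – real, but, as noted above, rarely the dominant error.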

            The act of observation can cause serious errors in biological systems. Simply handling an animal will cause adrenaline release that will change its biochemistry, for example. The design of biological experiments is not our concern here, but it is a particularly difficult aspect of experimental design.
 

Errors due to external influences.

            Such errors may come from draughts on the balance pan, for example (though this seems pretty close to a blunder), or from impurities in the chemicals used. Such things are unlikely to be significant in a carefully designed and executed experiment, but they are often discussed by students because they are fairly obvious.
 

Sampling.

            Many scientific measurements are made on populations. This is most obviously true in biology, but even the three values that you (perhaps) get from a titration form a population, albeit a small one. It is intuitively understood that the more samples you have from a given population, the smaller the error is likely to be. This is why I do not permit students to be satisfied with two concordant titration figures; I am slightly more convinced by three, and prefer four.
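
A minimal sketch of why extra replicates help, using made-up titre volumes: the standard error of the mean falls roughly as one over the square root of the number of measurements, so each additional concordant titre narrows the uncertainty in the mean.

    import statistics

    # Hypothetical titre volumes in cm3, invented for illustration.
    titres = [25.05, 24.95, 25.00, 25.00]

    for n in (2, 3, 4):
        subset = titres[:n]
        mean = statistics.mean(subset)
        # Standard error of the mean = sample standard deviation / sqrt(n)
        sem = statistics.stdev(subset) / n ** 0.5
        print(f"n = {n}: mean = {mean:.3f} cm3, standard error = {sem:.3f} cm3")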

            Related to this are errors arising from unrepresentative samples. Suppose that a chemist wishes to measure the levels of river pollution. The amount of a particular pollutant will depend on the time of day, the season of the year, and so on. So a measurement made at 3 o'clock on a Friday afternoon may be utterly unrepresentative of the mean levels of the pollutant during the rest of the week. It doesn't matter how many samples he takes – if the sampling method is this biased, a true picture of the mean levels of pollutant in the river cannot be obtained. A large number of samples does not of itself ensure greater accuracy.

            The bias in this example is fairly obvious. This is not always so, even to experienced investigators. Sir Ronald Fisher's famous text 'The Design of Experiments' deals with the difficulties of removing bias in biological investigations, and is a work on statistical methods. This degree of analysis is outside our present concerns, though will not be outside yours if you go on to do research in many fields of science.