5.6 Challenges in quantitative measurement

Learning Objectives

  • Identify potential sources of error
  • Differentiate between systematic and random error

For quantitative methods, you should now have some idea about how conceptualization and operationalization work, and you should also know how to assess the quality of your measures. But measurement is sometimes a complex process, and some concepts are more complex than others. Measuring a person’s political party affiliation, for example, is less complex than measuring their sense of alienation. In this section, we’ll consider some of these complexities in measurement.

Systematic error

Unfortunately, measures never perfectly describe what exists in the real world. Good measures demonstrate reliability and validity but will always have some degree of error. Systematic error causes our measures to consistently output incorrect data, usually due to an identifiable process. Imagine you created a measure of height, but you didn’t include an option for anyone over six feet tall. If you used that measure at your local college or university, the taller members of the basketball team would not be measured accurately. In fact, you would be under the mistaken impression that the tallest person at your school was six feet tall, when in reality there are likely plenty of people taller than that. This error seems innocent, but if you were using that measure to help design a new building, those people might hit their heads!
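To see why this kind of ceiling effect counts as systematic error, here is a minimal sketch in Python. The sample heights and the 72-inch cap are invented numbers chosen for illustration, not data from the text.

```python
# A minimal sketch of systematic error from a capped height measure.
# The sample heights and the 72-inch (six-foot) cap are hypothetical.

true_heights = [65, 68, 70, 74, 77, 80]  # inches; the last three people are over six feet

# The flawed instrument cannot record anything above 72 inches (six feet).
measured_heights = [min(h, 72) for h in true_heights]

true_mean = sum(true_heights) / len(true_heights)
measured_mean = sum(measured_heights) / len(measured_heights)

print(f"True mean height:     {true_mean:.1f} in")      # 72.3 in
print(f"Measured mean height: {measured_mean:.1f} in")  # 69.8 in
# Every error is in the same direction (too short), which is what
# makes the error systematic rather than random.
```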

A less innocent form of error arises when researchers use question wording that might cause participants to think one answer choice is preferable to another. For example, if someone were to ask you, “Do you think global warming is caused by human activity?” you would probably feel comfortable answering honestly. But what if someone asked you, “Do you agree with 99% of scientists that global warming is caused by human activity?” Would you feel comfortable saying no, if that’s what you honestly felt? Possibly not. That is an example of a leading question, a question with wording that influences how a participant responds. We’ll discuss leading questions and other problems in question wording in greater detail in Chapter 7.

 

In addition to error created by the researcher, participants can cause error in measurement. Some people will respond without fully understanding a question, particularly if the question is worded in a confusing way. That’s one source of error. Let’s consider another. If we asked people if they always washed their hands after using the bathroom, would we expect people to be perfectly honest? Polling people about whether they wash their hands after using the bathroom might only elicit what people would like others to think they do, rather than what they actually do. This is an example of social desirability bias, in which participants in a research study want to present themselves in a positive, socially desirable way to the researcher. People in your study will want to seem tolerant, open-minded, and intelligent, but their true feelings may be closed-minded, simple, and biased. So, they lie. This occurs often in political polling, which may show greater support for a candidate from a minority race, gender, or political party than actually exists in the electorate.

A related form of bias is called acquiescence bias, also known as “yea-saying.” It occurs when people say yes to whatever the researcher asks, even when doing so contradicts previous answers. For example, a person might say yes to both “I am a confident leader in group discussions” and “I feel anxious interacting in group discussions.” Those two responses are unlikely to both be true for the same person. Why would someone do this? As with social desirability bias, people want to be agreeable and nice to the researcher asking them questions, or they might simply ignore contradictory feelings when responding to each question. Respondents may also have cultural reasons for doing so, such as trying to “save face” for themselves or the person asking the questions. Regardless of the reason, the results of your measure don’t match what the person truly feels.

Random error

So far, we have discussed sources of error that come from choices made by respondents or researchers. Usually, systematic errors will result in responses that are incorrect in one direction or another. For example, social desirability bias usually means more people will say they will vote for a third party in an election than actually do. Systematic errors such as these can be reduced, but there is another source of error in measurement that can never be eliminated: random error. Unlike systematic error, which biases responses consistently in one direction or another, random error is unpredictable and does not consistently push scores higher or lower on a given measure. Instead, random error is more like statistical noise, which will likely average out across participants.

 

Random error is present in any measurement. If you’ve ever stepped on a bathroom scale twice and gotten two slightly different results, then you’ve experienced random error. Maybe you were standing slightly differently or had a fraction of your foot off of the scale the first time. If you were to take enough measures of your weight on the same scale, you’d be able to figure out your true weight. In social science, if you gave someone a scale measuring depression on a day after they lost their job, they would likely score differently than if they had just gotten a promotion and a raise. Even if the person were clinically depressed, our measure is subject to influence by the random occurrences of life. Thus, social scientists speak with humility about our measures. We are reasonably confident that what we found is true, but we must always acknowledge that our measures are only an approximation of reality.
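To make the distinction concrete, here is a rough simulation sketch. The true weight, the size of the random noise, and the scale’s two-pound offset are all made-up numbers used only for illustration.

```python
import random

# A rough sketch: random noise tends to average out across many readings,
# while a systematic offset does not. The true weight, the noise level,
# and the scale's offset are made-up numbers for illustration.
random.seed(1)

true_weight = 150.0       # pounds
systematic_offset = 2.0   # a scale that always reads two pounds heavy

readings = []
for _ in range(1000):
    random_noise = random.gauss(0, 1.5)  # unpredictable, centered on zero
    readings.append(true_weight + systematic_offset + random_noise)

average_reading = sum(readings) / len(readings)
print(f"Average of 1,000 readings: {average_reading:.2f} lb")
# The average lands very close to 152 lb: the random noise has largely
# cancelled out, but the two-pound systematic offset remains.
```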

Humility is important in scientific measurement, as errors can have real consequences. When Matthew DeCarlo was writing the source material for this book, he and his wife were expecting their first child. Like most people, they used a pregnancy test from the pharmacy. If the test said his wife was pregnant when she was not, that would be a false positive. On the other hand, if the test indicated that she was not pregnant when she was in fact pregnant, that would be a false negative. Even if the test is 99% accurate, one in a hundred women will get an erroneous result when they use a home pregnancy test. For them, a false positive would have been initially exciting, then devastating when they found out they were not having a child. A false negative would have been disappointing at first and then quite shocking when they found out they were indeed having a child. While both false positives and false negatives are unlikely for home pregnancy tests (when taken correctly), measurement error can have consequences for the people being measured.
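As a back-of-the-envelope check on the “one in a hundred” figure, here is a small simulation sketch. The 50/50 split between pregnant and non-pregnant test-takers is a hypothetical assumption made only for this illustration, not a claim from the text.

```python
import random

# A back-of-the-envelope sketch of a hypothetical test that is 99% accurate:
# across many uses, roughly 1 in 100 results will be wrong.
# The 50/50 split between pregnant and non-pregnant users is an assumption
# made only for this illustration.
random.seed(42)

n_tests = 100_000
false_positives = 0   # test says "pregnant" when the person is not
false_negatives = 0   # test says "not pregnant" when the person is

for _ in range(n_tests):
    truly_pregnant = random.random() < 0.5   # hypothetical 50/50 split
    test_correct = random.random() < 0.99    # 99% accuracy either way
    result = truly_pregnant if test_correct else not truly_pregnant
    if result and not truly_pregnant:
        false_positives += 1
    elif not result and truly_pregnant:
        false_negatives += 1

errors = false_positives + false_negatives
print(f"Erroneous results: {errors:,} of {n_tests:,} ({errors / n_tests:.1%})")
# Prints roughly 1%: even a very accurate measure produces some false
# positives and some false negatives when used many times.
```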

 

Key Takeaways

  • Systematic error may arise from the researcher, participant, or measurement instrument.
  • Systematic error biases results in a particular direction, whereas random error can be in any direction.
  • All measures are prone to error and should be interpreted with humility.

 

Glossary

  • Acquiescence bias- when respondents say yes to whatever the researcher asks
  • False negative- when a measure does not indicate the presence of a phenomenon, when in reality it is present
  • False positive- when a measure indicates the presence of a phenomenon, when in reality it is not present
  • Leading question- a question with wording that influences how a participant responds
  • Random error- unpredictable error that does not consistently result in scores that are higher or lower on a given measure
  • Social desirability bias- when respondents answer based on what they think other people would like, rather than what is true
  • Systematic error- measures consistently output incorrect data, usually in one direction and due to an identifiable process

 

Image attributions

question by jambulboy CC-0

mistake by stevepb CC-0

 

License


Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
