12.2 Survey Considerations

Learning Objectives

Learners will be able to…

  • Distinguish between retrospective and prospective studies
  • Distinguish between cross-sectional and longitudinal surveys
  • Identify the strengths and limitations of each approach to collecting survey data, including the timing of data collection and how the questionnaire is delivered to participants

As we discussed in the previous chapter, surveys are versatile and can be shaped and suited to most topics of inquiry. While that makes surveys a great research tool, it also means there are many options to consider when designing your survey. The two main considerations for designing surveys are (1) time-related and (2) administration-related.

Time-Related Considerations

Cross-sectional surveys: A snapshot in time

Think back to the last survey you took. Did you respond to the questionnaire once or did you respond to it multiple times over a long period? Cross-sectional surveys are administered only one time. Chances are the last survey you took was a cross-sectional survey—a one-shot measure of a sample using a questionnaire. And chances are if you are conducting a survey to collect data for your project, it will be cross-sectional simply because it is more feasible to collect data once than multiple times.

Let’s take a very recent example, the COVID-19 pandemic. Enriquez and colleagues (2021)[1] wanted to understand the impact of the pandemic on undocumented college students’ academic performance, attention to academics, financial stability, mental and physical health, and other factors. In cooperation with offices of undocumented student support at eighteen campuses in California, the researchers emailed undocumented students a few times from March through June of 2020 and asked them to participate in their survey via an online questionnaire. Their survey presents a compelling look at how COVID-19 worsened existing economic inequities in this population.

Strengths and weaknesses of cross-sectional surveys

Cross-sectional surveys take advantage of many of the strengths of survey design. They are easy to administer since you only need to measure your participants once, which makes them highly suitable for research projects. Keeping track of participants for multiple measures takes time and energy, two resources always under constraint in student projects. Conducting a cross-sectional survey simply requires collecting a sample of people and getting them to fill out your questionnaire—nothing more.

That convenience comes with a tradeoff. When you only measure people at one point in time, you can miss a lot. The events, opinions, behaviors, and other phenomena that such surveys are designed to assess don’t generally remain the same over time. Because nomothetic causal explanations seek general, universal truths, a survey conducted a decade ago cannot tell us what people think and feel today, or what they thought and felt twenty years ago. In student research projects, this weakness is often compounded by the use of availability sampling, which further limits the generalizability of the results to places and times beyond the sample collected by the researcher. Imagine generalizing pre-pandemic results on the use of telehealth in social work or on managers’ willingness to allow employees to telecommute. Both because of shocks to the system—like COVID-19—and the gradual progression of cultural, economic, and social change—like human rights movements—cross-sectional surveys can never truly give us a timeless causal explanation. In our example about undocumented students during COVID-19, the researchers can say something about the way things were at the moment they administered their survey, but it is difficult to know whether things stayed that way for long afterward, or to describe patterns stretching back in time.

Of course, just as society changes over time, so do people. Because cross-sectional surveys only measure people at one point in time, they have difficulty establishing cause-and-effect relationships for individuals: they cannot clearly establish whether the cause came before the effect. If your research question were about how school discipline (the independent variable) impacts substance use (the dependent variable), you would want to make sure that any changes in the dependent variable, substance use, came after changes in school discipline. That is, if your hypothesis is that school discipline causes increases in substance use, you must establish that school discipline came first and increases in substance use came afterwards. However, it is perhaps just as likely that increased substance use causes increases in school discipline. If you sent a cross-sectional survey to students asking them about their substance use and disciplinary record, you would get back something like “tried drugs or alcohol 6 times” and “has been suspended 5 times.” You could see whether similar patterns existed in other students, but you wouldn’t be able to tell which was the cause and which was the effect.
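This directionality problem can be seen in a quick simulation. The sketch below (hypothetical numbers, plain Python, no real student data) generates samples under two opposite causal stories—discipline driving substance use, and substance use driving discipline—and computes the cross-sectional correlation for each. Both stories produce essentially the same correlation, which is all a one-shot survey can observe.

```python
import random

random.seed(42)

def simulate(direction, n=1000):
    """Simulate hypothetical student data under one assumed causal direction.

    Returns two parallel lists: discipline scores and substance-use scores.
    """
    discipline, substance = [], []
    for _ in range(n):
        cause = random.gauss(3, 1)                  # the causal variable
        effect = 0.8 * cause + random.gauss(0, 1)   # the affected variable
        if direction == "discipline_causes_use":
            discipline.append(cause)
            substance.append(effect)
        else:  # "use_causes_discipline"
            substance.append(cause)
            discipline.append(effect)
    return discipline, substance

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

for direction in ("discipline_causes_use", "use_causes_discipline"):
    d, s = simulate(direction)
    print(direction, round(correlation(d, s), 2))
```

Running this prints nearly identical correlations for both causal stories. The cross-sectional snapshot alone cannot distinguish them; only measurement at multiple points in time (or a design that manipulates the independent variable) can establish which change came first.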

Because of these limitations, cross-sectional surveys are limited in how well they can establish whether a nomothetic causal relationship is true or not. Surveys are still a key part of establishing causality. But they need additional help and support to make causal arguments. That might come from combining data across surveys in meta-analyses and systematic reviews, integrating survey findings with theories that explain causal relationships among the variables in the study, and corroborating survey findings with research using other designs, theories, and paradigms. Scientists can establish causal explanations, in part, based on survey research. However, in keeping with the assumptions of postpositivism, the picture of reality that emerges from survey research is only our best approximation of what is objectively true about human beings in the social world. Science requires a multi-disciplinary conversation among scholars to continually improve our understanding.

 

Longitudinal surveys: Measuring change over time

One way to overcome this sometimes-problematic aspect of cross-sectional surveys is to administer a longitudinal survey. Longitudinal surveys enable a researcher to make observations over some extended period of time. There are several types of longitudinal surveys, including trend, panel, and cohort surveys. We’ll discuss all three types here, along with retrospective surveys, which fall somewhere in between cross-sectional and longitudinal surveys.

The first type of longitudinal survey is called a trend survey. The main focus of a trend survey is, perhaps not surprisingly, trends. Researchers conducting trend surveys are interested in how people in a specific group change over time. Each time researchers gather data, they survey different people from the identified group because they are interested in the trends of the whole group, rather than changes in specific individuals. Let’s look at an example.

The Monitoring the Future Study is a trend study that describes the substance use of high school students in the United States. It’s conducted annually by the National Institute on Drug Abuse (NIDA). Each year, the NIDA distributes surveys to students in high schools around the country to understand how substance use and abuse in that population changes over time. Fewer high school students reported using alcohol in the past month than at any point over the last 20 years—a finding that often surprises people because it cuts against the stereotype of adolescents engaging in ever-riskier behaviors. Nevertheless, recent data also reflected an increased use of e-cigarettes and the popularity of e-cigarettes with no nicotine over those with nicotine. By tracking these data points over time, we can better target substance abuse prevention programs towards the current issues facing the high school population.

Unlike trend surveys, panel surveys require that the same people participate in the survey each time it is administered. As you might imagine, panel studies can be difficult and costly. Imagine trying to administer a survey to the same 100 people every year, for 5 years in a row. Keeping track of where respondents live, when they move, and when they change phone numbers takes resources that researchers often don’t have. However, when researchers do have the resources to carry out a panel survey, the results can be quite powerful. The Youth Development Study (YDS), administered from the University of Minnesota, offers an excellent example of a panel study.

Since 1988, YDS researchers have administered an annual survey to the same 1,000 people. Study participants were in ninth grade when the study began, and they are now in their thirties. Several hundred papers, articles, and books have been written using data from the YDS. One of the major lessons learned from this panel study is that work has a largely positive impact on young people (Mortimer, 2003).[2] Contrary to popular beliefs about the impact of work on adolescents’ school performance and transition to adulthood, work increases confidence, enhances academic success, and prepares students for success in their future careers. Without this panel study, we may not be aware of the positive impact that working can have on young people.

Another type of longitudinal survey is a cohort survey. In a cohort survey, the participants have a defining characteristic that the researcher is interested in studying. The same people don’t necessarily participate from year to year, but all participants must meet whatever categorical criteria fulfill the researcher’s primary interest. Common cohorts that researchers study include people of particular generations or people born around the same time period, graduating classes, people who began work in a given industry at the same time, or perhaps people who have some specific historical experience in common. An example of this sort of research can be seen in Lindert and colleagues’ (2020)[3] work on healthy aging in men. Their article is a secondary analysis of longitudinal data collected as part of the Veterans Affairs Normative Aging Study conducted in 1985, 1988, and 1991.

 

Retrospective or Prospective?

In a retrospective survey, participants are asked to report events from the past to understand outcomes in the present. This can be a cost-effective way to explore potential correlations in the population. Many of the initial studies linking smoking to lung cancer were retrospective studies of people with lung cancer. Because many of these patients had histories of smoking, the studies provided early evidence that smoking might be linked to lung cancer. However, there were also critiques that retrospective studies could not prove causality.

One major limitation of retrospective studies is the possibility that people’s recollections of their past may be faulty. Imagine that you are participating in a survey that asks you to respond to questions about your levels of stress. There is a good chance that you would be able to provide a pretty accurate account of how you felt a month ago. Now let’s imagine that the researcher wants to know how much stress you experienced a year ago. How likely is it that you will remember accurately? Will your responses be as accurate as they might have been if the researcher had collected your data a year ago, and again a month ago, asking each time how you were feeling in the present?

The main limitation of retrospective surveys is that they are not as reliable as cross-sectional or longitudinal surveys. That said, retrospective surveys are a feasible way to collect longitudinal data when the researcher only has access to the population once, and for this reason, they may be worth the greater risk of bias and error in the measurement process.

In a prospective survey design, participants are enrolled at a baseline point in time and changes are observed over time. In prospective studies, participants can report on their experiences in the present and do not have to rely on potentially faulty memories. Prospective studies usually have fewer potential sources of bias than retrospective studies, but they are more time- and resource-intensive.

Because quantitative research seeks to build nomothetic causal explanations, it is important to determine the order in which things happen. When using survey design to investigate causal relationships between variables in a research question, longitudinal surveys are certainly preferable because they can track changes over time and therefore provide stronger evidence for cause-and-effect relationships. As we discussed, the time and cost required to administer a longitudinal survey can be prohibitive, and most survey research in the scholarly literature is cross-sectional because it is more feasible to collect data once. Well-designed cross-sectional surveys can provide important evidence for a causal relationship, even if it is imperfect. Once you decide how many times you will collect data from your participants, the next step is to figure out how to get your questionnaire in front of participants.

Strengths and weaknesses of longitudinal surveys

All three types of longitudinal surveys share the strength that they permit a researcher to make observations over time. Whether a major world event takes place or participants mature, researchers can effectively capture the subsequent potential changes in the phenomenon or behavior of interest. This is the key strength of longitudinal surveys—their ability to establish temporality needed for nomothetic causal explanations. Whether your project investigates changes in society, communities, or individuals, longitudinal designs improve on cross-sectional designs by providing data at multiple points in time that better establish causality.

Of course, all of that extra data comes at a high cost. If a panel survey takes place over ten years, the research team must keep track of every individual in the study for those ten years, ensuring they have current contact information for their sample the whole time. Consider this study which followed people convicted of driving under the influence of drugs or alcohol (Kleschinsky et al., 2009).[4] It took an average of 8.6 contacts for participants to complete follow-up surveys, and while this was a difficult-to-reach population, researchers engaging in longitudinal research must prepare for considerable time and expense in tracking participants. Keeping in touch with a participant for a prolonged period of time likely requires building participant motivation to stay in the study, maintaining contact at regular intervals, and providing monetary compensation. Panel studies are not the only costly longitudinal design. Trend studies need to recruit a new sample every time they collect a new wave of data at additional cost and time.

Cross-sectional surveys are simply the most convenient and feasible option. Nevertheless, social work researchers with more time to complete their studies use longitudinal surveys to understand causal relationships that they cannot manipulate themselves. A researcher could not ethically experiment on participants by assigning a jail sentence or relapse, but longitudinal surveys allow us to systematically investigate such sensitive phenomena ethically. Indeed, because longitudinal surveys observe people in everyday life, outside of the artificial environment of the laboratory (as in experiments), the generalizability of longitudinal survey results to real-world situations may make them superior to experiments, in some cases.

Table 12.1 summarizes these three types of longitudinal surveys.

Table 12.1 Types of longitudinal surveys
| Sample type | Description |
|---|---|
| Trend | Researcher examines changes in trends over time; the same people do not necessarily participate in the survey more than once. |
| Panel | Researcher surveys the exact same sample several times over a period of time. |
| Cohort | Researcher identifies a defining characteristic and then regularly surveys people who have that characteristic. |

 

 

Administration-Related Considerations

Self-administered questionnaires

If you are planning to conduct a survey for your research project, chances are you have thought about how you might deliver your survey to participants. If you don’t have a clear picture yet, look back at your work from Chapter 11 on the sampling approach for your project. How are you planning to recruit participants from your sampling frame? If you are considering contacting potential participants via phone or email, perhaps you want to collect your data using a phone or email survey attached to your recruitment materials. If you are planning to collect data from students, colleagues, or other people you most commonly interact with in person, maybe you want to consider a pen-and-paper survey to collect your data conveniently. As you review the different approaches to administering surveys below, consider how each one matches with your sampling approach and the contact information you have for study participants. Ensure that your sampling approach is feasible to conduct before building your survey design from it. For example, if you are planning to administer an online survey, make sure you have email addresses to which you can send your questionnaire or permission to post your survey to an online forum.

Surveys are a versatile research approach. Survey designs vary not only in terms of when they are administered but also in terms of how they are administered. One common way to collect data is in the form of self-administered questionnaires. Self-administered means that the research participant completes the questions independently, usually in writing. Paper questionnaires can be delivered to participants via mail or in person whenever you see your participants. It is common for academic researchers to administer surveys in large social science classes, so perhaps you have taken a survey that was given to you in person during an undergraduate class. Those professors were taking advantage of the same convenience sampling approach that student projects often use.

If everyone in your sampling frame is in one room, going into that room and giving them a quick paper survey to fill out is a feasible and convenient way to collect data. Availability sampling may involve asking your sampling frame to complete your survey when they naturally meet—colleagues at a staff meeting, students in the student lounge, professors in a faculty meeting—and self-administered questionnaires are one way to take advantage of this natural grouping of your target population. Pick a time and situation when people have the downtime needed to complete your questionnaire, and you will maximize the likelihood that people participate in your in-person survey.

Of course, this convenience may come at the cost of privacy and confidentiality. If your survey addresses sensitive topics, participants may alter their responses because they are in close proximity to other participants while they complete the survey. Whether participants feel self-conscious or talk about their answers with one another, anything that changes participants’ honest responses introduces bias or error into your measurement of the variables in your research question.

Researchers who aim to generalize their results should distribute self-administered surveys via the mail or electronically. Survey researchers who deliver their surveys via postal mail often provide some advance notice to respondents about the survey to get people thinking and preparing to complete it. They may also follow up with their sample a few weeks after their survey has been sent out. This can be done not only to remind those who have not yet completed the survey to please do so but also to thank those who have already returned the survey. Most survey researchers agree that this sort of follow-up is essential for improving mailed surveys’ return rates (Babbie, 2010).[6] Other helpful tools to increase response rate are to create an attractive and professional survey, offer monetary incentives, and provide a pre-addressed, stamped return envelope. These are also effective for other types of surveys.

It is increasingly common for research projects to use email and other modes of online delivery, like social media, to collect responses to a questionnaire. Researchers like online delivery for many reasons. It’s quicker than knocking on doors in a neighborhood for an in-person survey or waiting for mailed surveys to be returned. It’s cheap, too. There are free tools like Google Forms and SurveyMonkey (which includes a premium option). Those affiliated with a university may have access to commercial research software like REDCap or Qualtrics, which provide much more advanced tools for collecting survey data than free options. Online surveys can take advantage of computer-mediated data collection by playing a video before asking a question, tracking how long participants take to answer each question, and making sure participants don’t fill out the survey more than once (to name a few examples). Moreover, survey data collected via online forms can be exported for analysis in spreadsheet software like Google Sheets or Microsoft Excel or statistics software like SPSS, R, or JASP (R and JASP are no-cost and open-access). While the exported data still need to be checked before analysis, online distribution saves you the trouble of manually inputting every response a participant writes down on a paper survey.
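Checking an export before analysis can be as simple as a short script. The sketch below uses a hypothetical CSV export—the column names and values are illustrative, not from any particular survey platform—and flags duplicate submissions, missing answers, and out-of-range values on a 1–5 rating item before the data reach a statistics package.

```python
import csv
import io

# Hypothetical export from an online survey tool: a respondent id,
# one 1-5 rating question, and a submission date. Illustrative only.
raw_export = """respondent_id,q1_satisfaction,submitted_at
101,4,2023-04-01
102,5,2023-04-01
102,5,2023-04-01
103,,2023-04-02
104,9,2023-04-02
"""

rows = list(csv.DictReader(io.StringIO(raw_export)))

seen, clean, problems = set(), [], []
for row in rows:
    rid = row["respondent_id"]
    if rid in seen:                       # same person submitted twice
        problems.append((rid, "duplicate submission"))
        continue
    seen.add(rid)
    answer = row["q1_satisfaction"]
    if not answer:                        # skipped the question
        problems.append((rid, "missing response"))
        continue
    if not 1 <= int(answer) <= 5:         # impossible value for a 1-5 item
        problems.append((rid, "out-of-range value"))
        continue
    clean.append(row)

print(len(clean), "usable rows;", len(problems), "flagged for review")
```

The same checks can be run in any of the tools named above; the point is that an automated pass over the export replaces hours of eyeballing paper questionnaires.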

The process of collecting data online depends on your sampling frame and approach to recruitment. If your project plans to reach out to people via email to ask them to participate in your study, you should attach your survey to your recruitment email. You already have their attention, and you may not get it again (even if you remind them). Think pragmatically. You will need access to the email addresses of people in your sampling frame. You may be able to piece together a list of email addresses based on public information (e.g., faculty email addresses are on their university webpage, practitioner emails are in marketing materials). In other cases, you may know of a pre-existing list of email addresses to which your target population subscribes (e.g., all undergraduate students in a social work program, all therapists at an agency), and you will need to gain the permission of the list’s administrator to recruit using the email platform. Other projects will identify an online forum in which their target population congregates and recruit participants there. For example, your project might identify a Facebook group used by students in your social work program or practitioners in your local area to distribute your survey. Of course, you can post a survey to your personal social media account (or one you create for the survey), but depending on your question, you will need a detailed plan on how to reach participants with enough relevant knowledge about your topic to provide informed answers to your questionnaire.

Many of the suggestions that were provided earlier to improve the response rate of hard copy questionnaires also apply to online questionnaires, including the development of an attractive survey and sending reminder emails. One challenge not present in mail surveys is the spam filter or junk mail box. While people will at least glance at recruitment materials sent via mail, email programs may automatically filter out recruitment emails so participants never see them at all. While the financial incentives that can be provided online differ from those that can be given in person or by mail, online survey researchers can still offer completion incentives to their respondents. Over the years, I’ve taken numerous online surveys. Often, they did not come with any incentive other than the joy of knowing that I’d helped a fellow social scientist do their job. However, some surveys have their perks. One survey offered a coupon code to use for $30 off any order at a major online retailer and another allowed the opportunity to be entered into a lottery with other study participants to win a larger gift, such as a $50 gift card or a tablet computer.

One area in which online surveys are less suitable than mail or in-person surveys is when your target population includes individuals with limited, unreliable, or no access to the internet or individuals with limited computer skills. For these groups, an online survey is inaccessible. At the same time, online surveys offer the most feasible way to collect data anonymously. By posting recruitment materials to a Facebook group or list of practitioners at an agency, you can avoid collecting identifying information from people who participated in your study. For studies that address sensitive topics, online surveys also offer the opportunity to complete the survey privately (again, assuming participants have access to a phone or personal computer). If you have the person’s email address, physical address, or met them in-person, your participants are not anonymous, but if you need to collect data anonymously, online tools offer a feasible way to do so.

The best way to collect data using self-administered questionnaires depends on numerous factors. The strengths and weaknesses of in-person, mail, and electronic self-administered surveys are reviewed in Table 12.2.

Table 12.2 Strengths and weaknesses of delivery methods for self-administered questionnaires
| | In-person | Mail | Electronic |
|---|---|---|---|
| Cost | Depends: easy if your participants congregate in an accessible location, but costly to go door-to-door to collect surveys | Depends: too expensive for unfunded projects but a cost-effective option for funded projects | Strength: free and easy-to-use online survey tools |
| Time | Depends: easy if your participants congregate in an accessible location, but time-consuming to go door-to-door to collect surveys | Weakness: it can take a while for mail to travel | Strength: delivery is instantaneous |
| Response rate | Strength: it can be harder to ignore someone in person | Weakness: it is easy to ignore junk mail and solicitations | Weakness: it is easy to ignore junk email; spam filters may block you |
| Privacy | Weakness: very difficult to provide anonymity, and people may have to respond in a public place rather than privately in a safe place | Depends: cannot provide true anonymity since other household members may see participants’ mail, but people can likely respond privately in a safe place | Strength: can collect data anonymously, and people can respond privately in a safe place |
| Reaching difficult populations | Strength: by going where your participants already gather, you increase your likelihood of getting responses | Depends: reaches those without internet but misses those who change addresses often (e.g., college students) | Depends: misses those who change phones or emails often or don’t use the internet, but reaches online communities |
| Interactivity | Weakness: paper questionnaires are not interactive | Weakness: paper questionnaires are not interactive | Strength: electronic questionnaires can include multimedia elements, interactive questions, and response options |
| Data input | Weakness: researcher inputs data manually | Weakness: researcher inputs data manually | Strength: survey software inputs data automatically |

Ultimately, you must make the best decision based on its congruence with your sampling approach and what you can feasibly do. Decisions about survey design should be done with a deep appreciation for your study’s target population and how your design choices may impact their responses to your survey.

Group Administration

Group administration combines elements of in-person and self-administered surveys. The researcher gathers a group of respondents in one place—a class, a staff meeting, a community event—distributes the questionnaire, and collects completed copies on the spot. This approach usually yields a high response rate and allows the researcher to answer clarifying questions, but, as noted above, completing a questionnaire in close proximity to other respondents can compromise privacy on sensitive topics.

Interviewer-administered questionnaires

There are some cases in which it is not feasible to provide a written questionnaire to participants, either on paper or digitally. In this case, the questionnaire can be administered verbally by the researcher to respondents. Rather than the participant reading questions independently on paper or digital screen, the researcher reads questions and answer choices aloud to participants and records their responses for analysis. Another word for this kind of questionnaire is an interview schedule. It’s called a schedule because each question and answer is posed in the exact same way each time.

Consistency is key in quantitative interviews. By presenting each question and answer option in exactly the same manner to each interviewee, the researcher minimizes the potential for the interviewer effect, which encompasses any possible changes in interviewee responses based on how or when the researcher presents question-and-answer options. Additionally, in-person interviews may be video recorded, and because survey questions are closed-ended, the researcher can typically take notes without distracting the interviewee; these records are helpful for identifying how participants respond to the survey or which questions might be confusing.

Quantitative interviews can take place over the phone or in-person. Phone surveys are often conducted by political polling firms to understand how the electorate feels about certain candidates or policies. In both cases, researchers verbally pose questions to participants. For many years, live-caller polls (a live human being calling participants in a phone survey) were the gold-standard in political polling. Indeed, phone surveys were excellent for drawing representative samples prior to mobile phones. Unlike landlines, cell phone numbers are portable across carriers, associated with individuals as opposed to households, and do not change their first three numbers when people move to a new geographical area. For this reason, many political pollsters have moved away from random-digit phone dialing and toward a mix of data collection strategies like texting-based surveys or online panels to recruit a representative sample and generalizable results for the target population (Silver, 2021).[5]

It is easy and even socially acceptable to abruptly hang up on an unwanted caller asking you to participate in a survey, and given the high incidence of spam calls, many people do not pick up the phone for numbers they do not know. We will discuss response rates in greater detail at the end of the chapter. One of the benefits of phone surveys is that a person can complete them in their home or a safe place. At the same time, a distracted participant who is cooking dinner, tending to children, or driving may not provide accurate answers to your questions. Phone surveys make it difficult to control the environment in which a person answers your survey. When administering a phone survey, the researcher can record responses on a paper questionnaire or directly into a computer program. For large projects in which many interviews must be conducted by research staff, computer-assisted telephone interviewing (CATI) ensures that each question and answer option are presented the same way and input into the computer for analysis.

Interview schedules must be administered in such a way that the researcher asks the same question the same way each time. While questions on self-administered questionnaires may create an impression based on the way they are presented, having a researcher pose the questions verbally introduces additional variables that might influence a respondent. Controlling one’s wording, tone of voice, and pacing can be difficult over the phone, but it is even more challenging in person because the researcher must also control the non-verbal expressions and behaviors that may bias survey respondents. Even a slight shift in emphasis or wording may bias the respondent to answer differently. As we’ve mentioned earlier, consistency is key with quantitative data collection—and human beings are not necessarily known for their consistency. But what happens if a participant asks a question of the researcher? Unlike self-administered questionnaires, quantitative interviews allow the participant to speak directly with the researcher if they need more information about a question. While this can help participants respond accurately, it can also introduce inconsistencies in how the survey is administered to each participant. Ideally, the researcher should draft specifications—sample responses the researcher can provide when participants are confused by certain survey items. The strengths and weaknesses of phone and in-person quantitative interviews are summarized in Table 12.3 below.

Table 12.3 Strengths and weaknesses of delivery methods for quantitative interviews

Cost
  • In-person (depends): inexpensive if your participants congregate in an accessible location, but costly if you must go door-to-door to collect surveys
  • Phone (strength): phone calls are free or low-cost

Time
  • In-person (weakness): quantitative interviews take a long time because each question must be read aloud to each participant
  • Phone (weakness): quantitative interviews take a long time because each question must be read aloud to each participant

Response rate
  • In-person (strength): it is harder to ignore someone in person
  • Phone (weakness): it is easy to ignore or hang up on unwanted or unexpected calls

Privacy
  • In-person (weakness): it is very difficult to provide anonymity, and people may have to respond in a public place rather than privately in a safe space
  • Phone (depends): it is difficult for the researcher to control the context in which the participant responds, which might be private or public, safe or unsafe

Reaching difficult populations
  • In-person (strength): by going where your participants already gather, you increase your likelihood of getting responses
  • Phone (weakness): it is easy to ignore unwanted or unexpected calls, or to hang up midway through the interview

Interactivity
  • In-person (weakness): interview schedules are kept simple because questions are read aloud
  • Phone (weakness): interview schedules are kept simple because questions are read aloud

Data input
  • In-person (weakness): the researcher inputs data manually
  • Phone (weakness): the researcher inputs data manually


Key Takeaways

  • Time is a factor in determining what type of survey a researcher administers; cross-sectional surveys are administered at one point in time, and longitudinal surveys are administered at multiple points in time.
  • Retrospective surveys offer some of the benefits of longitudinal research while only collecting data once but may be less reliable.
  • Self-administered questionnaires may be delivered in-person, online, or via mail.
  • Interview schedules are used with in-person or phone surveys (a.k.a. quantitative interviews).
  • Each way to administer surveys comes with benefits and drawbacks.

Exercises

TRACK 1 (IF YOU ARE CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

In this section, we assume that you are using a cross-sectional survey design. But how will you deliver your survey? Recall the sampling approach you developed in Chapter 10, and consider the following questions when evaluating delivery methods for surveys.

  • Can you attach your survey to your recruitment emails, calls, or other contacts with potential participants?
  • What contact information (e.g., phone number, email address) do you need to deliver your survey?
  • Do you need to maintain participant anonymity?
  • Is there anything unique about your target population or sampling frame that may impact survey research?

Imagine you are a participant in your survey.

  • Beginning with the first contact for recruitment into your study and ending with a completed survey, describe each step of the data collection process from the perspective of a person responding to your survey. You should be able to provide a pretty clear timeline of how your survey will proceed at this point, even if some of the details eventually change.

TRACK 2 (IF YOU AREN’T CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

You are interested in understanding more about the needs of unhoused individuals in rural communities, including how these needs vary based on demographic characteristics and personal identities.

Assume that you are using a cross-sectional survey design to answer your research question. How will you deliver your survey? Consider the following questions when evaluating delivery methods for surveys.

  • Can you attach your survey to your recruitment emails, calls, or other contacts with potential participants?
  • What contact information (e.g., phone number, email address) do you need to deliver your survey?
  • Do you need to maintain participant anonymity?
  • Is there anything unique about your target population that may impact survey research?

  1. Enriquez, L. E., Rosales, W. E., Chavarria, K., Morales Hernandez, M., & Valadez, M. (2021). COVID on campus: Assessing the impact of the pandemic on undocumented college students. AERA Open. https://doi.org/10.1177/23328584211033576
  2. Mortimer, J. T. (2003). Working and growing up in America. Cambridge, MA: Harvard University Press.
  3. Lindert, J., Lee, L. O., Weisskopf, M. G., McKee, M., Sehner, S., & Spiro III, A. (2020). Threats to belonging—stressful life events and mental health symptoms in aging men—a longitudinal cohort study. Frontiers in Psychiatry, 11, 1148.
  4. Kleschinsky, J. H., Bosworth, L. B., Nelson, S. E., Walsh, E. K., & Shaffer, H. J. (2009). Persistence pays off: Follow-up methods for difficult-to-track longitudinal samples. Journal of Studies on Alcohol and Drugs, 70(5), 751-761.
  5. Silver, N. (2021, March 25). The death of polling is greatly exaggerated. FiveThirtyEight. Retrieved from: https://fivethirtyeight.com/features/the-death-of-polling-is-greatly-exaggerated/

License

Doctoral Research Methods in Social Work Copyright © by Mavs Open Press. All Rights Reserved.