Abstract
Evaluation surveys are widely used by projects within the National Science Foundation’s Advanced Technological Education (ATE) program to understand the reach, effectiveness, and impact of project activities. As a result, project staff and evaluators often aim to maximize survey response rates, since higher rates can improve data quality, representativeness, and the usefulness of findings for learning and decision-making. This study examined whether incentives were associated with higher response rates in a web-based survey of individuals within and outside the ATE community. Using a stratified randomized experimental design, 2,943 individuals were assigned to one of three conditions: a guaranteed $5 incentive, entry into an iPad lottery, or no incentive. Analyses examined differences by incentive condition, ATE affiliation, frequency of EvaluATE use, and respondent role. Overall, participants who were offered an incentive were significantly more likely to complete the survey than those who were not (30% vs. 21%; p < .01). Incentives were associated with significantly higher response rates among non-ATE participants (27% vs. 17%; p < .01), as well as among both more-frequent users of EvaluATE resources (39% vs. 26%; p < .01) and less-frequent users (26% vs. 19%; p < .05). Differences among ATE participants overall and ATE evaluators were not statistically significant. Together, these findings provide practical, evidence-informed guidance for evaluators and project staff designing survey strategies to increase participation, broaden representation across groups, and strengthen the usefulness of evaluation data in NSF-funded and other STEM education contexts.
Keywords: evaluation, participant incentives, survey incentives, incentives, research on evaluation, STEM evaluation
© 2026 under the terms of the J ATE Open Access Publishing Agreement
Introduction
The use of surveys is common in project-level evaluations within the National Science Foundation’s Advanced Technological Education (ATE) grant program. In fact, a survey of ATE evaluators found that surveys were the most commonly used data collection method, with 92% of evaluators in 2022 (n=100) [1] and 94% in 2024 (n=67) [2] reporting their use. However, the rate at which surveys are answered plays a critical role in the quality and utility of evaluation data collected [3]. Incentives are one strategy evaluators use to increase response rates. This study investigates whether offering an incentive is associated with higher response rates to a biennial evaluation survey administered by EvaluATE, targeting individuals inside and outside the ATE community. Findings offer practical insights for evaluators and project teams conducting evaluations within the ATE program and similar grant-funded initiatives.
Background
Surveys are a foundational tool in evaluation, commonly used to assess participant experiences, measure changes in knowledge or behavior, and gather feedback from diverse interest holders [4]. Their flexibility, ease of administration, scalability, and cost-efficiency make them particularly valuable in multi-site or large-scale evaluations, such as those funded by the National Science Foundation (NSF). They are also commonly employed in educational or training settings to measure participant outcomes, satisfaction, engagement, and changes in knowledge or behavior [4].
However, evaluation scholars have long emphasized that surveys must be carefully designed and implemented to reduce bias and strengthen validity [3]. Achieving a representative response rate is one way to enhance data quality and ensure evaluation findings can adequately inform program decisions and demonstrate impact [5]. There is no universal benchmark for an adequate response rate for evaluation surveys; acceptable rates are highly context-dependent. For example, expectations for response rates may be higher in small programs and lower in large-scale efforts. A recent meta-analysis indicates that average online survey response rates are approximately 45% [15]. In light of this, evaluators continue to test strategies for improving both response rates and data quality in real-world settings.
One commonly used approach to improving response rates is offering incentives for participation. Research shows that incentives, whether monetary or material, can boost survey response, but their effectiveness varies across contexts and respondent groups [6]. For example, Coryn et al. [6] found that an online survey of American Evaluation Association members in which participants were randomly assigned to one of four conditions (a monetary incentive, a lottery entry, a charitable donation, or no incentive) achieved a higher response rate than in prior years (39.66% vs. 25.18%), with lottery and token incentives performing best. Other research indicates that, overall, monetary incentives tend to outperform non-monetary gifts, and that prepaid incentives typically yield stronger participation than either promised incentives or lotteries [7]. Recent studies likewise indicate that prepaid or guaranteed incentives yield higher participation than probabilistic rewards (e.g., lotteries), with diminishing returns as incentive amounts increase [8,9].
Some research, however, suggests that when respondents find the topic relevant to them or have some intrinsic motivation, external incentives may have less impact, and large incentives could even reduce intrinsic motivation [10,11]. More recent work similarly finds that substantial monetary incentives do not reliably increase participation when intrinsic motivation is already high [12,13]. Consequently, modest but certain incentives may be sufficient to improve response rates, particularly among professional audiences with baseline motivation to participate. For example, one study of professionals found that modest incentives, such as cash, gift cards, or charitable donations, significantly increased participation compared with no incentive, suggesting that smaller rewards may be sufficient when respondents have a preexisting motivation to engage [14].
Evaluators must weigh the financial and administrative costs of incentives against potential gains in response quality and representativeness. Despite the growing use of this tactic, few studies have examined the effectiveness of incentives within evaluation practice in STEM education or NSF-funded environments, leaving a gap that this study aimed to address.
The NSF ATE Context
The ATE program is funded by the National Science Foundation to promote the education of technicians for high-technology fields. ATE supports curriculum development; professional development of college faculty and secondary school teachers; career pathways; and other activities. The program also invites research proposals that advance the knowledge base related to technician education.
EvaluATE
EvaluATE is the evaluation resource center for the ATE program. It provides just-in-time resources, webinars, newsletters, blogs, and formal and informal educational opportunities about evaluation in an open-access format. Additionally, EvaluATE conducts an annual survey of NSF ATE grantees, as well as research on evaluation within the ATE community.
EvaluATE’s Biennial External Evaluation Survey
As part of its evaluation efforts, EvaluATE’s independent evaluators administer a web-based external evaluation survey to collect data aligned with key evaluation questions, several of which are based on the four-level Kirkpatrick Model for assessing training effectiveness: reaction, learning, behavior, and impact. The Biennial External Evaluation Survey examines audience engagement, satisfaction with resources, gains in evaluation knowledge and attitudes, changes in evaluation practices, and overall improvements in evaluation quality within the ATE program. Both the 2018 and 2020 surveys yielded a 10% response rate (n = 328 and 298, respectively), with the majority of respondents identifying as ATE-affiliated (66% in 2018; 61% in 2020).
Given the low response rates in previous years, we, as members of EvaluATE’s external and internal evaluation teams, sought to explore strategies for improving survey participation. As a project, we recognized that limited response rates constrained our ability to understand EvaluATE’s impact fully and to make informed, project-level decisions. This limitation was especially concerning because our community is large and diverse, encompassing evaluators, project staff, prospective grantees, researchers, grants managers, and other community college professionals—each bringing varying levels of evaluation expertise and distinct reasons for engaging with EvaluATE. Without broader representation, survey findings reflected only a fraction of this community, leaving the potential for important perspectives to be underrepresented.
Research Questions
We identified the following research questions aimed at describing the impact of incentives on survey response rates:
- Is offering a $5 incentive or an entry to win an iPad associated with higher survey response rates compared to offering no incentive?
- Are differences in survey response rates between incentive conditions associated with (i.e., moderated by) prior engagement status (ATE vs. non-ATE), frequency of use (more- vs. less-frequent users), professional role (evaluator vs. non-evaluator), and combinations of these factors (e.g., ATE status × frequency)?
Methods
The present study was conducted in conjunction with the administration of EvaluATE’s Biennial External Evaluation Survey via Qualtrics over a three-week period starting October 17, 2022. Since 2012, The Rucks Group, EvaluATE’s external evaluator, has administered the survey every two years to individuals who have attended or used EvaluATE’s services and resources. The survey includes questions to assess how well EvaluATE is reaching its audiences, how satisfied people are with its activities and resources, and how its work is building evaluation knowledge, shaping evaluation practices, and improving evaluation quality within the ATE program.
Design
This study used a stratified randomized experimental design in which individuals were randomly assigned to one of three incentive conditions to examine the effect of incentives on survey participation. The study protocol was reviewed and determined exempt by the Western Michigan University Institutional Review Board (WMU IRB).
We constructed a sampling frame representing the target population: all individuals in EvaluATE’s contact database who, at any point in the previous four years, had been associated with an active ATE grant or had attended an EvaluATE workshop or webinar (N = 2,943). Before assignment to incentive conditions, registration and attendance records were used to group individuals according to three characteristics: professional role (evaluator vs. non-evaluator), ATE involvement (ATE vs. non-ATE), and level of engagement with EvaluATE in the past year. Engagement scores were categorized into three levels (high, medium, and low). For analysis, these engagement categories were collapsed into a two-level frequency variable: more-frequent users (more than one event in the past 12 months) and less-frequent users (one or zero events). Individuals were then randomly assigned to one of the three incentive conditions using stratified randomization to ensure that each condition included a comparable distribution of roles, ATE affiliation, and engagement levels.
To examine the association between incentive condition and survey completion, the 2,943 individuals in the sampling frame were randomly assigned to one of three conditions: a $5 cash incentive (n = 1,012), an entry into a lottery for an iPad (n = 1,012), or no incentive (n = 1,011). Recipients were not informed of the alternative conditions to which others were assigned. Everyone in the sampling frame was sent a survey invitation containing the survey link, a brief description of the survey’s purpose, assurances of confidentiality, notification that no personally identifiable information would be shared with EvaluATE personnel, and information regarding their designated incentive for completing the survey (if applicable).
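For readers who want to reproduce this kind of design, the sketch below shows one way to implement stratified random assignment in Python with pandas. The synthetic frame, column names, and round-robin dealing scheme are our own illustrative assumptions, not EvaluATE’s actual assignment procedure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)  # fixed seed so the assignment is reproducible

# Hypothetical sampling frame: one row per contact, with the three
# stratification variables described above (all names here are illustrative).
n = 2943
frame = pd.DataFrame({
    "role": rng.choice(["evaluator", "non-evaluator"], size=n),
    "ate": rng.choice(["ATE", "non-ATE"], size=n),
    "engagement": rng.choice(["high", "medium", "low"], size=n),
})

CONDITIONS = ["$5 cash", "iPad lottery", "no incentive"]

# Randomize separately within each role x ATE x engagement stratum:
# shuffle each stratum, then deal its members across the three conditions
# round-robin so every condition receives a comparable mix of all strata.
frame["condition"] = ""
for _, members in frame.groupby(["role", "ate", "engagement"]).groups.items():
    for i, row in enumerate(rng.permutation(members.to_numpy())):
        frame.loc[row, "condition"] = CONDITIONS[i % len(CONDITIONS)]

print(frame["condition"].value_counts())  # roughly 981 per condition
```

Dealing round-robin within strata, rather than shuffling the whole frame once, is what keeps the distribution of roles, ATE affiliation, and engagement levels comparable across conditions.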
Of the 2,943 individuals who received an invitation, 1,083 (37%) opened the email. Among these, 317 had participated in an ATE project or proposal within the past four years, while 766 had not. Among participants with prior ATE involvement, 230 were principal or co-principal investigators (PIs/Co-PIs), 86 were evaluators, and one participant had missing role data.
Grouping Variables
Three participant characteristics were used to create subgroupings for analysis: prior involvement in the ATE program, frequency of EvaluATE use in the past year, and professional role. In addition to examining each variable separately, combinations of these characteristics (e.g., ATE involvement × frequency of use) were analyzed to explore potential moderating effects.
- ATE involvement – Participants were classified as ATE if they had been involved with an ATE project or proposal during the prior four years or classified as non-ATE if they had not.
- Frequency of EvaluATE use – Frequency was determined by the number of EvaluATE events attended in the past 12 months. More-frequent users attended more than one event, while less-frequent users attended one or no events.
- Professional role – For individuals who had been involved in an ATE project within the past four years, their record also indicated their primary role—either as an evaluator or as a non-evaluator (including PIs and co-PIs).
Rationale for Subgroup Analysis
These variables were selected because prior ATE involvement, recent engagement with EvaluATE, and professional role may influence baseline response rates and moderate the association between incentive condition and survey participation. For example, individuals with prior ATE involvement or high engagement with EvaluATE may have higher baseline responsiveness due to established connections, whereas those without such connections may be more influenced by incentives. Similarly, evaluators may be more likely to respond regardless of incentives due to professional norms and familiarity with surveys.
Statistical Analyses
We examined the association between incentive condition (i.e., $5 cash, iPad lottery, or no incentive) and survey response rates, calculating rates using only those who opened the email as the denominator since participants could only learn about the incentive after opening the invitation. Preliminary analyses indicated no statistically significant difference in response rates between the $5 cash and iPad lottery conditions; therefore, these conditions were combined into a single incentive group for subsequent analyses to increase power for comparisons with the no-incentive condition. Fisher’s exact tests were conducted for each primary grouping variable (i.e., ATE involvement, frequency of EvaluATE use, and professional role). This test was selected because some subgroups were small, and it provides accurate results for two-by-two comparisons even with limited sample sizes. Importantly, statistical significance in these subgroup analyses depends not only on the size of the percentage-point difference, but also on the number of observations contributing to each comparison. Accordingly, some subgroup contrasts showed relatively large percentage-point differences but did not reach p < .05 because the subgroup sample sizes, and therefore the cell counts used in the Fisher’s exact tests, were small. For subgroup combinations that resulted in larger contingency tables (more than two categories per variable), chi-square tests were used instead, because they allow comparisons across multiple categories that Fisher’s test is not well-suited to handle. However, because some of these combined subgroupings contained sparse cells, the chi-square results should be interpreted with caution, since low counts reduce their reliability.
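To make this decision rule concrete, here is a minimal sketch in Python using scipy; the function name and example counts are our own illustration, not the study’s analysis code.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def test_association(table):
    """Return (test name, p-value) for a contingency table of counts.

    Fisher's exact test is used for 2x2 tables because it stays accurate
    with small cell counts; larger tables fall back to a chi-square test
    of independence, with a warning when expected counts are sparse.
    """
    table = np.asarray(table)
    if table.shape == (2, 2):
        _, p = fisher_exact(table)
        return "Fisher's exact", p
    chi2, p, dof, expected = chi2_contingency(table)
    if expected.min() < 5:  # common rule of thumb for sparse cells
        print("warning: sparse cells; interpret chi-square with caution")
    return "chi-square", p

# Example 2x2 table: rows = incentive vs. no incentive,
# columns = responded vs. did not respond (counts are made up).
print(test_association([[30, 70], [20, 80]]))
```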
Results
Overall Response Rates
Across all respondents who opened the email invitation (n = 1,083), the overall survey response rate was 27%. Preliminary analyses showed no statistically significant difference in response rates between the $5 cash and iPad lottery conditions; accordingly, these conditions were combined for the subgroup analyses reported below. Participants in the combined incentive conditions ($5 cash or iPad lottery; n = 737) had a higher response rate (30%) than those in the no-incentive condition (n = 346; 21%), a statistically significant nine-percentage-point difference (p < .01, Fisher’s exact test; see Table 1). Note that the 2022 response rate is notably higher than in prior Biennial External Evaluation Surveys (around 10% in 2018 and 2020) because the 2022 calculation is based on those who opened the invitation, whereas earlier rates were calculated using all who received it.
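To make the arithmetic behind this comparison concrete, the sketch below reconstructs approximate responder counts from the reported group sizes and rounded percentages and re-runs a Fisher’s exact test; because the counts are reconstructed, the resulting p-value is illustrative rather than the study’s exact value.

```python
from scipy.stats import fisher_exact

n_incentive, rate_incentive = 737, 0.30  # combined $5 cash + iPad lottery
n_control, rate_control = 346, 0.21      # no-incentive condition

# Reconstruct responder counts from the rounded percentages reported above.
resp_inc = round(n_incentive * rate_incentive)  # ~221 responders
resp_ctl = round(n_control * rate_control)      # ~73 responders

table = [[resp_inc, n_incentive - resp_inc],
         [resp_ctl, n_control - resp_ctl]]

_, p = fisher_exact(table)
print(f"approximate p = {p:.4f}")  # well below .01, consistent with Table 1
```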
Response Rates by ATE Involvement
When examined by ATE involvement, participants with prior ATE experience (n = 317) responded at a rate of 37% in the incentive conditions compared to 32% in the no-incentive condition, a non-significant 5-point difference. Among non-ATE participants (n = 766), the response rate was 27% with incentives versus 17% without, a 10-point difference that was statistically significant (p < .01).
Response Rates by Frequency of EvaluATE Use
Analyses by frequency of EvaluATE use indicated that more-frequent users (n = 347) responded at a rate of 39% in the incentive conditions and 26% without incentives, a statistically significant 13-point difference (p < .01). Less-frequent users (n = 736) had a 26% response rate with incentives compared to 19% without, a smaller but statistically significant 7-point difference (p < .05).
Response Rates by ATE Role (Evaluators vs. Non-Evaluators)
Among ATE participants, evaluators (n = 86) had a 50% response rate in the incentive conditions compared to 31% without, a 19-point difference. Although this difference was sizable, it did not reach statistical significance by Fisher’s exact test, likely because the evaluator subgroup was relatively small and therefore provided limited statistical power. In contrast, ATE non-evaluators (n = 230) had identical response rates (32%) in both the incentive and no-incentive conditions.
Analysis of Moderating Effects
Although incentives were associated with higher response rates overall and in several specific subgroups, chi-square tests indicated that these differences did not vary significantly across subgroups, suggesting no statistically significant moderation effects.
ATE Involvement × Frequency of EvaluATE Use
When subgroup combinations were examined, ATE more-frequent users (n = 49) had a response rate of 68% with incentives compared to 53% without, a 14-point difference that was not statistically significant. ATE less-frequent users (n = 268) showed a smaller, non-significant difference (31% vs. 28%). Non-ATE more-frequent users (n = 298) exhibited a statistically significant 12-point difference (34% vs. 22%; p < .05), and non-ATE less-frequent users (n = 468) had a significant 9-point difference (22% vs. 14%; p < .05).
ATE Role × Frequency of EvaluATE Use
Further analysis of ATE involvement by role and frequency showed that several of these subgroup comparisons were based on especially small numbers of respondents and should therefore be interpreted descriptively. ATE evaluators who were more-frequent users (n = 26) had a 67% response rate with incentives and 50% without, a 17-point difference that was not statistically significant. ATE evaluators who were less-frequent users (n = 60) had a 43% response rate with incentives and 22% without, a 21-point difference that was not statistically significant. ATE non-evaluators who were more-frequent users (n = 22) had a 67% response rate with incentives and 57% without, a 10-point difference that was not statistically significant. Finally, ATE non-evaluators who were less-frequent users (n = 208) showed no difference in response rates between the incentive (28%) and no-incentive (29%) conditions. The lack of statistical significance in these smaller subgroups may reflect small sample sizes rather than a lack of association.
Table 1. Biennial External Evaluation Survey response rates with and without incentives, overall and by subgroup ᵃ

| Respondent grouping | No incentive | Incentive | Difference | Fisher’s exact test (p) | Chi-square |
|---|---|---|---|---|---|
| **All respondents (n=1,083)** | **21%** | **30%** | **9%** | **< .01** | |
| ATE (n=317) | 32% | 37% | 5% | ns | ns |
| **Non-ATE (n=766)** | **17%** | **27%** | **10%** | **< .01** | |
| **More-frequent users (n=347)** | **26%** | **39%** | **13%** | **< .01** | ns |
| **Less-frequent users (n=736)** | **19%** | **26%** | **7%** | **< .05** | |
| ATE – more-frequent users (n=49) | 53% | 68% | 14% | ns | ns |
| ATE – less-frequent users (n=268) | 28% | 31% | 4% | ns | |
| **Non-ATE – more-frequent users (n=298)** | **22%** | **34%** | **12%** | **< .05** | |
| **Non-ATE – less-frequent users (n=468)** | **14%** | **22%** | **9%** | **< .05** | |
| ATE – evaluators (n=86) | 31% | 50% | 19% | ns | ns |
| ATE – non-evaluators (n=230) | 32% | 32% | 0% | ns | |
| ATE – evaluators – more-frequent users (n=26) | 50% | 67% | 17% | ns | ns |
| ATE – evaluators – less-frequent users (n=60) | 22% | 43% | 21% | ns | |
| ATE – non-evaluators – more-frequent users (n=22) | 57% | 67% | 10% | ns | |
| ATE – non-evaluators – less-frequent users (n=208) | 29% | 28% | -1% | ns | |

ᵃ Percentages represent the proportion of individuals within each subgroup who completed the survey. “Incentive” combines the $5 cash and iPad lottery conditions; “No incentive” is the control condition. “Difference” is the percentage-point difference (Incentive − No incentive). Fisher’s exact test p-values reflect the association between incentive (any vs. none) and survey completion within each subgroup; “ns” = non-significant (p ≥ .05). Chi-square statistics test whether incentive effects differed significantly across subgroups. Because several subgroup analyses were based on small cell counts, some relatively large percentage-point differences did not reach p < .05 and should be interpreted cautiously. Bolded rows indicate statistically significant incentive effects for that subgroup based on Fisher’s exact test.
Summary of Key Results
- Incentives were associated with a 9-percentage-point higher overall response rate compared to no incentive (30% vs. 21%, p < .01).
- Non-ATE participants showed a statistically significant 10-point higher response rate with incentives compared to those without an incentive (27% vs. 17%, p < .01); ATE participants had a smaller, non-significant 5-point difference.
- More-frequent users had a statistically significant 13-point higher response rate with incentives (39% vs. 26%, p < .01); less-frequent users had a smaller but significant 7-point difference (26% vs. 19%, p < .05).
- Among ATE evaluators, incentives were associated with a 19-point higher response rate (50% vs. 31%), but the difference was not statistically significant, likely because this subgroup was small. ATE non-evaluators showed no difference between conditions.
- Statistically significant subgroup differences were found for non-ATE more-frequent users (+12 points, p < .05) and non-ATE less-frequent users (+9 points, p < .05).
- No statistically significant moderation effects were detected by chi-square tests, indicating that subgroup differences in the association between incentives and response rates were not statistically reliable.
Discussion
Main Findings
This study demonstrated that modest incentives were positively associated with survey participation, with an overall response rate nine percentage points higher in the incentive conditions than in the no-incentive condition (30% vs. 21%). These findings are consistent with prior research showing that incentives can increase survey responses, although the strength of this association varied across respondent subgroups.
Incentives appeared to have a particularly strong association with non-ATE participants, suggesting that incentives may be especially useful for individuals with weaker ties to the ATE community. At the same time, incentives were associated with significantly higher response rates among both more-frequent and less-frequent users of EvaluATE resources, indicating that the benefit of incentives was not limited to less-frequent users. By contrast, highly connected subgroups—particularly ATE non-evaluators—derived little or no benefit, indicating that when individuals are already highly engaged or motivated by professional norms, external incentives may not be necessary.
Taken together, these findings suggest that low-value but immediate rewards may be sufficient to promote response rates for low-stakes surveys among individuals with at least some connection to the sponsoring organization. Incentives are particularly valuable for broadening representation by increasing responses from individuals with weaker ties to ATE, such as non-ATE participants, but they should be combined with other strategies to ensure balanced participation across stakeholder groups.
Implications for ATE & Similar Grant-Funded Contexts
These observations carry specific implications for survey administration in ATE and similar grant-funded contexts. Small, immediate incentives enhanced participation in professional communities, suggesting that larger rewards may not be necessary to secure buy-in, perhaps due to the intrinsic motivation these respondents have to serve the field of which they are a part, aligning with findings by Seshadri and colleagues [14]. After this study, we also experimented with other incentive strategies, though not under the premise of a formal research study. For example, in the 2024 Biennial External Evaluation Survey, we refined our inclusion criteria to invite only individuals who had connected with EvaluATE in the past two years rather than the past four, similar to a strategy suggested by Wu, Zhao, and Fils-Aime [15]. This reduced the overall survey population but also lowered the nonresponse rate. In a distinct survey administered to evaluators connected to EvaluATE, we replaced a $25 virtual gift card incentive with free access to a self-paced online evaluation course. This approach aimed to tap into evaluators’ intrinsic motivation, particularly given their strong ties to EvaluATE and the ATE community, and it received positive feedback from respondents.
Weighing Financial and Administrative Costs Against the Benefits
Incentives should be considered as one part of a broader survey strategy. Their effectiveness depends not only on type and size but also on perceived value, immediacy, and cultural or professional norms. When paired with thoughtful outreach and survey design, modest, immediate rewards can help increase participation and ensure perspectives from underrepresented groups are captured. Yet, incentives are not a one-size-fits-all solution. Evaluators and project teams must carefully weigh the financial and administrative costs of offering incentives against the potential benefits of higher participation and broader representation.
Before deciding whether and how to use incentives, survey administrators and sponsors should weigh several practical questions. Drawing on our own experience, Table 2 presents examples of questions we recommend considering prior to offering an incentive for survey data collection.
Table 2. Questions for survey administrators to consider prior to offering a survey incentive

| Questions to consider regarding administration | Questions to consider regarding survey recipients |
|---|---|
Limitations
Several limitations should be noted regarding the generalizability of this study’s findings. First, because the study was conducted within the NSF ATE community (a relatively close-knit, evaluation-oriented community of practice), the ability to extend these findings to other populations is limited. Second, survey invitations did not emphasize the incentive in the email subject line, meaning responses reflect only those motivated to open the email; including incentive information might have increased the open rate but also risked triggering spam filters. Third, no data were collected on why participants chose to respond or not, leaving interpretations about motivation necessarily inferential.
Conclusion
This study provides evidence that modest incentives can meaningfully improve survey participation in NSF ATE and related contexts, particularly among individuals with weaker ties to the ATE community, such as non-ATE participants, and may serve as a practical strategy for strengthening representation in evaluation findings. Additional research in other settings could clarify the generalizability of these results and identify the most effective combinations of incentive type and messaging.
Acknowledgements. This work was supported by the National Science Foundation (NSF) under award 2332143.
Disclosures. The authors report no conflicts of interest.
References
[1] S. Marshall, L. Becho, and M. López, The State of Evaluation in the ATE Program: 2023, EvaluATE, May 2023. [Online]. Available: https://evalu-ate.org/wp-content/uploads/2023/05/The-State-of-Evaluation-in-the-ATE-Program-2023.pdf
[2] M. López, L. Becho, S. Marshall, and A. Hooks Singletary, The State of ATE Evaluation: 2025, EvaluATE, 2025. [Online]. Available: https://evalu-ate.org/miscellaneous/2025-state-ate-eval/
[3] B. Holtom, Y. Baruch, H. Aguinis, and G. A. Ballinger, “Survey response rates: Trends and a validity assessment framework,” Human Relations, vol. 75, no. 8, pp. 1560–1584, 2022.
[4] E. Taylor-Powell and C. Hermann, Collecting Evaluation Data: Surveys. Madison, WI, USA: University of Wisconsin–Extension, 2000.
[5] R. M. Groves, F. J. Fowler Jr., M. P. Couper, J. M. Lepkowski, E. Singer, and R. Tourangeau, Survey Methodology, 2nd ed. Hoboken, NJ, USA: John Wiley & Sons, 2009.
[6] C. L. Coryn et al., “Material incentives and other potential factors associated with response rates to internet surveys of American Evaluation Association members: Findings from a randomized experiment,” American Journal of Evaluation, vol. 41, no. 2, pp. 277–296, 2020.
[7] E. Deutskens, K. de Ruyter, M. Wetzels, and P. Oosterveld, “Response rate and response quality of internet-based surveys: An experimental study,” Marketing Letters, vol. 15, pp. 21–36, 2004.
[8] J. P. Décieux, S. Zinn, and A. Ette, “Effects of changing the incentive strategy on panel performance: Experimental evidence from a probability-based online panel of refugees,” Survey Research Methods, vol. 19, no. 2, 2025.
[9] C. Ochoa and M. Revilla, “Willingness to participate in in-the-moment surveys triggered by online behaviors,” Behavior Research Methods, vol. 55, 2023.
[10] M. Wenemark, U. Boman, and C. Lundholm, “Applying motivation theory to achieve increased response rates, respondent satisfaction and data quality,” Journal of Official Statistics, vol. 27, no. 2, pp. 279–295, 2011, doi: 10.5167/uzh-47886.
[11] M. Wenemark, A. Vernby, and A. Lindahl Norberg, “Can incentives undermine intrinsic motivation to participate in epidemiologic surveys?” Scandinavian Journal of Public Health, vol. 38, no. 7, pp. 756–759, 2010, doi: 10.1177/1403494810373131.
[12] E. R. Stevens, C. M. Cleland, and A. Shunk, “Evaluating strategies to recruit health researchers to participate in online survey research,” BMC Medical Research Methodology, vol. 24, 2024.
[13] P. Cabrera-Álvarez and P. Lynn, “Benefits of increasing the value of respondent incentives during the course of a longitudinal mixed-mode survey,” Quality & Quantity, 2025, doi: 10.1080/13645579.2024.2443630.
[14] S. Seshadri, S. Mukhopadhyay, and R. Sarathy, “Small-business executives’ online survey response intentions: The effects of incentives and survey length,” Small Business Institute Journal, vol. 18, no. 1, pp. 14–28, 2022, doi: 10.53703/001c.32575.
[15] M. J. Wu, K. Zhao, and F. Fils-Aime, “Response rates of online surveys in published research: A meta-analysis,” Computers in Human Behavior Reports, vol. 7, p. 100206, Aug. 2022, doi: 10.1016/J.CHBR.2022.100206.