Abstract
Is it worth the effort for community colleges to pursue NSF ATE grant funds for small enrollment programs? This manuscript describes our experiences with a program that served 11 students and four local employers in a high-need field. This collaborative writing effort is presented as a grant program postmortem review that shares our experiences and reviews pertinent literature so that others, particularly prospective community college grant recipients and evaluators of those grants, may learn from them. We discuss ways we were able to leverage a small program to our advantage and the size-specific issues we were unable to resolve.
Keywords: small enrollment programs; program funding; qualitative evaluation; confidentiality; small grants
© 2024 under the terms of the J ATE Open Access Publishing Agreement.
Introduction
The topic of this manuscript is small grant pilot-funded programs. We (the project coordinator/faculty member of a water treatment program and the project’s evaluator) describe our reflections from a four-year National Science Foundation (NSF)-funded Advanced Technological Education (ATE) grant. While earlier literature defines small programs as those that involve fewer than 100 students and are taught by five or fewer instructors [1], ours was considerably smaller, serving a total of 11 students (5 in Cohort 1 and 6 in Cohort 2). It was operated entirely by one faculty member who was also the project’s coordinator. Despite its remarkably small enrollment and, thus, small budget, by the end of the grant’s three-year performance period, the Water Quality Technology (WQT) program boasted job placement for 10 of its enrollees.
Furthermore, those students exceeded the state average certification examination pass rate. The grant-funded program enhanced the college’s capacity to offer a degree in WQT, resulted in a complete curriculum, and established relationships with regional employers. Its external evaluation provided formative feedback and summative results that met NSF expectations and were consistent with evaluation standards while staying within budget. The program met its goals and was, by that definition, successful. But the experience was not free from challenges.

WQT Context
In 2018, in response to local water industry concerns about the lack of skilled, certified operators in the region, Pellissippi State Community College (PSCC) applied for and was awarded an ATE grant to launch its new drinking water and wastewater degree program. While many small programs are established as pilots intended to be scaled up if successful, WQT was designed to be small to meet local demand for qualified technicians. There is demand for highly skilled workers throughout the US, with many plants competing for certified workers [2]. However, the pass rate for state certification is often low; in Tennessee, for example, pass rates for the state-level certification exam are around 25% [3]. In addition to currently available positions, the water industry will likely face considerable demand over the next decade, with up to 40% of current workers projected to retire [2]. Using statewide projections, approximately 25 new positions may open in PSCC’s area in the next five years [4]. Thus, although there is urgent demand for certified wastewater operators, projections suggest there will never be a need for large numbers of graduates in a given region, including ours.
The WQT program awards students an Associate of Applied Science degree upon completion of 60 credit hours. The two-year degree educates students through a combination of STEM courses, professional development, general education courses, and nine WQT-specific courses paced over four semesters. To actualize WQT, one full-time faculty member developed, launched, and coordinated the program. She developed and continually refined course curricula, mentored and advised students, established and maintained relationships with local employers, taught program-specific courses, and coordinated student internships and site visits at treatment facilities; because of this breadth of roles, she is referred to as the faculty/coordinator throughout this paper. She also implemented and refined WQT with ongoing input from the evaluator, whose evaluation was conducted concurrently with WQT delivery. In line with the size of the program, the evaluation was also small. Over the performance period, the evaluator collected data from the 11 students and the four employers who also served as internship supervisors. Reports were delivered throughout the four years to serve both formative and outcomes purposes.
Method
In what began as an informal review, it occurred to us that the small program offered unique advantages and also came with what seemed to be several insurmountable challenges. We used a program retrospective review, a collaborative postmortem process [5] intended to identify strengths and weaknesses in the program’s design, implementation, and outcomes. Our intention was to explore the macro implications of the small enrollment program. To do this, we reviewed program notes and evaluation reports and then met to develop a series of themes that seemed to stem from the program’s size. We then organized the themes into five domains (see Matrix 1). Two questions for reflection guided our retrospective review:
- In what ways were we able to use the program’s small size to our advantage?
- What were the unresolved issues related to the program’s size?
The set of themes we present in this paper is not exhaustive but reflects our experiences. While advantages and seemingly insurmountable challenges are tied to each institution of higher education’s (IHE) unique context, we believe many or most of these themes will be relevant to other small enrollment grant-funded programs regardless of region or field of study.
Literature on Small Enrollment Programs
While the literature on small enrollment programs is scant [6], we identified some potentially noteworthy trends. Designers and directors of small IHE academic programs will likely agree that there are at least two inherent difficulties associated with programs designed for small enrollment. First, it is challenging to compare assessment results because data generated from small enrollment programs are not generalizable [6]. Many IHEs create alternative assessments for small enrollment programs, including portfolios, evidence-based frameworks, and qualitative indicators [6]. Second, it is difficult to attract and retain staff who are willing to commit to the myriad responsibilities, excessive workloads, and unspoken obligations associated with running a “one-person” program [7], [8], [9].
From an evaluation standpoint, little published guidance may be applied to small enrollment programs. Bamberger et al. describe the Shoestring Evaluation approach as one that attempts to be as “methodologically sound as possible when operating with budget and time constraints” [10]. According to the authors, the approach mitigates problems associated with small-budget evaluations, including constraints on random sampling, quality control, and control for evaluator bias. This article and subsequent published works (e.g., Ravallion) make clear that mixed methods approaches are paramount for small-budget evaluations [11]. Moreover, because small evaluations, which tend to involve fewer decision-makers, are less political in nature and support closer working relationships between the evaluator and program staff, they also tend to favor continuous improvement over summative results [12], [13].
Results
WQT’s small size offered advantages, or leverage points, that we might not have as easily enjoyed in a larger program. While some of these benefits would likely have been achieved in a larger program, we believe, based on previous experiences, that the quality of these benefits was enhanced by WQT’s size. At the same time, WQT’s size also presented us with challenges that we were unable to mitigate.
Advantage 1: WQT Made Extensive Use of Individualized Student Supports
WQT used small cohorts to take advantage of the learning community structure, which tends to encourage greater engagement and higher course pass rates [14], [15]. It also emphasized individualized student attention and support, which have been shown to increase student retention and academic success [16], [17], [18].
WQT’s size made integrating the flipped classroom and other active learning strategies easy. The faculty/coordinator commonly had students view lectures remotely and was, thus, able to focus on hands-on activities in the classroom and during site visits. This included demonstrating to students how to test water samples and then coaching them as they performed the tests. In addition, the faculty/coordinator created and used targeted strategies outside of regular class requirements to help students understand the relationship between theory and water treatment practice. For example, students utilized a computer-based modeling program with unique problem-solving scenarios, such as flooding, to help them connect classroom theory to real-life situations.
We leveraged WQT’s small size to support an informal learning environment. The faculty/coordinator facilitated and encouraged small group study and project-based assignments and supported test preparation. Students enrolled in WQT benefitted from the small cohort structure because they could develop peer relationships and form study groups with peers taking several of the same courses across disciplines simultaneously. These peer relationships and study groups likely also helped reduce student attrition. For example, one student, an adult who returned to college after another career, was unsure about his ability to complete WQT. However, he later reported that, because of the informal and peer-supported small learning environment, he became more confident and excelled in his courses.
WQT’s size allowed for formal and informal individualized attention for each student. The faculty/coordinator met regularly with each student throughout each semester to discuss course progress and areas in which the student was struggling. She was familiar with each student’s academic progress, study habits, strengths, support needs, time management and soft skills, and personal issues. Some students encountered difficult personal circumstances while enrolled in WQT, and others had not yet established professional or life skills at the time of their enrollment. The faculty/coordinator assumed the responsibility of working with each student to mitigate the effect of these difficulties on their academic progress.
WQT’s small size increased students’ opportunities for hands-on site visits to treatment facilities. We believe the small size of WQT ensured that every student was able to visit multiple plants, increasing their exposure to treatment techniques and hands-on procedures (e.g., testing water samples). The faculty/coordinator served as a liaison between facilities and students, scheduling all site visits and internship placements each year. This exposure to various plants allowed students to better understand variations in how plants operate.
Because of the small program size, the faculty/coordinator was able to engage all students in professional trade event opportunities. Beginning in the first year of WQT’s implementation, the faculty/coordinator brought all students to conferences and technical presentations. In addition to attending lectures and networking events, the students were recognized in a Young Professionals Luncheon. As a result, several students secured jobs from connections made during that conference.
Advantage 2: WQT’s Evaluation Emphasized Use for Continuous Improvement
The evaluation was fully integrated into WQT to provide ongoing feedback. A core assumption of the evaluation approach we selected was that, in addition to generating compelling evidence about the extent to which WQT met its goals, it would provide ongoing access to results designed to assist decision-making [19]. We avoided a checkbox evaluation and one that perfunctorily utilized instrumentation designed for larger, more complicated studies. We used the small size to our advantage by combining a rapid and responsive feedback approach with a thematic framework [20] that followed each individual’s progress through the program and into employment. The evaluation used mixed methods including, for example, questionnaires, individual interviews, and student assessment data (e.g., WQT students’ certification pass rate compared with the state average and the number of attempts needed to pass).
The small program design permitted the evaluator to maintain connections across data sources. We knew that a qualitative design would be ideal given our ongoing communication with each student, the faculty/coordinator, and the employers, but that the limited budget would not allow for an intensive design such as ethnography [20] or multiple case study analysis [21]. We simplified the design so we could report, by cohort, each individual’s experiences using student and internship host questionnaires, regular conversational interviews with the faculty/coordinator, biannual interviews with students and employers, and extant data (e.g., certification test results). Unlike more traditional evaluations that may showcase one type of data or analytic method as the primary indicator of program effectiveness, the WQT evaluation relied on various methods and data sources used systematically, consistently, and collectively to emphasize triangulated findings.
Although qualitative inquiry tends to be labor intensive, its use in this small evaluation supported a rapid data collection and reporting cycle. In response to program needs and limitations articulated during the planning year, the evaluator used a reporting approach influenced by rapid evaluation and assessment methods, or REAM [22]. The idea was to maintain a balance among the speed, usefulness, and credibility of the evaluation results. Through Rapid Response Reports, we were able to summarize the inquiry’s purpose, present responses and major themes, and offer recommendations and next steps for the evaluation within one week of each data collection event. This ensured that the WQT evaluation users received actionable feedback in a timely manner. It also created a dialogic process between the evaluator and faculty/coordinator, leading to new or alternative interpretations of results and questions for follow-up inquiry.
The small program size supported full integration of the evaluation and promoted evaluation use. Because programs that involve fewer stakeholders and have a local orientation are inherently less political, it tends to be easier to make full use of their evaluation results [12]. Our discussions between the evaluator and faculty/coordinator concentrated on the results’ implications for program enhancement and program delivery. For example, where there was variation in student responses about the usefulness of hands-on laboratory experience, we conducted follow-up inquiries to better understand how that element of WQT benefitted some students and not others. As another example, qualitative results provided compelling evidence that incoming plant job applicants were underprepared for leadership roles. We used feedback from recently hired employees and plant leaders to secure the creation of a leadership course for WQT students. In fact, because the evaluation study included few outcome indicators throughout WQT’s performance period, we gave priority to formative feedback over outcome findings.
Challenge 1: Key Players Were Overburdened and Stretched
As noted previously, WQT’s budget covered a small program evaluation and personnel costs for one faculty member who was also the coordinator. For the faculty/coordinator in particular, WQT’s reliance on one person was taxing. We describe two themes associated with stretched resources.
Limited resources and support stretched the single faculty member who also served as WQT’s program coordinator. While larger programs would distribute teaching and lesson development across several faculty members, WQT required the program coordinator to design, create, and teach all nine WQT courses. This amounted to four to five unique courses every semester, each with its own preparation and, because of the niche nature of the WQT subject matter, its own search for resources, since traditional textbooks and other formal teaching materials do not exist. Moreover, the faculty/coordinator scheduled all courses, all of which were face-to-face. Finally, although most academic appointments do not include extensive course advising, professional and academic career support, or job placement, as WQT’s single point of contact, she was overwhelmed by student requests. These roles and responsibilities collectively amounted to regular schedules of 12- to 14-hour days.
Requests for student visits overburdened local treatment plants. While this challenge did not result directly from the grant-funded program’s size, the limited number of local plants was itself what largely dictated WQT’s size. During WQT delivery, some plant directors noted that they were overburdened by requests for student site visits or to serve as hosts for student field experiences and asked that the number of visits be reduced.
Challenge 2: The Evaluation Confronted Reporting Challenges
Although measures were taken to balance credibility and privacy, public reports of the evaluation’s findings were not robust, partly because of threats to confidentiality. As described above, WQT’s evaluation combined rapid evaluation and mixed methods to gauge each student’s progression through the program, engagement in field experiences, certification examination scores, and employment in the field. However, because of the small enrollment, participants were identifiable. Confidentiality is an ethical principle in evaluation, especially when reporting sensitive data, which may include employment and assessment activities. Ideally, through deidentification and the aggregation of larger data sets, individuals’ opinions about the program (its usefulness, its value, and ways it might be improved) may be offered to evaluation audiences with little prospect of reprisal or discomfort. With only a few students matriculated in some WQT sections, only four participating employers, and all of them familiar with one another, individual participants could potentially have been identified by each other and by the WQT faculty/coordinator through reports of evaluation findings.
The team decided that, because of confidentiality concerns, outcomes results would not be shared publicly. Because only a few students and employers participated in the evaluation, we quickly realized that participants were identifiable through evaluation reports. Descriptions of water treatment facilities, facility personnel, and internship focus could easily link any of the facilities, and thus employers, to specific students. While the evaluator ensured that participants understood that they could be identified by WQT grant leadership, participants were also assured that their confidentiality would be maintained beyond reports to the college. Thus, we were limited in the outcomes (i.e., those that presented a complete case from program entry to employment) we could present to the public and the sponsoring agency. Moreover, our decision to frame this manuscript around WQT as an example of a small program, instead of providing rich examples from the evaluation, stems from this decision.
Limited options were available for sharing formative feedback without revealing participants’ identities. The small number of participants, as few as three students in a cohort, also meant that evaluation reports designed to share formative feedback risked exposing participants’ identities. For example, during an early implementation feedback session, after the evaluator had completed interviews with four students, results suggested that three students felt completely on track and fully supported. However, the fourth student’s comments indicated they did not feel they were receiving sufficient attention. The faculty/coordinator immediately identified the student based on previous conversations and their relationship. As mentioned above, in a larger study, an evaluator would use deidentification and aggregation techniques to protect identities.
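To make this constraint concrete, consider a common small-cell suppression rule, under which any reported count below a minimum cell size is withheld. The short Python sketch below is a hypothetical illustration, not part of the WQT evaluation; the threshold of five and the function name are our illustrative assumptions. It shows that with only four respondents, every disaggregated finding would be suppressed, leaving nothing reportable.

```python
# Hypothetical sketch of small-cell suppression (not the WQT evaluation's
# actual procedure): counts below a minimum cell size are withheld.
from collections import Counter

MIN_CELL_SIZE = 5  # illustrative threshold common in public reporting

def suppress_small_cells(responses, min_cell=MIN_CELL_SIZE):
    """Tally responses by category, replacing counts below min_cell with None."""
    counts = Counter(responses)
    return {category: (count if count >= min_cell else None)
            for category, count in counts.items()}

# Hypothetical data mirroring the example above: three of four interviewees
# felt on track; one reported insufficient attention.
cohort_responses = ["on track", "on track", "on track", "insufficient support"]
print(suppress_small_cells(cohort_responses))
# {'on track': None, 'insufficient support': None}
# With n = 4, every cell falls below the threshold, so no disaggregated
# result can be shared without risking identification.
```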
Challenge 3: The Small Size Contributed to Institutional Risk and Threats to Sustainability
In reflecting on the institutional inputs and investments, we believe the organizations involved undertook risks. We describe two themes associated with threats to WQT’s sustainability.
WQT’s continued implementation and sustainability depended on one faculty/coordinator’s commitment. As illustrated throughout this manuscript, one key staff person assumed all roles and responsibilities for WQT. We believe that had she resigned or become unable to work for an extended period, finding a suitable replacement would have been improbable. With graduates in fields like WQT earning higher salaries and working more comfortable shifts than faculty, attracting a pool of qualified and committed candidates from industry or academia would have been unlikely.
Likewise, WQT’s continuation depended on the ongoing participation of just a few local treatment plants, which was sometimes tenuous. The requests of the WQT program continually taxed the few local utilities. Even with careful planning and coordination with host facilities, because they were overstretched by site visit requests, some facilities declared that they would provide student tours only once a year or less. Had even one treatment plant withdrawn from participation, WQT’s capacity to offer site visits and internships, and thus the program’s continuation, would have been in jeopardy.
Matrix 1. Summary of Main Themes
- Advantage 1: WQT made extensive use of individualized student supports.
- Advantage 2: WQT’s evaluation emphasized use for continuous improvement.
- Challenge 1: Key players were overburdened and stretched.
- Challenge 2: The evaluation confronted reporting challenges.
- Challenge 3: The small size contributed to institutional risk and threats to sustainability.

Conclusion
While the grant funding has ended, the WQT program continues to serve regional water treatment facilities. Moreover, the program continues to experience the challenges and opportunities we describe in this manuscript. Through our journey of co-authoring this manuscript, we reflected on the inputs and actions that led to WQT’s small-scale successes. Our premise is that the WQT grant-funded program enabled students from various life experiences to obtain employment, gave employers access to credentialed prospective employees and a college that heard their needs, and produced curricula and frameworks to support the program. It was also a rewarding experience for the faculty/coordinator and the evaluator. Realizing these successes required tremendous commitment and support from the college, the dean, and employers, and more than a small amount of commitment and flexibility from the program’s implementers. For example, the faculty/coordinator was willing to engage fully in discussions about evaluation results and use them to improve the program. Moreover, she often posed follow-up questions that the evaluator used to collect and describe additional information. We believe these working relationships and supports were absolutely paramount to achieving WQT’s successes.
Nevertheless, as we illustrated, small enrollment programs are vulnerable in a strategic sense because their continuation and sustainability depend on the commitment of just a handful of key players. If even one of WQT’s key players had become sidelined, WQT’s trajectory would have been very different. One person’s prolonged absence or resignation should not have the potential to effectively kill a program. Moreover, running a small enrollment program may be complicated by college restrictions on the minimum number of matriculants required for a course to be offered, requiring the one person to stretch beyond what is otherwise reasonable to meet students’ needs and keep them on track to graduate on time. And, as we noted in the unresolved challenges, the evaluation of particularly small programs has the potential to reveal participant identities, which, in turn, may make participants vulnerable to retaliation. While we do not believe that was the case in WQT, it is, unfortunately, not an unfathomable issue. Finally, because of the small sample, the evaluator was unable to utilize standard techniques such as disaggregation or inferential statistics. Within traditional evaluation designs, this limitation has strong potential to reduce a study’s ability to make strong assertions about equity and outcomes.
We believe that all aspects of small enrollment programs benefit from emphasizing interpersonal relationships. This includes selecting an evaluation design that prioritizes depth over breadth. We recommend that small enrollment ATE grant recipients and evaluators consider suitable small study designs and models such as case studies, ethnographic evaluations, and participatory evaluations. The successful implementation of a small enrollment program and its subsequent integration into the IHE requires balancing ideals (organizational, human capital, pedagogical, and methodological) with the practical realities of budget, workforce demand (and need), and time limitations. We offer the following questions for reflection to those considering a small enrollment grant-funded program or already launching one.
- How likely is it that your IHE will be able to sustain the small enrollment program beyond the funding period while minimizing institutional risks associated with one-person program staffing, threats of competition, and overburdening local partnerships?
- How likely is it that your program’s external evaluation will be able to satisfactorily ensure the protection of program participants’ confidentiality while also generating and communicating formative feedback for evaluation users and outcomes results for broader circulation?
- Is your program’s external evaluation design able to credibly and compellingly report on program outputs, outcomes, and program equity?
- What assets and opportunities does your IHE have that will likely boost the chances of a small enrollment grant-funded program’s success?
Finally, we believe there must be a greater appreciation of the importance of small enrollment programs, especially for high-need technical fields that are perennially difficult to staff. The foundational themes we present in this manuscript may be used as a starting point for further inquiry. More specifically, the ATE community would benefit from a systematic research study of small enrollment program challenges, assets, and needs. Such a study could include a survey of purposefully selected small enrollment programs combined with follow-up qualitative data collection to learn about the breadth and depth of small enrollment programs.
Acknowledgments
This work was supported by the National Science Foundation (NSF) under award 1800789.
Disclosures
The authors declare no conflicts of interest.
References
[1] J. W. Way, “A Manual for Small Program Evaluation,” unpublished.
[2] American Water Works Association, “State of the Water Industry 2023,” 2023. [Online]. Available: https://www.awwa.org/Portals/0/Awwa/Professional%20Development/2020SOTWIreport.pdf
[3] Tennessee Department of Environment and Conservation, “Dataset on State Certification Pass Rates, 2019-2023,” 2023.
[4] Fleming Training Center, “Data on Demographics of Certified Water Treatment Plant Operators, 2023,” 2023.
[5] K. Sturges and C. Howley, “Building capacity in state education agencies: Using organizational theory to guide technical assistance,” Journal of Organizational Theory in Education, no. 3, pp. 1–19, Sep. 2018.
[6] E. Huber, “Investigating the praxis of evaluating small-scale learning and teaching projects in higher education,” Macquarie University, Oct. 2018. [Online]. Available: https://figshare.mq.edu.au/articles/thesis/Investigating_the_praxis_of_evaluating_small-scale_learning_and_teaching_projects_in_higher_education/19435472/1.
[7] M. Walia, N. Sharma, and G. Lata, “Challenges in Retaining Faculty in New and Upcoming Medical Colleges: A Faculty Member’s Perspective,” Journal of Datta Meghe Institute of Medical Sciences, vol. 18, no. 1, 2023.
[8] G. L. Peirce, S. P. Desselle, J. R. Draugalis, A. R. Spies, T. S. Davis, and M. Bolino, “Identifying Psychological Contract Breaches to Guide Improvements in Faculty Recruitment, Retention, and Development,” Am J Pharm Educ, vol. 76, no. 6, 2012.
[9] F. Nausheen, M. M. Agarwal, and J. J. Estrada, et al, “A Survey of Retaining Faculty at a New Medical School: Opportunities, Challenges and Solutions,” BMC Med Educ, vol. 18, no. 223, Sep 2018.
[10] M. Bamberger, J. Rugh, M. Church, and L. Fort, “Shoestring Evaluation: Designing Impact Evaluations under Budget, Time and Data Constraints,” American Journal of Evaluation, vol. 25, no. 1, pp. 5–37, 2004, doi: https://doi.org/10.1177/109821400402500102.
[11] M. Ravallion, “Can We Trust Shoestring Evaluations?,” The World Bank Economic Review, vol. 28, no. 3, pp. 413–431, 2014, doi: https://doi.org/10.1093/wber/lht016.
[12] M. Q. Patton, Essentials of Utilization Focused Evaluation. Thousand Oaks, CA, USA: Sage Publications, 2021.
[13] M. E. Baron, “Designing Internal Evaluation for a Small Organization with Limited Resources,” New Directions for Evaluation, vol. 2011, no. 132, Dec. 2011, doi: https://doi.org/10.1002/ev.398.
[14] E. P. Bettinger, A. Boatman, and B. T. Long, “Student Supports: Developmental Education and Other Academic Programs,” The Future of Children, vol. 23, no. 1, 2013, doi: https://doi.org/10.1353/foc.2013.0003.
[15] C. M. Zhao and G. D. Kuh, “Adding Value: Learning Communities and Student Engagement,” Research in Higher Education, vol. 45, no. 2, 2004.
[16] E. N. Waiwaiole, E. M. Bohlig, and K. J. Massey, “Student Success: Identifying High-Impact Practices,” New Directions for Community Colleges, vol. 2016, no. 175, pp. 45–55, 2016.
[17] S. R. Johnson and F. K. Stage, “Academic Engagement and Student Success: Do High-Impact Practices Mean Higher Graduation Rates?,” The Journal of Higher Education, vol. 89, no. 5, pp. 753–781, 2018, doi: https://doi.org/10.1080/00221546.2018.1441107.
[18] Q. Shi, R. Cresiski, S. Thanki, and L. Navarrete, “Unlocking Undergraduate Student Success: A Study of High-Impact Practices in a Comprehensive and Diverse College,” Journal of Postsecondary Student Success, vol. 2, no. 4, pp. 83–107, 2023.
[19] D. L. Stufflebeam and G. Zhang, The CIPP Evaluation Model: How to Evaluate for Improvement and Accountability. New York, New York, USA: The Guilford Press, 2017.
[20] B. Harry, K. Sturges, and J. K. Klingner, “Mapping the Process: An Exemplar of Process and Challenge in Grounded Theory Analysis,” Educational Researcher, vol. 34, no. 2, pp. 3–13, 2005.
[21] R. E. Stake, Multiple Case Study Analysis. New York, New York, USA: The Guilford Press, 2006.
[22] M. McNall and P. G. Foster-Fishman, “Methods of Rapid Evaluation, Assessment, and Appraisal,” American Journal of Evaluation, vol. 28, no. 2, pp. 151–168, 2007.