Abstract
Research shows that formative assessments substantially strengthen learning and support summative assessment and evaluation practices, yet these practices are not widely applied in ATE's professional development (PD) efforts. This study focuses on involving participant teachers in assessment to increase student learning and enhance outcome evaluations. We surveyed all principal investigators of ATE projects in 2022 who applied assessments in their 2021 PD efforts (N=70). Findings show that a minority of PD efforts apply formative assessment practices to strengthen PD outcomes or meet ATE's evaluation specifications. Assessment practices were most prevalent for summative purposes at the close of the PD activity; a large majority assessed teachers' interest and learning in the PD and their intentions to use and teach what was learned on return to their classrooms. A third or fewer followed up to assess outcomes in teachers' schools. Similarly, thirty percent or fewer addressed matters of context at any stage of the PD efforts, and only a few, 11 percent, followed up to assess the context in the schools. Concomitantly, the findings show where and how attention to formative assessment in the PD learning process can increase teacher involvement in assessment practices, making PD instruction more effective and strengthening outcome evaluations in participant teachers' home classrooms.
Keywords: formative assessment, summative assessment, evaluation, formative evaluation, summative evaluation, professional development
© 2024 under the terms of the J ATE Open Access Publishing Agreement
Introduction
This study focuses on the nature and use of formative and summative assessment in professional development (PD) projects in the Advanced Technological Education (ATE) program. Through a special section of the annual ATE PI survey, we surveyed P.I.s whose grants included providing PD to learn when and how they conduct assessments and what purposes those assessments serve. The survey items were based on a framework we developed using logic modeling and the literature on evaluation and assessment. Our goal was to understand current purposes and practices, particularly the role of teacher participants in developing the PD, learning about assessment, and reporting data on their student-level outcomes. This research was a COVID adaptation of our project Formative Assessment for ATE 2: a workshop to develop assessment and evaluation practices. Our project team's long-term aim is to engage effectively with PD providers to strengthen PD outcomes through strong assessment and evaluation practices. Based on the literature and our findings, the integration of formative assessment and evaluation has the potential to sharply increase learning gains for participant teachers and their students.
Background
The United States faces stiff international competition in emerging twenty-first-century STEM markets and fields [1]. To be successful in this competitive environment requires the U.S. to develop a resilient and agile workforce [2]. Increasing the resilience and agility of workers requires attention to building learning structures, scaffolding, and continuing educational opportunities to help workers learn and take agency for their learning.
The ATE program aims to help meet those needs by developing workforce technicians, most often taught in community college and technician education settings. PD, also called professional learning, delivers instruction to teachers in those workforce settings. As such, it is integral to workforce development efforts, occurs in all workforce development education settings, and involves a significant proportion of education budgets. In ATE in 2015 (the last P.I. survey where this was asked), PD was 22 percent ($14M) of the $64M annual budget. In 2021, when the ATE budget was $75M, 39 percent of grantees reported doing PD (see Table S14).
These PD efforts are intended to increase student knowledge, skill, and interest in technology and improve the technology workforce. Figure 1 represents the strategy employed by NSF through ATE to engage PD as a tool for workforce development in a basic theory of change model—that is, an articulated process that makes visible the embedded ideas about how actions create impacts [3]. As Figure 1 shows, NSF funds the ATE program, which distributes those funds to grantees to attract students to STEM areas and engage them in quality education so they can improve the workforce. The number of grantees that engage in PD varies by year, but typically half deliver PD, and these efforts reach many community colleges and secondary education institutions [4].
The value added from PD comes first from the number of faculty/teachers directly reached in PD workshops. Second, and more significantly, its value derives from its multiplier effect: whenever those teachers pass the new knowledge or skill on to their students, the reach of the PD grows. Thus, a single PD instructor who provides learning to teacher participants could conceivably increase learning among hundreds if not thousands of students taught by those teachers.
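To make that scale concrete, here is a back-of-the-envelope sketch of the multiplier arithmetic. All of the input numbers are hypothetical assumptions chosen for illustration, not figures from the ATE surveys.

```python
# Back-of-the-envelope multiplier arithmetic; every input is hypothetical.
teachers_per_workshop = 25        # participant teachers reached directly
students_per_teacher_year = 100   # students each teacher instructs per year
years_of_use = 3                  # years a teacher keeps teaching the PD content

students_reached = teachers_per_workshop * students_per_teacher_year * years_of_use
print(f"One PD workshop could plausibly reach {students_reached:,} students")
# -> One PD workshop could plausibly reach 7,500 students
```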
To capture the impact of this important aspect of the ATE program, the ATE solicitation focuses on summative evaluation. Those expectations are specified in the ATE guidelines for proposals [5]:
a. Demonstrate use in the classrooms.
b. Demonstrate sustainable changes in practice of participating faculty and teachers.
c. Measure changes in students' perceptions of technical careers.
d. Measure changes in student learning outcomes.
e. Demonstrate an increase in qualified technicians for the industry.
Points a-e have been mapped onto Figure 1 to show where they occur in the theory of change related to the PD and ATE process and goals. All occur after the PD workshop experience, focusing on actions and behaviors in participants' classrooms and in the industries that will employ their students. Collecting data on points a-e requires some preparation and action before, during, at the end of, and after the PD. Providing robust data on learning (b and d) requires some assessment.
Data from previous studies of PD offered by ATE projects [6-9] indicates variation in preparation for, compliance with, and quality of data related to those guidelines. Summary findings from those four studies report:
- Before the PD, most participant teachers agreed, when recruited, to provide follow-up feedback on the impact of the PD on their instruction. Fewer, about 40%, agreed to give feedback on their students' learning [9].
- During the PD, about a third of participants received instruction on student assessment and tools and protocols for gathering and reporting on student impacts [9].
- At the end of the PD program (i.e., at the end of the PD instruction activity), about three-fourths of projects collected data consistently. Most collected participants’ reactions, their opinions about the training, and information about PD content [6-9]. Records from 2009 and 2010 show that half to three-fourths of the projects gathered self-assessment information from participants about their learning [7-8]. In those same years, less than a quarter used exams (instructor or externally prepared) to assess participant achievements.
- After the PD, in three of the four years, a majority reported following up with participant teachers to assess implementation following the completion of instruction [6, 7, 9]. About 40 percent assessed student impacts; in no year did that reach a majority [6-9].
These reports also indicate that most PD assessment efforts engage teachers as respondents rather than as collaborators who conduct assessments in their school situations. The result is that most post-PD assessments are conducted by an external evaluator, who typically gathers only cursory data from participant teachers about their instruction or student impacts associated with the PD. The post-PD context helps to explain why. First, sole dependence on an external evaluator imposes substantial costs, legal questions, and logistical difficulties when data are collected from classrooms. For example, an external evaluator cannot personally gather data from students without satisfying substantial state and local school rules that protect students. Even if the school-based rules can be met, the time required and the associated costs of collecting these data pose significant hurdles for most evaluations. Second, participant teachers are likely to teach the PD content at different times, to various extents, and in different types of classes, making it difficult to time data gathering or to ask a common set of questions.
Methods
Assessment and evaluation can serve both formative and summative purposes, and both come with associated resource and opportunity costs. For PD efforts to gain the most value from them, formative and summative assessments need to work hand in hand, providing information for formative and summative evaluation and supporting learning for the PD team, participating teachers, and their students. Thus, we combined assessment and evaluation literature and practice to develop our survey questions.
We used a logic model to map backward the data required to meet the ATE guidelines, the timing and process of consent, and data collection for formative and summative use. This process highlighted the significant advantages of treating participant teachers as part of the data collection team. They can gather classroom data cost-effectively (e.g., eliminating travel costs) and readily satisfy legal requirements. Most importantly, based on the literature, proper assessment can serve the PD evaluation and teachers' own instructional needs while increasing student achievement.
The literature on classroom assessment supported our thinking about assessment in PD, as evaluation of PD for educators shares characteristics with evaluation practices in regular classroom instruction. Assessment is the systematic gathering and use of information for summative evaluation decisions (grades) and as feedback between teachers and students (formative). Grades are most commonly associated with assessment practices. “Summative assessment is a form of appraisal that occurs at the end of an instructional unit or at a specific point in time, such as the end of the school year. It evaluates mastery of learning and offers information on what students know and do not know” [10]. Summative assessments compare poorly with formative assessments in their effects on student learning [11]. With an overall effect size of 0.05, summative assessment does little to directly support student learning [12]; an effect that size is sufficient only to move a class average from the 50th percentile to the 52nd percentile.
Formative assessment can be used throughout the learning process to frame the direction instruction takes, serve course corrections, increase student engagement and interest, and empower students' personal and collective evaluative thinking in the service of their learning. Formative assessment can be as simple as structured question interactions with students. Black and Wiliam's synthesis of assessment practices [13] demonstrated the powerful effects of formative assessment practices as instruction tools and precursors for summative evaluation. Their work prompted the terminology “assessment for learning,” widely used today to describe “. . . any assessment for which the first priority is to serve the purpose of promoting students' learning” [14]. Additional research showed assessments to be extraordinarily powerful when used for learning purposes, both in increasing learning and in establishing the effects of instruction through summative evaluation efforts [15]. Hattie's work [15] showed that feedback, a key component of assessment for learning, strongly influenced learning effects. His findings across 1,287 studies showed an average effect size of 0.73. That effect is sufficient to move a class average from the 50th to the 76th percentile; in standard U.S. grading scales, that moves the average from fail to a C+.
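The percentile figures above follow from treating an effect size (Cohen's d) as a shift of the class mean measured in standard deviations on a normal distribution. The minimal sketch below reproduces the two cited figures under that normality assumption; the function name is ours, not from the cited sources.

```python
# Convert an effect size (Cohen's d) to the expected percentile of a class
# average that starts at the 50th percentile, assuming normal outcomes.
from scipy.stats import norm

def percentile_after_shift(d: float) -> float:
    """Percentile reached after shifting the mean upward by d standard deviations."""
    return norm.cdf(d) * 100

for label, d in [("summative assessment", 0.05), ("feedback", 0.73)]:
    print(f"{label}: d = {d} -> {percentile_after_shift(d):.1f}th percentile")

# summative assessment: d = 0.05 -> 52.0th percentile
# feedback: d = 0.73 -> 76.7th percentile (cited as the 76th in the text)
```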
Clearly, if ATE PD is to deliver the strongest possible learning impacts, formative assessments need to be incorporated in the PD; teachers need to learn how to incorporate those same formative assessment principles in their home classrooms. Data collected from those efforts can, in turn, serve the program evaluation of the PD effort. The habit of exchanging data about learning throughout the PD process, from workshop activities to home classrooms, will strengthen both outcomes and evidence.
As demonstrated by the previous ATE survey data, most PD data collection focuses directly on PD instruction and what is learned and used from that instruction. We designed our survey questions to (a) re-investigate those questions and (b) explore formative assessment practices, including how teachers were prepared for instruction and assessment in their home classrooms.
To address our first purpose, confirming previous findings about PD evaluations, this study included questions about when assessments occur in PD programs, the purposes the evaluations serve, and the proportion of projects engaged in assessment activities. To address the second purpose, we sought information about teachers' involvement throughout the PD process: planning for assessment and evaluation, what information was gathered, and the purposes for collecting data in the PD workshop and in post-workshop classroom situations. We did not focus on the perceived roles of the P.I., PD instructor, or evaluator in the assessment processes.
Our team partnered with EvaluATE to include items in a special topics section of the 2022 ATE PI survey administered by EvaluATE [16]. This section presented seven questions, each with a set of checkbox response options:
1. Why did your project conduct an assessment of your professional development activities in 2021?
2. For professional development (PD) work your project completed in 2021, which of the following types of information did you collect prior to the PD activity?
3. At the time participants were recruited, did they agree to provide follow-up feedback on the impact of the professional development on any of the following topics?
4. For professional development (PD) work your project completed in 2021, which of the following types of information did you collect during the PD activity?
5. Indicate whether the following participant-based actions were part of your project's professional development activities.
6. For professional development (PD) work your project completed in 2021, which of the following types of information did you collect at the conclusion of the PD activity?
7. For professional development (PD) work your project completed in 2021, which of the following types of information did you collect after the PD activity and after participants applied the workshop materials in their home classrooms?
As the above seven questions show, the first item asked respondents to identify the purposes served by their assessment practices during the PD work. The remaining six items sought to determine the nature of data gathered across the lifetime of their PD program:
- Before the PD instruction, when the PD program was being prepared and participants were recruited. (Questions 2 and 3)
- During the implementation of the PD instruction. (Questions 4 and 5)
- At the close of the PD instruction program. (Question 6)
- After completion of PD instruction, when PD participants returned to their institutions to teach PD-related materials and skills. (Question 7)
At each point, the questions focused on the data and practices that would provide evidence for claims about whether changes or increases had occurred, based on the nature of the assessments conducted in the respective PD components.
The survey was administered by EvaluATE under the direction of Lyssa Wilson Becho at Western Michigan University in 2022. The survey was open for completion from February 22, 2022, through April 15, 2022, with a total response of 364 out of 396 active grants, a 92 percent response rate. When the survey closed, a member of the EvaluATE staff cleaned the data to remove potential duplicate responses and investigate outliers and missing responses. Data from the special topics section and project characteristics were then shared with FAS4ATE2 for analysis.
The main survey identified 145 projects that conducted PD as part of their work in 2021. Of those respondents, 70 stated that they conducted assessments as part of their PD efforts. Those 70 projects served as the sample for this study. Because the survey was conducted as a census, data for the seven items were treated as population measures and analyzed via SPSS using descriptive statistics (i.e., frequencies and crosstabs) with no inferential tests.
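Although the actual analysis was run in SPSS, the descriptive statistics involved are simple to illustrate. The sketch below shows equivalent frequency and crosstab computations in Python with pandas; the column names and responses are hypothetical stand-ins, not the survey data.

```python
# Illustrative frequencies and crosstabs for checkbox survey items.
# Column names and values are hypothetical stand-ins for the survey data.
import pandas as pd

df = pd.DataFrame({
    "collected_prior_knowledge": [True, False, True, True, False],
    "collected_context_factors": [False, False, True, False, False],
})

# Frequencies: count and percentage of projects checking each item.
for col in df.columns:
    n = int(df[col].sum())
    print(f"{col}: n = {n} ({100 * n / len(df):.0f}%)")

# Crosstab: joint distribution of two checkbox items.
print(pd.crosstab(df["collected_prior_knowledge"],
                  df["collected_context_factors"]))
```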
Results and Discussion
We present the results here as they would unfold in a typical PD, examining activities conducted and information collected before, during, at the end of, and after the PD. We also include an analysis of data on the key factors associated with high-quality PD across those four points in time. A discussion of these results and their implications is included in the following section. The relevant tables and figures are included in the supplemental file available through the J-ATE website.
For PD to be successful, it must point toward intended outcomes in the classrooms of participant teachers. The expectation is that the knowledge, interest, and achievement of these teachers' students improve in the PD-affected courses, and that these students then enter and strengthen the technician workforce. That success requires direct learning by teachers in the PD activity. Importantly, teachers must also prepare during the PD activity for their subsequent teaching efforts. This preparation includes developing assurances that these teachers can and will teach what they have learned, use assessments effectively to guide their instruction, and show its effects on their students. The P.I. and PD instructor(s) are responsible for working with participant teachers to achieve these outcomes. Yet, our findings show that most PD evaluation and assessment efforts focus on participants' direct learning.
In addition, the findings below cover only the 70 projects that reported doing assessments. The other 75 projects delivering PD did not report conducting any assessment. Thus, the percentages reported here can be roughly halved for an overall sense of assessment practices across all 145 ATE PD projects.
Stage 1: Prior to the PD activity
A majority (55%) of special section respondents indicated that a purpose for using assessment before the PD was to confirm the alignment of participant learning needs with planned activities. That purpose is confirmed by the majority response (66%) that, prior to the PD activity, they collected information about participants' knowledge or skills in the professional development focus area. A minority (26%) assessed participants' valuation of ideas, materials, and techniques for use in their home institutions. Slightly more, 36 percent, responded that they collected information about participants' intent to implement these things. The smallest percentage (20%) indicated that they collected information about contextual factors that could influence participants' use of the PD.
Most P.I.s also reported that when recruited, the participants agreed to provide feedback on the impact of the PD on their learning (54%) and on their incorporation and implementation of what they learned (59%). As noted below, fewer P.I.s followed up to gather such data after the PD instruction.
Stage 2: During the PD activity
None of the response options was checked by a majority. Few reported that they assessed to learn participant characteristics or to address participants' assessment practices. Those findings are consistent with responses regarding the purposes for which assessments were used. When asked why they used assessments, a minority indicated they assessed (a) to guide participant learning (29%) or (b) to improve teaching and learning (36%). A comparable number (36%) indicated they gathered information about participants' learning/achievement related to the PD. Half reported gathering participants' opinions about the PD (50%). Fewer reported assessing participants' valuation of the PD's ideas, materials, or techniques for use in their home institutions (44%) or their intent to implement the PD in their home classrooms (43%). Correctly implementing the ideas, materials, or techniques is a critical step in guiding learning; only 12 percent reported gathering such information.
A near majority (47%) stated that their purpose for using assessments in the PD was to understand how participants incorporated and implemented what they learned in their home classrooms. A much smaller proportion indicated they followed through to prepare participants for such data collection. Thirty percent indicated their participants received instruction on student assessment. Similarly, 31 percent reported that participants were taught how to use assessment tools. Fewer P.I.s (26%) indicated that they provided assessment tools for their participants to use locally with their students. About the same proportion, 23 percent, reported that participants were given time to try these tools. Fewer still prepared participants to report data back to the project from their classroom situations: approximately one-sixth provided participants with reporting tools (17%), taught them how to use the tools (16%), or gave them practice with such tools (16%).
Stage 3: At the conclusion of the PD activity
Respondents reported most assessments happening near or at the close of PD instruction. A large majority of P.I.s (84%) who conducted these assessments reported that they did so to assess the outcomes and impact of professional development activities. In most cases, more than two-thirds of P.I.s using assessment reported collecting relevant information at the close of PD instruction:
- Participants’ opinions about the PD (87%)
- Information about participants’ intent to implement new ideas, materials, or techniques from the PD in their home classrooms (70%)
- Information about participants’ learning/achievement related to the PD (69%)
- Information about participants’ valuation of new ideas, materials, or techniques for use in their home institutions (64%)
Only one response option in this category garnered less than a majority. That option, “Information about any contextual factors that could influence participants’ use of PD information,” was reported by 26 percent of those using assessments.
Stage 4: After the PD activity and after participants applied the workshop materials in their home classrooms
None of this category's responses reached 50 percent. About a third of respondents (34%) indicated they collected data about participants' intentions to implement new ideas, materials, or techniques from the PD in their home classrooms. Half that number, one-sixth (17%), stated they gathered information about whether participants correctly implemented the ideas, materials, or techniques from the PD. Similar proportions of respondents asked about the PD's impact on student interest (20%) or student achievement (19%). In sum, data collection was not a substantial characteristic at this final stage of PD work.
Cross-stage factors
Across the lifetime of a PD effort, the team established four factors as key to implementing PD learning and materials in local settings:
- Participants' learning/achievement related to the PD
- Participants' valuation of new ideas, materials, or techniques for use in their home institutions
- Participants' intent to implement new ideas, materials, or techniques in their home classrooms
- Contextual factors that could influence participants' use of professional development [17]
Data collection practices varied greatly across the four factors and the PD components in which data were gathered. The greatest attention to data gathering occurred at the close of the PD activity, when the majority of those conducting assessments gathered information on the first three factors. Just over a quarter of projects reported collecting data on the fourth factor at that time. In fact, contextual matters were addressed by substantially fewer projects at all points in the PD process.
Similarly, a third or fewer of those conducting assessments addressed any of the four factors after the conclusion of the PD activity. Those small percentages are foreshadowed by the small percentage of projects that assessed these matters during the PD activity. In sum, those using assessment attended to data gathering primarily at the close of the PD activity, with far fewer attending to it at other points in the PD program. Clearly, data collection lags most in matters of context and in post-PD assessment.
The previous ATE survey findings were generally based on evaluation questions, with no specific mention of assessment. Our study treated assessment as a subset of evaluation activities and asked P.I.s to respond only if they included assessment in their PD. We defined assessment in the survey as “i) defining learning goals, ii) selecting and designing assessment tasks, iii) collecting, analyzing, interpreting, and using information to increase learning, development, and achievement.” Only half of the P.I.s whose projects delivered PD responded that they did any assessment at all, and the amount of assessment reported was typically half the amount of evaluation reported in the previous surveys. Our data support the historical findings, which show only a minority of PD efforts meeting one or more of ATE's five evaluation expectations. The pattern of the data also matched that of the reported evaluation practices: a little before the PD, almost none during, a lot at the end of PD delivery, and a little after. Most of the assessment effort happens at the end of the PD learning activity, and it is summative. Participant teachers are respondents in evaluative activities, so attempts to gather classroom-level learning data are likely not feasible. Very few P.I.s reported any attention to the cross-stage factors, particularly contextual factors, that would influence participant teachers' implementation of their learning. Clearly, most PD providers treat the end of the PD workshop instruction as the conclusion of their PD work.
As Figure 1 above demonstrates, NSF and ATE's aim with this kind of PD effort is to pass along as much benefit as possible from the PD providers through to the classroom students of the PD participants, who will then carry it on into the workforce. Therefore, PD efforts need to do whatever they can to help participant teachers learn and be prepared to help their students learn. The current situation means that ATE is missing out on two influential contributions that build on each other: formative assessment and the multiplier effect. While the majority of P.I.s reported that their PD efforts address some formative assessment prior to workshop instruction, only a minority applied it during the period of instruction and after the close of the PD workshop. The focus on summative aspects of assessment means participant teachers do not get the benefit of formative assessment for their own learning and thus are not prepared for that aspect of their role in conveying the PD instruction in ways that serve their own students' needs. The follow-on effect is that teachers may be less able to deliver the PD learning to their students, reducing their teaching impact. The ATE evaluation expectations focus on participant teachers' use in the classrooms and the effects on their students because this is where the multiplier effects occur. When the major effort in PD delivery ends with the PD instruction, and evaluation and assessment do the same, that multiplier effect is not realized and cannot be reported.
Maximizing the multiplier effect in this situation requires four shifts.
The first is a mindset shift from teacher as deliverer of expert content to an understanding that only learners can learn [18]. To support learners in this endeavor, the teacher's role shifts to facilitator of learning. To take up this role, teachers need to understand what students know and the impact of their teaching on their students; this applies to teachers and learners in the PD and in participant teachers' classrooms [19, 20]. Developing an exchange of information about how learners are learning is the heart of formative assessment.
The second shift is to integrate formative assessment into the PD and into preparation for participant teachers' return to their classrooms. Like all learners, teachers need to understand how their learning is progressing throughout the PD. They also need to practice engaging their students in assessment-for-learning practices. Orienting assessments at the end of PD instruction as formative rather than as a final “grading” step can remove the pejorative aspects of such tests. This reorientation can strengthen teachers' self-efficacy for teaching the material and facilitate collaborative interactions among teacher participants. Such interactions can carry forward in virtual conversations and support as the teachers return to their respective schools. Learning how to use formative assessment with their students can increase the intended multiplier effects of train-the-trainer PD. Their students become agents in their own learning, and by learning how to learn, they become the resilient, agile members of the workforce that ATE aims to create.
This signals the third shift: moving teachers from participants to collaborators in the PD and its evaluation. Participant teachers are uniquely able to provide feedback on their learning and their students' learning to the PD team to help them improve. Their location in classrooms positions them to contribute to post-PD evaluations because they are charged with summative assessments of their students' learning. If prepared and supported to be collaborators, they could provide summary information from their formative and summative assessments of students for the summative evaluation of the PD.
Finally, the planning horizon of PD activity, including assessment and evaluation, needs to be broadened to begin before participants arrive at the training and continue after participants have implemented the training in their classrooms. The longer time period will allow PD deliverers and participants to prepare for learning at the PD, engage in formative assessment during the PD, share that learning after the PD, and document the results throughout.
Our theoretical model for PD, viewed in concert with findings from each of the four PD components, the literature, and the ATE evaluation expectations, suggests several practical steps teams can take to implement these shifts at each stage of PD planning and delivery:
- Pre-workshop: P.I.s and evaluators work together to plan general strategies for including participant teachers.
- Pre- and during the workshop: P.I. works with PD instructors to bring them onboard for collaboration with participant teachers.
- Pre- and during the workshop: P.I.s engage with participants to reach agreements to include post-PD participant involvement as part of the PD process.
- During the workshop: P.I.s and PD instructors work with participants at the beginning of the PD to plan for participant-teacher involvement, including assessing contextual factors that might impede implementation.
- During the workshop: As part of PD instruction, the PD instructor engages participant teachers in the assessment process for instruction and outcome assessment purposes to prepare them to conduct these assessments in their classrooms.
- During and after the workshop: Evaluator and participant teachers collaborate on how to engage in post-PD assessment processes to meet ATE evaluation expectations.
These four shifts and six steps combine to provide a pathway to reorient PD toward greater use of formative assessment practices in ways that increase learning among participants and participants’ students and ultimately serve ATE’s interest in summative evaluations in post-PD classrooms.
Future research
The literature and our study findings identified four cross-stage factors as key to the impact of PD at the classroom level: participant teachers' learning, valuation of the learning, intent to implement, and implementation context. Our work is descriptive only; therefore, further research would be needed to provide evidence of causality regarding the deficiencies noted among ATE PD evaluations. Considering that more than half of the projects stated they did not employ assessments, we think this is a deficiency that warrants follow-up. We strongly encourage ATE to fund studies that look closely at the effects of these four factors, emphasizing comparison between projects engaging in various assessment practices and those not applying the practices. Among the four, the limited attention to contextual factors stands out as an especially important area to study, both to determine its effects on learning among participants and their students and to enhance evaluation use. Our survey findings also highlighted some PD efforts that engaged with formative assessment throughout the PD and gathered data about student impacts. These successful cases would bear further investigation to document their processes and capture examples of good practice. Comparison with less successful efforts could provide additional lessons and help improve PD efforts in ATE and beyond.
Conclusion
Concerns about an evaluative follow-up to check outcomes in schools after PD instruction have been raised and discussed at least since the early 1970s. A compelling argument then and now has been that such evaluation efforts are too onerous and should not be imposed on the PD program, the P.I.s, or the teacher participants. That argument overlooks or ignores the wealth of evidence that applying formative assessment practices before, during, and after PD instruction substantially strengthens the PD education process. When formative assessment to serve learning is applied, PD is stronger, and learning is increased throughout the process from instruction of PD participants to teachers’ use in their classrooms and their own students’ learning. Summative evaluations are also strengthened because formative assessments point to and prepare for summative assessments of intended outcomes. Formative and summative assessments then provide robust data for evaluation.
PD is a highly funded activity carrying substantial outcome expectations that too often are not substantiated with sound evaluation. As we suggest above, the significant challenges of delivering and evaluating PD effectively can be addressed with structurally sound planning and actions that integrate evaluation and formative assessment throughout the life of a PD intervention. Changing any process that has been operating for many years is fraught with problems and will take substantial time, effort, and patience. Our work provides a pathway to produce that change; we believe the outcomes will be sufficiently rewarding to pay the price for change. The key will be shifting mindsets about learners, assessment, the role of teachers as collaborators, and the timeline of PD. Then, many hands can make the work lighter.
Acknowledgements. The authors are grateful to Laureate Professor John Hattie, who worked with Arlen and Amy on earlier versions of this project and related articles. This work was supported by the National Science Foundation under Grant No. 1853472. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Disclosures. The authors declare no conflicts of interest.
See Supplement 1 for supporting content.
[1] D. DeChiaro, “China's STEM workforce a challenge for U.S. policymakers,” Roll Call (August 10, 2021).
[2] J.L. Graves, Jr., M. Kearney, G. Barabino, and S. Malcom, “Inequality in science and the case for a new agenda,” Proceedings of the National Academy of Sciences (2022).
[3] S.C. Funnell and P.J. Rogers, Purposeful Program Theory: Effective Use of Theories of Change and Logic Models (Jossey-Bass, 2011).
[4] V.A. Marshall, E. Sturgis, L.W. Becho, L.A. Wingate, and A. Gullickson, ATE Annual Survey 2021 Report (The Evaluation Center, Western Michigan University, 2021).
[5] “Advanced Technological Education (ATE) Program Solicitation NSF 21-598,” (2021).
[6] A.R. Gullickson, C.L.S. Coryn, and C.E. Hanssen, ATE Indicators of Productivity: Six-Year Trends 2000-2005 (The Evaluation Center, Western Michigan University, 2005).
[7] L.A. Wingate and A.R. Gullickson, 2009 Survey Snapshot: ATE Project & Center Professional Development Evaluation Practices (EvaluATE, Western Michigan University, 2009), p. 1.
[8] L.A. Wingate and A.R. Gullickson, 2010 Survey Snapshot: ATE Project & Center Professional Development Evaluation Practices (EvaluATE, Western Michigan University, 2010), p. 1.
[9] C.D. Smith, L.A. Wingate, E. Perk, L.N. Wilson, and A.R. Gullickson, ATE Annual Survey 2015 Report (The Evaluation Center, Western Michigan University, 2015).
[10] J. States, R. Detrich, and R. Keyworth, Overview of Summative Assessment (The Wing Institute, Morningside Academy, 2018).
[11] Wing Institute at Morningside Academy (https://www.winginstitute.org/assessment-summative).
[12] S.S. Yeh, “The cost-effectiveness of five policies for improving student achievement,” American Journal of Evaluation 28, 416–436 (2007).
[13] P. Black and D. Wiliam, “Assessment and Classroom Learning,” Assessment in Education: Principles, Policy & Practice 5, 7–74 (1998).
[14] P. Black, C. Harrison, C. Lee, B. Marshall, and D. Wiliam, Assessment for Learning: Putting It into Practice. (Open University Press, McGraw Hill Education, 2003).
[15] J. Hattie, Visible Learning: A Synthesis of over 800 Meta-Analyses Relating to Achievement, 1st ed. (Routledge, 2009).
[16] ATE 2022 Survey Questions, https://atesurvey.evalu-ate.org/survey-resources/.
[17] T.R. Guskey, “Gauge impact with 5 levels of data,” Learning Forward, 37, 32–37 (2016).
[18] G. Nuthall, The Hidden Lives of Learners (NZCER Press, 2007).
[19] J. Hattie, Visible Learning for Teachers (Routledge, 2011).
[20] J. Hattie, “The applicability of Visible Learning to higher education,” Scholarship of Teaching and Learning in Psychology 1, 79–91 (2015).