Abstract
The nation is facing a critical shortage of semiconductor technicians, engineers, and scientists. A lack of student interest in semiconductor careers exacerbates this scarcity. This deficit is predicted to exceed 100,000 technician jobs by 2030, which threatens the progress of semiconductor manufacturing. There is an urgent need for innovative approaches to workforce training because traditional training methods alone are unable to meet the demand for skilled workers. VR simulations hold great potential for training, and AI capabilities can scaffold the learning process through large language model (LLM)-powered tutors that provide personalized responses to students. However, such training typically requires multidisciplinary teams of programmers, 3D artists, and instructional designers, as well as complex software such as game engines. If such advanced training could be created by less specialized users with simpler software, a vast amount of quality training would become available to help address the workforce challenges. Here, we investigate whether it is feasible for community college and K-12 students to create such VR simulations. We describe how the students in the author group designed and created an AI-powered VR simulation for a common semiconductor fabrication process and pilot-tested it to evaluate its effectiveness. Pilot test results show that students reported high levels of engagement, ease of use, and increased interest in and knowledge of semiconductor manufacturing. Linear regression analysis of the data revealed that engagement, ease of use, knowledge increase, and interest increase are significant predictors of the desire to use VR training in the future. These findings show that, given well-designed software tools and expert mentoring, a team of community college and K-12 students can successfully produce engaging and effective VR training simulations. Our work suggests a nationally scalable approach to addressing the semiconductor workforce challenge while also training the next generation of VR simulation developers. To our knowledge, this is the first time K-12 and community college students have developed and evaluated VR training simulations that users rated as highly engaging and effective.
Keywords: VR simulation, semiconductor, training, simulation-based learning, artificial intelligence, simulation design
© 2025 under the terms of the J ATE Open Access Publishing Agreement
Introduction
The semiconductor industry faces a significant workforce challenge, as highlighted by the Semiconductor Industry Association (SIA). The SIA has reported a growing gap in the availability of skilled technicians, engineers, and scientists, which poses a major threat to the industry’s ability to meet national demands for semiconductor production [1]. With the onshoring of semiconductor manufacturing, more than 100,000 technician jobs are expected to remain unfilled by 2030. This shortfall could severely hinder progress in a critical industry that not only supports the modern digital economy but also plays a vital role in national security.
Addressing this workforce challenge requires new ideas and solutions for education and training. Given the pace of technological advancement, traditional methods (e-learning and in-person expert-led training) alone are insufficient for rapidly equipping a new generation of technicians with the necessary knowledge and skills. Nor are they sufficient for developing interest in the field among young students, given the limited access to fab facilities, the cost of equipment, and the shortage of skilled instructors. In education, e-learning [2] refers to the use of digital technologies and electronic media to facilitate learning and training. It allows learners to access educational content, interact with instructors, and collaborate with peers remotely through the internet, often using multimedia resources and online platforms. Although e-learning is low-cost and scalable, it lacks immersion, interactivity, and personalization [3] and thus has limited effectiveness. Expert-led training (classroom/guided instruction), on the other hand, is highly effective but not scalable due to high cost, a lack of expert instructors, and limited access to facilities and equipment [4].
Desktop-based, 2D, and 3D simulation-based learning has emerged as a promising solution in many industries, including medicine [5], aviation [6], and manufacturing [7]. This approach allows learners to experience complex processes and real-world scenarios in a controlled, risk-free environment. Simulation-based learning, when combined with e-learning and hands-on experiences [8], has been proven to yield superior educational outcomes [9] while requiring fewer resources than traditional methods alone [10, 11]. Recent advances in virtual reality (VR) technology have demonstrated the potential to deliver even higher levels of engagement and learning [12]. At the same time, artificial intelligence (AI) tools, particularly large language models (LLMs), have evolved rapidly, offering human-like language capabilities. AI tools can be applied to create an AI tutor that can answer students’ questions and personalize their learning experience.
AI-powered immersive 3D VR simulations offer a promising way to enable learners to explore virtual environments in a realistic and engaging manner [13]. These simulations closely replicate real-world environments and tasks, creating an immersive experience that fosters a sense of presence. By allowing learners to become familiar with fab environments and processes in a virtual setting, VR simulations help them acquire practical skills before stepping into an actual manufacturing facility [14]. For example, learners can use these simulations to observe the impact of equipment malfunctions or process variations on fab performance, which develops critical thinking and problem-solving skills, essential competencies for professionals in the semiconductor industry. Technicians can explore processes like photolithography (Figure 2) through simulations, an approach that has proven successful in related areas such as semiconductor physics education [15].
The primary research objective of this project is to investigate whether community college and K-12 students can develop engaging and effective AI-powered semiconductor simulations for technician training and industry awareness. We aim to test the effectiveness of our simulation through pilot testing and user surveys. This paper presents our approach to developing and delivering a VR simulation for training in the semiconductor industry. Created by a team of community college and K-12 students, this VR training simulation of a semiconductor fab enables learners to perform tasks in the virtual fab at their own pace, supported by an AI tutor to further enrich the learning experience.
Semiconductor Fabrication Facility and Equipment
Our VR simulation replicates the photolithography section of the Integrated Nanosystems Research Facility (INRF) at UC Irvine Calit2, a Class 1,000/10,000 research fab [16]. The 9,600-square-foot facility houses a wide range of equipment for deposition, lithography, etching, plasma ashing, diffusion, characterization, and back-end processing [17].
Expert technicians from UCI INRF gave our team a guided tour of the facility. The tour covered all the core semiconductor processes, including photolithography, deposition, etching, and other processes. The tours (Figure 1) were conducted on weekends over a period of 6 months. During the tours, our team captured photos and videos of the physical layout and equipment. We also gathered technical documents containing standard operating procedures (SOPs) and other information relevant to creating an accurate VR simulation. These documents formed the basis of VR development.

We thoroughly learned the processes of photolithography and dry etching (via reactive ion etching). There are many photolithographic patterning methods for transferring a desired pattern onto a semiconductor substrate, including direct laser writing (DLW), ultraviolet lithography (UVL), deep UV lithography (DUV), extreme UV lithography (EUV), and X-ray lithography (XRL). We chose to model UVL (Figure 2b). In UVL, a light-sensitive photoresist is applied onto a silicon wafer (substrate) surface and exposed to UV light through a photomask to create precise patterns. After exposure, the wafer undergoes photoresist development, in which selected areas of the photoresist are removed based on the pattern. The uncovered areas are then etched, or material is deposited onto them, to form the desired microchip structures. Such patterning, etching, and deposition steps are repeated many times according to the fabrication process flow and integrated circuit design, which determine the pattern at each step [18].

Our target audience consisted of high school and community college students with little or no prior knowledge of the semiconductor industry or its equipment. The learning objectives of the intervention were to build awareness of and interest in the semiconductor industry and to provide exposure to photolithography equipment and procedures.
Methods
Our goal was to deliver 3D VR simulations of photolithography procedures on Meta Quest VR headsets for maximum immersion. We also aimed to deliver the simulation in a web browser for students who had difficulty with VR headsets (e.g., VR sickness). We used the HyperSkill no-code authoring platform [19] to develop the simulation. The platform is built on several technologies, including the Unity game engine, Amazon Web Services, and OpenAI services, and is designed to enable the creation, delivery, and evaluation of VR/AR training simulations [20].
Our VR development process had three steps: design, development, and testing. The design process began with a review and analysis of documents collected during our facility tours. We analyzed the SOP documents, photos, and videos to create a storyboard outlining a VR simulation scenario that satisfies the learning objectives. The storyboard was then elaborated into an instructional design script with step-by-step instructions to guide learners through the photolithography task. Each instructional step included three components: a text prompt displayed on the user's screen, the expected user response to the instruction, and the system's response to the user's action. Once the script was ready, we created an asset list of the 3D models required to represent the equipment, environment, tools, etc., in the VR simulation. Once the instructional design script and asset list were ready, the design step was complete, and simulation development began.
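To make the three-part step structure concrete, the sketch below shows one hypothetical way an instructional step could be represented as data. The field names and the example step are illustrative only and do not reflect HyperSkill's internal format.

```python
from dataclasses import dataclass

@dataclass
class InstructionalStep:
    """One step of the instructional design script (illustrative fields only)."""
    prompt: str           # text shown on the user's screen
    user_action: str      # the action the learner is expected to perform
    system_response: str  # how the simulation reacts once the action is detected

# Hypothetical example step drawn from the photolithography scenario.
spin_coat_step = InstructionalStep(
    prompt="Place the wafer on the spinner chuck and press the vacuum button.",
    user_action="press_vacuum_button",
    system_response="Vacuum engages; the wafer is held in place and the next prompt appears.",
)
```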
Phase 1: Development and Assets
The software has an asset library with over 100 generic objects and lab spaces resembling INRF spaces. However, a few semiconductor-specific 3D models, such as the spinner shown in Figure 3, were not found in the library and needed to be created using 3D modeling software and uploaded to the asset library. The student team included members who were familiar with Blender and were able to build the assets. Once all the assets in the asset list were available in the asset library, the authoring of the instructional design script started. Authoring in HyperSkill involved the use of three tabs: Scene Layout, Scenario Flow, and AI. The first part of authoring was done in the Scene Layout tab shown in Figure 4. On this tab, the 3D assets were dragged and dropped from the asset library to create the 3D layout of the simulation environment.


Phase 2: Storyboard and Scenario Flow
Once all the assets were laid out, the next step in authoring was to define the behavior of the simulation using the Scenario Flow tab shown in Figure 5. Each step in the script was represented as a state node. Each state node had transitions connecting it to other state nodes, and the user's actions triggered these transitions. For example, the step of turning on the spinner would be a state node, and the action of the user clicking on a button would trigger a transition to the next step. Once all the steps were created, the simulation was ready to be tested on the VR device or in a web browser.
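Conceptually, the scenario flow behaves like a finite state machine, with user actions driving transitions between steps. The sketch below is a minimal, hypothetical illustration of this idea; the state names and action labels are invented for the example and are not HyperSkill's internal representation.

```python
# Minimal state-machine sketch of a scenario flow (illustrative only).
# Each state maps a triggering user action to the next state.
scenario_flow = {
    "place_wafer":       {"wafer_placed_on_chuck": "turn_on_spinner"},
    "turn_on_spinner":   {"spinner_button_clicked": "apply_photoresist"},
    "apply_photoresist": {"resist_dispensed": "start_spin_cycle"},
    "start_spin_cycle":  {"spin_complete": "end"},
}

def advance(current_state: str, user_action: str) -> str:
    """Return the next state if the user's action matches a transition."""
    transitions = scenario_flow.get(current_state, {})
    return transitions.get(user_action, current_state)  # stay put on unrecognized actions

# Example: the learner clicks the spinner button while in the "turn_on_spinner" state.
print(advance("turn_on_spinner", "spinner_button_clicked"))  # -> "apply_photoresist"
```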

AI Tutor
To provide personalized responses to learners, we included an AI-powered virtual tutor (chatbot) within the simulation. The tutor accepts voice and text input from learners and provides voice and text-based responses, adapting the learning experience to each learner [21]. For example, a learner could unmute their microphone, ask the tutor any question, and receive an accurate, personalized response. The AI tutor was given a 3D visual representation, as shown in Figure 6 [19].
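As an illustration of how an LLM-backed tutor of this kind can be implemented, the sketch below shows a minimal question-answering call using the OpenAI Python client. It is not the platform's actual implementation; the model name and system prompt are assumptions made for the example, and in the deployed simulation speech-to-text and text-to-speech layers would surround a call like this to support the voice interaction described above.

```python
# Minimal sketch of an LLM-backed tutor call (illustrative; not HyperSkill's implementation).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_tutor(question: str) -> str:
    """Send a learner's question to the LLM with a semiconductor-tutor system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice for this example
        messages=[
            {"role": "system",
             "content": "You are a tutor helping a student learn UV photolithography "
                        "in a virtual semiconductor fab. Answer briefly and accurately."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_tutor("Why do we spin-coat the photoresist before exposure?"))
```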

Cross Platform Delivery
The simulations were configured to be used on both VR and the web. The web platform is key for expanding access, as some students face difficulties using VR headsets for a variety of reasons, including VR sickness and eyeglasses with large frames. Because the software supports deployment to many hardware platforms, simulations can be used on any supported device without changes. Supported devices include Meta Quest 2, HoloLens, web, desktop (Windows, Mac), mobile (Android, iOS), and Apple Vision Pro.
Data Collection and Testing
The dataset consisted of survey questionnaire responses and log data gathered from the simulations. The software automatically collects the log data, which lists timestamped user actions and simulation events. The purpose of the survey questions was to assess system ease of use, user engagement, perceived knowledge gain, perceived interest increase, interest in immersive simulations, and frequency of video game use. We found validated measures of these and similar constructs in the literature [12, 22, 23]. We were only allowed to include six questions in the survey and thus included one question per construct (Table 1). All questions were rated on a 5-point Likert scale [24] (1 = Strongly disagree, 5 = Strongly agree), with the exception of question 6 (Table 1).
Testing of the VR simulation began with usability testing, aimed at refining the simulation iteratively before moving on to formal data collection. We conducted usability testing with five participants who were not involved in the project, allowing us to gather impartial feedback on the usability and functionality of the simulation. These participants were invited to engage with the VR environment while we observed their interactions, documented their feedback, and recorded any challenges they encountered. This phase was essential for assessing how intuitive and navigable the simulation was, especially for individuals with no prior exposure to semiconductor manufacturing or the specific technologies featured in the project.
Following the usability testing, a pilot study was conducted with 29 participants (23 students and 6 faculty and staff members) at Pasadena City College (PCC) to further evaluate the simulation’s effectiveness. The pilot test focused on assessing the impact of the VR simulation on engagement and on participants’ interest in pursuing semiconductor-related careers. Figure 7 illustrates the pilot testing at PCC. Participants completed the simulation experience and subsequently filled out a brief survey to provide feedback. Table 1 presents the survey questions and corresponding response data. These data helped us evaluate the overall user experience and gauge the simulation’s potential as a training tool in the semiconductor industry. Survey responses and further analysis are available [25].

Results and Discussion
Survey results show that the VR simulation was well received by users. As shown in Table 1, over 72% of participants agreed or strongly agreed with each of questions 1-5. Users found the simulation easy to use and engaging, and reported that it increased both their interest in and knowledge of the semiconductor industry.
Table 1. Survey response data (𝑁=29)
# | Question | Strongly disagree | Disagree | Neither | Agree | Strongly agree |
1 | I thought the system was easy to use. | 0.0% | 6.9% | 20.7% | 20.7% | 51.7% |
2 | I found the simulation to be engaging. | 3.4% | 3.4% | 10.3% | 17.2% | 65.5% |
3 | The simulation changed my knowledge of the semiconductor fabrication process. | 0.0% | 6.9% | 10.3% | 17.2% | 65.5% |
4 | The simulation changed my interest level in the semiconductor industry. | 3.4% | 6.9% | 17.2% | 13.8% | 58.6% |
5 | I would like to use more immersive simulations in the future. | 6.9% | 0.0% | 6.9% | 10.3% | 75.9% |
# | Question | Daily | Weekly | Monthly | Rarely | Never |
6 | How often do you play video games? | 31% | 10.3% | 10.3% | 17.2% | 31% |
Data Analysis
We began by examining the effect of prior video game use on the desire to use VR training in the future. Figure 8 plots responses to question 5 (I would like to use more immersive simulations in the future) against question 6 (How often do you play video games) and shows no apparent relationship between video game play frequency and the desire to use VR in the future, suggesting that VR simulations appeal broadly to users. This indicates potential for scaling to users who are not already familiar with video games or virtual reality. Data analysis also revealed key predictors of future interest in VR simulation use, such as change in interest and change in knowledge.

Next, we analyzed the survey data to further investigate which specific aspects of the users’ experience predicted their desire to use more immersive simulations in the future. This analysis can reveal insights about how to allocate resources when designing immersive VR simulations.
We used linear regression analysis with the question 5 response as the predicted variable and responses to questions 1-4 as predictor variables. Using one predictor variable at a time, we calculated the percentage of variability in the question 5 response explained by each predictor. Table 2 shows the results.
Table 2. Variability explained and the p-values
# | Question | % variability explained | p-value |
1 | I thought the system was easy to use. | 32.4% | 0.0007509 |
2 | I found the simulation to be engaging. | 64.3% | 0.00002364 |
3 | The simulation changed my knowledge of the semiconductor fabrication process. | 35.3% | 0.00002364 |
4 | The simulation changed my interest level in the semiconductor industry. | 39.2% | 0.00001619 |
Engagement explained 64.3% of the variability in the question 5 response, followed by interest change (39.2%), knowledge change (35.3%), and ease of use (32.4%). Users can be expected to desire simulations that deliver new knowledge in a format that is easy to use, is engaging, and sparks interest. Engagement also emerged as the best predictor of the desire to use VR in the future, underscoring the importance of designing engaging simulations. Educational simulation teams do not always place adequate emphasis on designing for engagement. Notably, the survey responses show high engagement even though we did not use any gamification design patterns [26]. We attribute the engagement primarily to the high fidelity, immersion, and interactivity of the VR simulation, the AI guidance, and the embodied VR experience.
We conducted a similar linear regression analysis with question 4 (change in interest) response as the predicted variable and responses to questions 1-3 as predictor variables. Change in knowledge explained the highest percentage of variability (48.5%), which makes intuitive sense.
We also explored products of predictors to identify the best model for predicting the desire for future VR use (question 5). We found that the product of change in interest, change in knowledge, and engagement explains 79.46% of the variability in question 5 responses (p-value of 0.000000319). The data provide strong evidence that the combination of increased interest, increased knowledge, and engagement is a significant predictor of the desire to use VR simulations in the future.
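For readers who wish to reproduce this type of analysis, the sketch below shows how the per-predictor variability explained (R²) and the product-of-predictors model can be computed with SciPy. The response arrays shown are placeholders, not the actual survey responses, which are available in the repository cited above [25].

```python
# Sketch of the regression analysis: R^2 per predictor and a product-of-predictors model.
# The arrays below are placeholders, not the actual survey responses [25].
import numpy as np
from scipy.stats import linregress

# Likert responses coded 1-5; replace with the real data from the survey file.
q1_ease = np.array([5, 4, 3, 5, 4, 5, 2, 5])       # ease of use
q2_engage = np.array([5, 5, 4, 5, 3, 5, 2, 5])     # engagement
q3_knowledge = np.array([5, 4, 4, 5, 3, 5, 2, 4])  # change in knowledge
q4_interest = np.array([5, 4, 3, 5, 4, 5, 1, 5])   # change in interest
q5_future = np.array([5, 5, 4, 5, 4, 5, 1, 5])     # desire for future VR use

# One predictor at a time: R^2 and p-value for each simple regression on question 5.
for name, x in [("ease", q1_ease), ("engagement", q2_engage),
                ("knowledge", q3_knowledge), ("interest", q4_interest)]:
    fit = linregress(x, q5_future)
    print(f"{name}: R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.2e}")

# Product-of-predictors model: regress question 5 on interest x knowledge x engagement.
product = q4_interest * q3_knowledge * q2_engage
fit = linregress(product, q5_future)
print(f"product model: R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.2e}")
```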
Summary of Pilot Test Results
Pilot testing aimed to evaluate the student-level outcomes of using VR simulations. We collected data from both the simulation log files and the survey questionnaire. We report two key findings. First, the workforce VR simulation was effective in increasing interest in and knowledge of the semiconductor industry. This finding suggests that VR simulations can be effective for other complex and technical subjects in other industries, such as advanced manufacturing. Second, regression analysis showed that ease of use, engagement, knowledge increase, and interest increase are significant predictors of students’ desire to use VR training in the future. This finding is useful because it provides quantitative guidance to VR simulation designers on prioritizing their efforts; for example, engagement should be given high priority. Furthermore, given that the gain in interest depends strongly on the gain in knowledge, VR simulation design must be based on a thorough analysis of the knowledge and skill domain.
Conclusion
In this study, we investigated whether teams of community college and K-12 students, supported with well-designed tools and proper guidance, can create effective VR simulations for workforce training in the semiconductor industry. Typically, such training requires multidisciplinary teams of programmers skilled in various technologies, 3D artists, and instructional designers, as well as complex software such as game engines. However, if the required technical skill level could be reduced, a vast library of high-quality training could be created inexpensively to address workforce challenges. The study showed that our team of students successfully produced a high-quality workforce VR simulation and pilot-tested it to gather feedback on engagement and effectiveness. Our team gained experience building VR simulations and conducting research, while the pilot participant students learned about VR simulation-based training and the semiconductor industry.
This work was presented at the California Institute of Technology (Caltech) in May 2024 and the TechConnect conference in June 2024. Since then, three other colleges have formally expressed interest in replicating this approach within their departments. This level of interest from other institutions suggests a potential pathway toward scaling this work nationally.
Limitations and Future Work
This study was limited in scope, as the pilot test involved only 23 students and six faculty and staff members. While these results provided initial insights, a larger and more diverse sample population would likely yield richer data and enable more robust generalizations. A broader demographic, representing a variety of academic backgrounds and experiences, would allow for more comprehensive conclusions about the effectiveness and adaptability of the training program.
In terms of future research, we propose several avenues to further develop and expand the scope of the AI tutor and the immersive training environment. First, personalization and adaptability are areas for enhancement. Future work could investigate the incorporation of adaptive learning technologies into the AI tutor. This would enable the tutor to dynamically adjust training content based on the learner’s progress, strengths, and weaknesses. Moreover, integrating didactic training modules alongside immersive experiences would provide a more holistic approach, ensuring foundational knowledge is reinforced before practical skills are applied. The inclusion of assessments in the form of quizzes would help identify areas of improvement. Studying how the AI tutor influences student engagement, motivation, and learning outcomes would provide insights into its efficacy and areas for improvement.
Another direction of future research could focus on adapting the VR simulation for use with mixed reality (MR) devices such as the Meta Quest 3, Apple Vision Pro, and Microsoft HoloLens. Mixed reality simulations overlay virtual content onto the real-world environment, providing learners with the ability to interact with physical equipment and tools while simultaneously receiving virtual guidance and feedback. Adapting VR simulations to MR devices such as the HoloLens or Apple Vision Pro involves technical challenges due to differences in spatial mapping, interaction models, and device capabilities. VR environments are immersive and rely on fully virtual spaces, while MR devices require spatial mapping to anchor virtual objects within the real-world space. This process uses sensors, such as depth-sensing cameras, to map the physical world so that virtual content aligns accurately with real objects. Additionally, VR interactions, typically driven by controllers, must be adjusted for MR, where users interact with both real and virtual elements through hand tracking, spatial gestures, and real-world positioning. Cross-platform tools can facilitate the transfer of basic content, but device-specific adjustments are needed, including recalibrating rendering approaches, integrating spatial sound, and optimizing user inputs to suit each platform’s unique hardware, such as the holographic display in the HoloLens or the camera-based passthrough system in the Quest 3. This requires manual refinement to achieve MR capability across different devices.
Additionally, the integration of computer vision and sensors could enhance the MR simulation, enabling it to evolve into a digital twin of the real-world environment [27]. Such integration would enable the simulation to respond dynamically to changes in the physical environment and offer real-time feedback to the user. However, it is important to note that achieving a true digital twin requires robust, real-time, bidirectional interaction between the physical system and its digital counterpart. Integrating live sensor data alone yields a unidirectional data flow, in which the digital simulation reflects changes in the physical environment but does not actively control or influence the physical systems. Future work could focus on refining this interaction and developing systems that enable two-way data flow, ensuring a more accurate and responsive digital twin experience.
In this paper, we only covered photolithography simulations. Future work could focus on developing similar simulations for deposition, etching, packaging and other processes. We could also form collaborations with other institutions that have semiconductor fabs to create VR simulations to scale up a large training library and explore design strategies to further improve the VR simulations for knowledge and skill acquisition, retention and transfer.
Finally, future research should also focus on the analysis of log data captured from the simulations, which would provide further insights into the effectiveness of immersive training. By analyzing user interaction logs, we could identify patterns in how learners engage with the training material, where they encounter difficulties, and which aspects of the training lead to better learning outcomes. This data-driven approach could be used to further optimize the simulations and tailor them to individual learners’ needs, improving the overall impact of the training program.
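As a simple illustration of the kind of log analysis envisioned here, the sketch below computes the time a learner spends on each step from timestamped events. The log format (timestamp, user, event, step) is hypothetical and may differ from the schema actually produced by the platform.

```python
# Illustrative log analysis: time spent per step, computed from timestamped events.
# The log format below is hypothetical; the platform's actual schema may differ.
from datetime import datetime

log = [
    ("2024-05-10T10:00:05", "user01", "step_started", "place_wafer"),
    ("2024-05-10T10:01:40", "user01", "step_completed", "place_wafer"),
    ("2024-05-10T10:01:41", "user01", "step_started", "turn_on_spinner"),
    ("2024-05-10T10:04:02", "user01", "step_completed", "turn_on_spinner"),
]

durations = {}
start_times = {}
for ts, user, event, step in log:
    t = datetime.fromisoformat(ts)
    if event == "step_started":
        start_times[(user, step)] = t
    elif event == "step_completed" and (user, step) in start_times:
        durations[(user, step)] = (t - start_times.pop((user, step))).total_seconds()

for (user, step), seconds in durations.items():
    print(f"{user} spent {seconds:.0f} s on '{step}'")
```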
Acknowledgments. The authors would like to extend their gratitude to Professor G.P. Li, Marc Palazzo, and C.Y. Lee at the University of California, Irvine, for their guidance and access to the INRF research labs. We also thank SimInsights Inc. for granting access to the HyperSkill platform and for their mentorship. A special thanks to Professor Jared Ashcroft at Pasadena City College for including our team in this project. Finally, we acknowledge the funding support from the National Science Foundation Advanced Technological Education (ATE) Micro Nano Technology Education Center.
Disclosures. The authors declare no conflicts of interest.
[1] SIA, “America faces significant shortage of tech workers in the semiconductor industry and throughout the U.S. economy.” Semiconductor Industry Association, Washington D.C, United States of America, 2023.
[2] D. R. Garrison, “E-learning in the 21st century: A framework for research and practice.”, 2nd ed. London, Routledge, 2011.
[3] C. Blezu and E. M. Popa, “E-learning and its prospects in education,” presented at the 12th WSEAS International Conference on Computers, Heraklion, Greece, July 23-25, 2008.
[4] D. Joyner. “Teaching at Scale: Improving Access, Outcomes, and Impact Through Digital Instruction”. London, Routledge. 2022.
[5] Lateef, F. (2010). Simulation-based learning: Just like the real thing. Journal of Emergencies, Trauma and Shock, 3(4), 348. https://doi.org/10.4103/0974-2700.70743
[6] F. Jentsch, & M. T Curtis,. “Simulation in aviation training.”, 1st ed. Taylor and Francis eBooks, 2017.
[7] Hosseinpour, F., & Hajihosseini, H. (2009). Importance of simulation in manufacturing. Academy of Science, Engineering and Technology, International Journal of Social, Behavioral, Educational, Economic, Business and Industrial Engineering, 3, 229-232.
[8] L. Cridlin, “The importance of hands-on learning,” presented at the International Laser Safety Conference, San Francisco, California, March 19-22, 2007.
[9] M. Taher & A. Khan. “Comparison of simulation-based and hands-on teaching methodologies on students’ learning in an engineering technology program.” QScience Proceedings, 2015.
[10] C. Aldrich. “Simulations and the future of learning: An innovative (and perhaps revolutionary) approach to e-learning.” Educational Technology and Society, 2014.
[11] E. García Plaza et. al, “Virtual machining applied to the teaching of manufacturing technology.” Materials Science Forum, 692, 120–127, 2011.
[12] A. Srinivasa et al., “Virtual reality and its role in improving student knowledge, self-efficacy, and attitude in the materials testing laboratory,” International Journal of Mechanical Engineering Education, 2020.
[13] Gomez, L.I. (2020). “Immersive Virtual Reality for Learning Experiences.” In Burgos, D. (eds) Radical Solutions and eLearning. Lecture Notes in Educational Technology. Springer, Singapore.
[14] A. K. Nassar, F. Al-Manaseer, L. M. Knowlton, and F. Tuma, “Virtual reality (VR) as a simulation modality for technical skills acquisition,” Annals of Medicine and Surgery, vol. 71, 2021.
[15] Bakhtibaeva, et al. “Use of information technology in teaching semiconductor physics.” Indian Journal of Science and Technology, 2016.
[16] UCI Integrated Nanosystems Research Facility, “INRF About,” University of California, Irvine. Accessed August 2024. [Online]. Available: www.inrf.uci.edu/
[17] UCI, Integrated Nanosystems Research Facility. “INRF Cleanroom Usage Rates” 11_01_23.xls. uci.edu, 2023.
[18] I. Short “Photolithography.” Fundamentals of Microfabrication and Nanotechnology, Three-Volume Set, 2018
[19] Siminsights. “HyperSkill.” siminsights.com. Accessed Oct. 30, 2024 [Online]. Available: https://www.siminsights.com/hyperskill/
[20] Siminsights. “HyperSkill User’s Guide.” siminsights.com. Accessed Oct. 30, 2024 [Online]. Available: https://docs.siminsights.com/
[21] C. D. Luca et al., “2.0 AI in Semiconductor Industry,” 2021. https://www.semanticscholar.org/paper/2.0-AI-in-Semiconductor-Industry
[22] J.R. Lewis. The system usability scale: Past, present, and future. International Journal of Human–Computer Interaction, 2018.
[23] H. O’Brien et al, “A practical approach to measuring user engagement with the refined user engagement scale (UES) and new UES short form.” International Journal of Human-Computer Studies, 2018.
[24] A. Joshi et al., “Likert scale: Explored and explained,” British Journal of Applied Science and Technology, 2015.
[25] G. Codina. “START Data Analysis.” Gabriel-Codina/START-Data-Analysis. github.com.
[26] C. Lewis, “Motivational design patterns,” UC Santa Cruz. [Online]. Available: ark:/13030/m5bp06nc.
[27] H. Fan et al. “Enhancing metal additive manufacturing training with the advanced vision language model: A pathway to immersive augmented reality training for non-experts.” Journal of Manufacturing Systems, 2024.