

What the Most Detailed Peer-Reviewed Study on AI in the Classroom Taught Us

The rapid proliferation and superb capabilities of widely available LLMs have ignited intense debate within the educational sector. On one side, they offer students a 24/7 tutor who is always available to help; but then, of course, students can use LLMs to cheat! I’ve seen both sides of the coin with my students; yes, even the bad side, and even at the university level.

While the potential benefits and problems of LLMs in education are widely discussed, there is a critical need for robust, empirical evidence to guide the integration of these technologies into classrooms, curricula, and studies in general. Moving beyond anecdotal accounts and rather limited studies, a recent work titled “The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis” offers one of the most comprehensive quantitative assessments to date. The article, by Jin Wang and Wenxiang Fan from the Chinese Education Modernization Research Institute of Hangzhou Normal University, was published this month in the journal Humanities and Social Sciences Communications from Springer Nature. It is as complex as it is detailed, so here I will walk through the findings reported in it, touching on the methodology and delving into the implications for those developing and deploying AI in educational contexts.

Into it: Quantifying ChatGPT’s Impact on Student Learning

The study by Wang and Fan is a meta-analysis that synthesizes data from 51 research papers published between November 2022 and February 2025, examining the impact of ChatGPT on three crucial student outcomes: learning performance, learning perception, and higher-order thinking. For AI practitioners and data scientists, this meta-analysis provides a valuable, evidence-based lens through which to evaluate current LLM capabilities and inform the future development of education technologies.
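To make concrete what a meta-analysis actually computes, here is a minimal sketch of random-effects pooling with the DerSimonian-Laird estimator, the kind of machinery behind results like the ones below. The effect sizes and variances are toy values for illustration only, not data from Wang and Fan's paper.

```python
import numpy as np

# Toy per-study inputs: Hedges' g and its variance for a handful of
# hypothetical studies -- NOT the actual data from Wang & Fan.
g = np.array([0.9, 0.5, 1.2, 0.7, 0.4])
var_g = np.array([0.04, 0.09, 0.12, 0.05, 0.08])

# DerSimonian-Laird estimate of between-study variance (tau^2).
w_fixed = 1.0 / var_g
g_fixed = np.sum(w_fixed * g) / np.sum(w_fixed)
Q = np.sum(w_fixed * (g - g_fixed) ** 2)  # heterogeneity statistic
df = len(g) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / c)

# Random-effects pooling: each study is re-weighted by 1/(v_i + tau^2).
w_re = 1.0 / (var_g + tau2)
g_pooled = np.sum(w_re * g) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
ci = (g_pooled - 1.96 * se_pooled, g_pooled + 1.96 * se_pooled)

print(f"pooled g = {g_pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), tau^2 = {tau2:.3f}")
```

Each study's effect is weighted by the inverse of its total variance, so noisy studies count for less, and between-study heterogeneity widens the confidence interval around the pooled effect.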

The primary research question sought to determine the overall effectiveness of ChatGPT across the three key educational outcomes. The meta-analysis yielded statistically significant and noteworthy results:

Regarding learning performance, data from 44 studies indicated a large positive impact attributable to ChatGPT usage: on average, students who integrated ChatGPT into their learning processes demonstrated significantly improved academic outcomes compared to control groups.

For learning perception, encompassing students’ attitudes, motivation, and engagement, analysis of 19 studies revealed a moderate but significant positive impact. This implies that ChatGPT can contribute to a more favorable learning experience from the student’s perspective, despite the a priori limitations and problems associated with a tool that students can also use to cheat.

Similarly, the impact on higher-order thinking skills—such as critical analysis, problem-solving, and creativity—was also found to be moderately positive, based on 9 studies. It is good news, then, that ChatGPT can support the development of these crucial cognitive abilities, although its influence is clearly not as pronounced as it is on direct learning performance.

How Different Factors Affect Learning With ChatGPT

Beyond overall efficacy, Wang and Fan investigated how various study characteristics affected ChatGPT’s impact on learning. Let me summarize for you the core results.

First, there was a strong effect of the type of course. The largest effect was observed in courses aimed at developing skills and competencies, followed closely by STEM and related subjects, and then by language learning/academic writing.

The course’s learning model also played a critical role in modulating how much ChatGPT assisted students. Problem-based learning saw a particularly strong potentiation by ChatGPT, yielding a very large effect size. Personalized learning contexts also showed a large effect, while project-based learning demonstrated a smaller, though still positive, effect.

The duration of ChatGPT use was also an important moderator of its effect on learning performance. Short durations on the order of a single week produced small effects, while extended use over 4–8 weeks had the strongest impact, which did not grow much further when usage was extended beyond that. This suggests that sustained interaction and familiarity may be crucial for cultivating positive affective responses to LLM-assisted learning.

Interestingly, the students’ grade levels, the specific role played by ChatGPT in the activity, and the area of application did not significantly moderate learning performance across the analyzed studies.

Other factors, including grade level, type of course, learning model, the specific role adopted by ChatGPT, and the area of application, did not significantly moderate the impact on learning perception.
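For intuition on how such moderator results are obtained, here is a sketch of a subgroup (moderator) analysis: pool each subgroup separately, then test whether the subgroup means differ by more than sampling error allows. The numbers are invented placeholders, and I use fixed-effect pooling within subgroups for brevity, which simplifies what the paper actually does.

```python
import numpy as np

def pool_fixed(g, var_g):
    """Inverse-variance pooled effect and its variance (fixed-effect, for brevity)."""
    w = 1.0 / np.asarray(var_g)
    g_bar = np.sum(w * np.asarray(g)) / np.sum(w)
    return g_bar, 1.0 / np.sum(w)

# Toy subgroups, e.g. studies split by learning model -- illustrative values only.
subgroups = {
    "problem-based": ([1.3, 1.1, 1.5], [0.10, 0.08, 0.12]),
    "project-based": ([0.4, 0.6],      [0.09, 0.07]),
}

pooled = {name: pool_fixed(g, v) for name, (g, v) in subgroups.items()}

# Q_between: do the subgroup means differ more than sampling error allows?
w = np.array([1.0 / var for _, var in pooled.values()])
means = np.array([m for m, _ in pooled.values()])
grand = np.sum(w * means) / np.sum(w)
Q_between = np.sum(w * (means - grand) ** 2)  # ~ chi^2 with (#groups - 1) df

for name, (m, var) in pooled.items():
    print(f"{name}: g = {m:.2f} (SE {np.sqrt(var):.2f})")
print(f"Q_between = {Q_between:.2f} on {len(pooled) - 1} df")
```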

The study further showed that when ChatGPT functioned as an intelligent tutor, providing personalized guidance and feedback, its impact on fostering higher-order thinking was most pronounced.

Implications for the Development of AI-Based Educational Technologies

The findings from Wang & Fan’s meta-analysis carry substantial implications for the design, development, and strategic deployment of AI in educational settings:

First of all, there is strategic scaffolding for deeper cognition. The impact on the development of thinking skills was somewhat lower than on performance, which means that LLMs are not inherently cultivators of deep critical thought, even if they do have a positive global effect on learning. Therefore, AI-based educational tools should integrate explicit scaffolding mechanisms that foster the development of thinking processes, guiding students from knowledge acquisition towards higher-level analysis, synthesis, and evaluation in parallel to the AI system’s direct help.
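To illustrate the idea (as my own minimal sketch, not anything the paper prescribes), such scaffolding can be as simple as a ladder of prompt templates that escalate the cognitive demand, loosely following Bloom's taxonomy. The levels and wording here are hypothetical.

```python
# A minimal scaffolding ladder: each rung reframes the same topic at a
# higher cognitive level (loosely following Bloom's taxonomy). Hypothetical
# templates -- the paper does not prescribe any specific implementation.
SCAFFOLD_LADDER = [
    ("recall",   "State in your own words: what is {topic}?"),
    ("apply",    "Use {topic} to solve this small case: {case}."),
    ("analyze",  "What assumptions does {topic} rely on, and when do they break?"),
    ("evaluate", "Argue for or against using {topic} here, citing evidence."),
]

def next_prompt(level: int, topic: str, case: str = "...") -> str:
    """Return the prompt for the current rung; callers advance `level`
    only after the student's answer meets the rung's bar."""
    name, template = SCAFFOLD_LADDER[min(level, len(SCAFFOLD_LADDER) - 1)]
    return f"[{name}] " + template.format(topic=topic, case=case)

print(next_prompt(0, "gradient descent"))
print(next_prompt(2, "gradient descent"))
```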

Thus, the implementation of AI tools in education must be framed properly, and as we saw above, this framing will depend on the exact type and content of the course, the learning model one wishes to apply, and the available time. One particularly interesting setup would be one where the AI tool supports inquiry, hypothesis testing, and collaborative problem-solving. Note, though, that the findings on optimal duration imply the need for onboarding strategies and adaptive engagement techniques to maximize impact and mitigate potential over-reliance.

The superior impact documented when ChatGPT functions as an intelligent tutor highlights a key direction for AI in education. Developing LLM-based systems that can provide adaptive feedback, pose diagnostic and reflective questions, and guide learners through complex cognitive tasks is paramount. This requires moving beyond simple Q&A capabilities towards more sophisticated conversational AI and pedagogical reasoning.
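As a rough illustration of what "beyond simple Q&A" could mean, here is a minimal diagnose-then-adapt tutor turn. The `llm` parameter is a placeholder for any text-in/text-out completion function (a thin wrapper around whatever provider you use), and the one-word rubric is a deliberately crude stand-in for real pedagogical diagnosis.

```python
from typing import Callable

def tutor_turn(llm: Callable[[str], str], concept: str, student_answer: str) -> str:
    """One turn of a diagnose-then-adapt tutor loop.

    `llm` is any text-in/text-out completion function -- a placeholder,
    not a specific SDK call.
    """
    # 1. Diagnose: ask the model to grade the answer against a rubric
    #    instead of answering the question itself.
    diagnosis = llm(
        f"You are a tutor. Concept: {concept}.\n"
        f"Student answer: {student_answer}\n"
        "Reply with exactly one word: CORRECT, PARTIAL, or WRONG."
    ).strip().upper()

    # 2. Adapt: the follow-up depends on the diagnosis, never just the answer key.
    if diagnosis.startswith("CORRECT"):
        follow_up = f"Pose one harder, reflective question about {concept}."
    elif diagnosis.startswith("PARTIAL"):
        follow_up = (f"Name the gap in the student's answer about {concept} and "
                     "ask one guiding question. Do not reveal the full solution.")
    else:
        follow_up = f"Re-explain {concept} with a simpler example, then ask the student to retry."

    return llm(f"You are a tutor. {follow_up}")
```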

On top of that, there are a few non-minor issues to work on. While LLMs excel at information delivery and task assistance (leading to high performance gains), enhancing their impact on affective domains (perception) and advanced cognitive skills requires better interaction designs. Incorporating elements that foster student agency, provide meaningful feedback, and manage cognitive load effectively is a crucial consideration.

Limitations and Where Future Research Should Go

The authors of the study prudently acknowledge some limitations, which also illuminate avenues for future research. Although the pooled sample is the largest analyzed to date, it is still small, and very small for some specific questions. More research needs to be done, and a new meta-analysis will probably be required when more data becomes available. A difficult point, and this is my personal addition, is that because the technology progresses so fast, results may unfortunately become obsolete very rapidly.

Another limitation in the studies analyzed in this paper is that they are largely biased toward college-level students, with very limited data on primary education.

Wang and Fan also discuss what AI practitioners, data scientists, and pedagogues should consider in future research. First, they should try to disaggregate effects based on specific LLM versions, a point that is critical because models evolve so fast. Second, they should study how students and teachers typically “prompt” the LLMs, and then investigate the impact of differential prompting on final learning outcomes. Third, they need to develop and evaluate adaptive scaffolding mechanisms embedded within LLM-based educational tools. Finally, over the long term, we need to explore the effects of LLM integration on knowledge retention and the development of self-regulated learning skills.

Personally, I add at this point, I am of the opinion that studies need to dig more into how students use LLMs to cheat, not necessarily willingly but possibly also by seeking shortcuts that lead them astray or let them get the work out of the way without really learning anything. And in this context, I think AI scientists are falling short in developing camouflaged systems for the detection of AI-generated texts, which educators could use to rapidly and confidently tell if, for example, a homework assignment was done with an LLM. Yes, there are some watermarking and similar systems out there (which I will cover some day!), but I haven’t seen them deployed at large in ways that educators can easily utilize.
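For readers curious about how watermark detection works in principle, here is a toy version of the published “green list” idea: a watermarking generator biases sampling toward tokens whose hash, seeded by the previous token, falls in a designated “green” set, and a detector computes a z-score for how over-represented green tokens are. This sketch hashes token pairs directly, which is a simplification; real schemes partition the model’s vocabulary and require cooperation from the model provider.

```python
import hashlib
import math

def green_fraction_z(tokens: list[str], green_ratio: float = 0.5) -> float:
    """Z-score for how often each token falls in the 'green list' seeded by
    its predecessor -- a toy version of green-list watermark detection.
    A watermarking generator would have biased sampling toward green tokens;
    plain human text should score near z = 0.
    """
    hits = 0
    for prev, tok in zip(tokens, tokens[1:]):
        # Hash (prev, tok) to a pseudo-random bit; a real scheme hashes
        # prev to seed a partition of the whole vocabulary.
        h = hashlib.sha256(f"{prev}|{tok}".encode()).digest()
        if h[0] / 255.0 < green_ratio:
            hits += 1
    n = len(tokens) - 1
    expected = n * green_ratio
    sd = math.sqrt(n * green_ratio * (1 - green_ratio))
    return (hits - expected) / sd if sd else 0.0

print(green_fraction_z("the student wrote this essay without any help".split()))
```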

Conclusion: Towards an Evidence-Informed Integration of AI in Education

The meta-analysis I’ve covered here for you provides a critical, data-driven contribution to the discourse on AI in education. It confirms the substantial potential of LLMs, particularly ChatGPT in these studies, to enhance student learning performance and positively influence learning perception and higher-order thinking. However, the study also powerfully illustrates that the effectiveness of these tools is not uniform but is significantly moderated by contextual factors and the nature of their integration into the learning process.

For the AI and data science community, these findings serve as both an affirmation and a challenge. The affirmation lies in the demonstrated efficacy of LLM technology. The challenge resides in harnessing this potential through thoughtful, evidence-informed design that moves beyond generic applications towards sophisticated, adaptive, and pedagogically sound educational tools. The path forward requires a continued commitment to rigorous research and a nuanced understanding of the complex interplay between AI, pedagogy, and human learning.

References

Here is the paper by Wang and Fan:

Wang, J. & Fan, W. The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis. Humanities and Social Sciences Communications 12, 621 (2025).

If you liked this, check out my TDS profile.
