• UT Austin’s Texas Immersive Institute spends semester in HTC Viverse

    UT Austin’s Texas Immersive Institute collaborated with HTC to bring its students’ work into the Viverse for one semester.
    venturebeat.com
  • Google’s Jules aims to out-code Codex in battle for the AI developer stack

    Google released Jules, its coding agent, into beta as autonomous coding agents quickly gain market share.
    venturebeat.com
  • Google’s Gemini AI is coming to Chrome

    Google is adding its Gemini AI assistant to Chrome, the company announced at Google I/O on Tuesday. Initially, Gemini will be able to “clarify complex information on any webpage you’re reading or summarize information,” according to a blog post from Google Labs and Gemini VP Josh Woodward. Google envisions that Gemini in Chrome will later “work across multiple tabs and navigate websites on your behalf.”

    I saw a demo during a briefing ahead of Tuesday’s announcement. In Chrome, you’ll see a little sparkle icon in the top right corner. Click that and a Gemini chatbot window will open — it’s a floating UI that you can move and resize. From there, you can ask questions about the website.

    In the demo, Charmaine D’Silva, a director of product management on the Chrome team, opened a page for a sleeping bag at REI and clicked on a suggested Gemini prompt to list the bag’s key features. Gemini read the entire page and listed a quick summary of the bag. D’Silva then asked if the sleeping bag was a good option for camping in Maine, and Gemini in Chrome responded by pulling information from the REI page and the web.

    After that, D’Silva went to a shopping page on another retailer’s website for a different sleeping bag and asked Gemini to compare the two sleeping bags. Gemini did that and included a comparison table.

    You’ll initially be able to keep a conversation going with Gemini as you navigate from tab to tab. But “later in the year,” Gemini in Chrome will let you select multiple tabs at once and ask a question about all of them.

    D’Silva also showed a demo of a feature that will be available in the future: using Gemini to navigate websites. In the demo, D’Silva pulled up Gemini Live in Chrome to help navigate a recipe site. D’Silva asked Gemini to scroll to the ingredients, and the AI zipped to that part of the page. It also responded when D’Silva asked for help converting the required amount of sugar from cups to grams.

    In Google’s selected demos, Gemini in Chrome seems like it could occasionally be useful, especially with comparison tables or in-the-moment ingredient conversions. I’d rather just read the website or do my own research instead of reading Gemini’s AI summaries, especially since AI can hallucinate incorrect information.

    Gemini in Chrome is launching on Wednesday. It will initially release on Windows and macOS in early access to users 18 or older who use English as their language. It will be available to people who subscribe to Google’s AI Pro and Ultra subscriptions or users of Chrome’s beta, canary, and dev channels, Parisa Tabriz, Google’s VP and GM of Chrome, said in the briefing.

    As for bringing Gemini to mobile Chrome, “it’s an area that we’ll think about,” Tabriz says, but right now, the company is “very focused on desktop.”

    Correction, May 20th: Gemini in Chrome can keep a conversation going as you move from tab to tab; it doesn’t only work across two tabs, as we initially reported.
    www.theverge.com
  • What the Most Detailed Peer-Reviewed Study on AI in the Classroom Taught Us

    The rapid proliferation and superb capabilities of widely available LLMs have ignited intense debate within the educational sector. On one side, they offer students a 24/7 tutor who is always available to help; on the other, of course, students can use LLMs to cheat. I’ve seen both sides of the coin with my students; yes, even the bad side, and even at the university level.

    While the potential benefits and problems of LLMs in education are widely discussed, there has been a critical need for robust, empirical evidence to guide the integration of these technologies into classrooms, curricula, and studies in general. Moving beyond anecdotal accounts and rather limited studies, a recent work titled “The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis” offers one of the most comprehensive quantitative assessments to date. The article, by Jin Wang and Wenxiang Fan from the Chinese Education Modernization Research Institute of Hangzhou Normal University, was published this month in the journal Humanities and Social Sciences Communications from the Nature Publishing Group. It is as complex as it is detailed, so here I will delve into its findings, touch on its methodology, and draw out the implications for those developing and deploying AI in educational contexts.

    Into it: Quantifying ChatGPT’s Impact on Student Learning

    The study by Wang and Fan is a meta-analysis that synthesizes data from 51 research papers published between November 2022 and February 2025, examining the impact of ChatGPT on three crucial student outcomes: learning performance, learning perception, and higher-order thinking. For AI practitioners and data scientists, this meta-analysis provides a valuable, evidence-based lens through which to evaluate current LLM capabilities and inform the future development of educational technologies.

    The primary research question sought to determine the overall effectiveness of ChatGPT across the three key educational outcomes. The meta-analysis yielded statistically significant and noteworthy results:

    Regarding learning performance, data from 44 studies indicated a large positive impact attributable to ChatGPT usage. On average, students who integrated ChatGPT into their learning processes demonstrated significantly better academic outcomes than control groups.

    For learning perception, encompassing students’ attitudes, motivation, and engagement, analysis of 19 studies revealed a moderate but significant positive impact. This implies that ChatGPT can contribute to a more favorable learning experience from the student’s perspective, despite the a priori limitations and problems associated with a tool that students can use to cheat.

    Similarly, the impact on higher-order thinking skills—such as critical analysis, problem-solving, and creativity—was also found to be moderately positive, based on 9 studies. It is good news, then, that ChatGPT can support the development of these crucial cognitive abilities, although its influence is clearly not as pronounced as on direct learning performance.
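To make the “pooled effect size” language above concrete, here is a minimal sketch of the random-effects machinery (DerSimonian–Laird) that meta-analyses of this kind typically rest on. The per-study effect sizes and standard errors below are invented for illustration; they are not Wang and Fan’s data.

```python
import math

# Each study contributes an effect size (Hedges' g) and its standard error.
# These numbers are ILLUSTRATIVE, not taken from the paper.
studies = [(0.9, 0.20), (0.6, 0.15), (1.1, 0.25), (0.4, 0.18)]

def pooled_effect(studies):
    """DerSimonian-Laird random-effects pooled estimate and its SE."""
    w = [1 / se**2 for _, se in studies]                  # fixed-effect weights
    g = [gi for gi, _ in studies]
    fixed = sum(wi * gi for wi, gi in zip(w, g)) / sum(w)
    # Cochran's Q measures heterogeneity; tau^2 is between-study variance
    q = sum(wi * (gi - fixed) ** 2 for wi, gi in zip(w, g))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)
    # random-effects weights shrink toward equality as tau^2 grows
    w_re = [1 / (se**2 + tau2) for _, se in studies]
    pooled = sum(wi * gi for wi, gi in zip(w_re, g)) / sum(w_re)
    return pooled, math.sqrt(1 / sum(w_re))

g, se = pooled_effect(studies)
print(f"pooled g = {g:.2f}, 95% CI half-width = {1.96 * se:.2f}")
```

With these toy inputs the pooled g lands around 0.7, in the “large effect” territory the paper reports for learning performance.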

    How Different Factors Affect Learning With ChatGPT

    Beyond overall efficacy, Wang and Fan investigated how various study characteristics affected ChatGPT’s impact on learning. Let me summarize for you the core results.

    First, there was a strong effect of the type of course. The largest effect was observed in courses that involved the development of skills and competencies, followed closely by STEM and related subjects, and then by language learning/academic writing.

    The course’s learning model also played a critical role in modulating how much ChatGPT assisted students. Problem-based learning saw a particularly strong potentiation by ChatGPT, yielding a very large effect size. Personalized learning contexts also showed a large effect, while project-based learning demonstrated a smaller, though still positive, effect.

    The duration of ChatGPT use was also an important modulator of its effect on learning performance. Short durations on the order of a single week produced small effects, while extended use over 4–8 weeks had the strongest impact, which did not grow much more if usage was extended even further. This suggests that sustained interaction and familiarity may be crucial for cultivating positive responses to LLM-assisted learning.

    Interestingly, the students’ grade levels, the specific role played by ChatGPT in the activity, and the area of application did not significantly affect learning performance across the analyzed studies.

    Other factors, including grade level, type of course, learning model, the specific role adopted by ChatGPT, and the area of application, did not significantly moderate the impact on learning perception.
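The moderator comparisons above boil down to subgroup meta-analysis: pool each subgroup separately, then test whether the subgroup means differ more than sampling error allows (a Q-between test). A toy sketch, again with invented numbers rather than the paper’s data, and using fixed-effect weights for simplicity:

```python
# Hypothetical subgroups echoing the learning-model moderator; the
# (effect size, standard error) pairs are made up for illustration.
subgroups = {
    "problem-based": [(1.1, 0.25), (0.9, 0.30)],
    "project-based": [(0.4, 0.20), (0.3, 0.22)],
}

def fixed_pool(studies):
    """Fixed-effect pooled estimate and total weight for one subgroup."""
    w = [1 / se**2 for _, se in studies]
    g = sum(wi * gi for wi, (gi, _) in zip(w, studies)) / sum(w)
    return g, sum(w)

# Overall pooled effect across all studies
pooled_all, _ = fixed_pool([s for ss in subgroups.values() for s in ss])

# Q_between: weighted squared deviation of subgroup means from the overall mean;
# compared against a chi-squared distribution with (k - 1) degrees of freedom.
q_between = sum(
    w * (g - pooled_all) ** 2
    for g, w in (fixed_pool(ss) for ss in subgroups.values())
)
print(f"Q_between = {q_between:.2f} on {len(subgroups) - 1} df")
```

Here Q_between exceeds the 3.84 chi-squared cutoff for 1 degree of freedom, so the toy subgroups differ significantly, which is the same logic behind the paper’s claim that the learning model moderates ChatGPT’s effect.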

    The study further showed that when ChatGPT functioned as an intelligent tutor, providing personalized guidance and feedback, its impact on fostering higher-order thinking was most pronounced.

    Implications for the Development of AI-Based Educational Technologies

    The findings from Wang & Fan’s meta-analysis carry substantial implications for the design, development, and strategic deployment of AI in educational settings:

    First, consider strategic scaffolding for deeper cognition. The impact on the development of thinking skills was somewhat lower than on performance, which suggests that LLMs do not inherently cultivate deep critical thought, even if they have a positive global effect on learning. Therefore, AI-based educational tools should integrate explicit scaffolding mechanisms that foster thinking processes, guiding students from knowledge acquisition towards higher-level analysis, synthesis, and evaluation in parallel to the AI system’s direct help.

    Thus, the implementation of AI tools in education must be framed properly, and as we saw above this framing will depend on the exact type and content of the course, the learning model one wishes to apply, and the available time. One particularly interesting setup would be one in which the AI tool supports inquiry, hypothesis testing, and collaborative problem-solving. Note, though, that the findings on optimal duration imply the need for onboarding strategies and adaptive engagement techniques to maximize impact and mitigate potential over-reliance.

    The superior impact documented when ChatGPT functions as an intelligent tutor highlights a key direction for AI in education. Developing LLM-based systems that can provide adaptive feedback, pose diagnostic and reflective questions, and guide learners through complex cognitive tasks is paramount. This requires moving beyond simple Q&A capabilities towards more sophisticated conversational AI and pedagogical reasoning.
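One concrete way to push an LLM from “answer machine” toward the intelligent-tutor role is to wrap every student question in a staged tutoring frame. The sketch below is my own illustration of that idea; the stage names and prompt wording are hypothetical, not from the paper or any particular product.

```python
# Tutoring stages that force diagnosis and attempts before answers.
# Both the stages and the prompt template are illustrative assumptions.
STAGES = [
    "Ask one diagnostic question to check what the student already knows.",
    "Give a hint that points at the method, not the answer.",
    "Ask the student to attempt the next step themselves.",
    "Only after an attempt, confirm or correct it and explain why.",
]

def scaffolded_prompt(student_question: str, stage: int) -> str:
    """Build the system prompt for the current tutoring stage."""
    stage = min(stage, len(STAGES) - 1)  # clamp to the final stage
    return (
        "You are a tutor. Never give the final answer outright.\n"
        f"Current instruction: {STAGES[stage]}\n"
        f"Student's question: {student_question}"
    )

print(scaffolded_prompt("How do I integrate x * e^x?", stage=0))
```

The point is that the pedagogical logic lives in application code that advances the stage counter as the dialogue progresses, rather than hoping a single generic prompt produces tutoring behavior.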

    On top of that, there are a few non-trivial issues to work on. While LLMs excel at information delivery and task assistance, enhancing their impact on affective domains and advanced cognitive skills requires better interaction designs. Incorporating elements that foster student agency, provide meaningful feedback, and manage cognitive load effectively is a crucial consideration.

    Limitations and Where Future Research Should Go

    The authors of the study prudently acknowledge some limitations, which also illuminate avenues for future research. Although the pooled sample is the largest analyzed to date, it is still small overall, and very small for some specific questions. More research needs to be done, and a new meta-analysis will probably be required when more data becomes available. A difficult point, and this is my personal addition, is that as the technology progresses so fast, results might, unfortunately, become obsolete very rapidly.

    Another limitation in the studies analyzed in this paper is that they are largely biased toward college-level students, with very limited data on primary education.

    Wang and Fan also discuss what AI researchers, data scientists, and pedagogues should consider in future research. First, they should try to disaggregate effects based on specific LLM versions, a point that is critical because these models evolve so fast. Second, they should study how students and teachers typically “prompt” the LLMs, and then investigate the impact of differential prompting on the final learning outcomes. Third, they need to develop and evaluate adaptive scaffolding mechanisms embedded within LLM-based educational tools. Finally, and over the long term, we need to explore the effects of LLM integration on knowledge retention and the development of self-regulated learning skills.

    Personally, I would add that studies need to dig deeper into how students use LLMs to cheat: not always deliberately, but also by seeking shortcuts that lead them astray or let them finish assignments without really learning anything. In this context, I think AI scientists are falling short in developing unobtrusive systems for the detection of AI-generated text that educators can use to tell rapidly and confidently whether, for example, a piece of homework was done with an LLM. Yes, there are some watermarking and similar systems out there, but I haven’t seen them deployed at scale in ways that educators can easily utilize.
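For the curious, the best-known class of LLM watermarks works by biasing generation toward a pseudo-random “green list” of tokens seeded by the preceding context; detection then reduces to counting green tokens and computing a z-score against the 50% expected by chance. The following is a deliberately simplified toy detector, not a faithful implementation of any deployed system.

```python
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign roughly half the vocabulary to a 'green
    list' seeded by the previous token (a toy simplification)."""
    h = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return h[0] % 2 == 0

def watermark_z_score(tokens):
    """z-score of the green-token count against the chance rate of 0.5."""
    n = len(tokens) - 1
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

text = "students often paste homework questions straight into a chatbot".split()
z = watermark_z_score(text)
print(f"z = {z:.2f}")  # typically near 0 for unwatermarked text
```

Watermarked text, where the generator preferentially sampled green tokens, would yield a large positive z-score; ordinary human text should hover near zero. The practical gap I am pointing at is packaging this kind of statistic into tools a teacher can actually run on a submitted essay.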

    Conclusion: Towards an Evidence-Informed Integration of AI in Education

    The meta-analysis I’ve covered here for you provides a critical, data-driven contribution to the discourse on AI in education. It confirms the substantial potential of LLMs, particularly ChatGPT in these studies, to enhance student learning performance and positively influence learning perception and higher-order thinking. However, the study also powerfully illustrates that the effectiveness of these tools is not uniform but is significantly moderated by contextual factors and the nature of their integration into the learning process.

    For the AI and data science community, these findings serve as both an affirmation and a challenge. The affirmation lies in the demonstrated efficacy of LLM technology. The challenge resides in harnessing this potential through thoughtful, evidence-informed design that moves beyond generic applications towards sophisticated, adaptive, and pedagogically sound educational tools. The path forward requires a continued commitment to rigorous research and a nuanced understanding of the complex interplay between AI, pedagogy, and human learning.

    References

    Here is the paper by Wang and Fan:

    Wang, J., & Fan, W. (2025). The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis. Humanities and Social Sciences Communications, 12, 621.

    If you liked this, check out my TDS profile.

    The post What the Most Detailed Peer-Reviewed Study on AI in the Classroom Taught Us appeared first on Towards Data Science.
    #what #most #detailed #peerreviewed #study
    What the Most Detailed Peer-Reviewed Study on AI in the Classroom Taught Us
    The rapid proliferation and superb capabilities of widely available LLMs has ignited intense debate within the educational sector. On one side they offer students a 24/7 tutor who is always available to help; but then of course students can use LLMs to cheat! I’ve seen both sides of the coin with my students; yes, even the bad side and even at the university level. While the potential benefits and problems of LLMs in education are widely discussed, a critical need existed for robust, empirical evidence to guide the integration of these technologies in the classroom, curricula, and studies in general. Moving beyond anecdotal accounts and rather limited studies, a recent work titled “The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis” offers one of the most comprehensive quantitative assessments to date. The article, by Jin Wang and Wenxiang Fan from the Chinese Education Modernization Research Institute of Hangzhou Normal University, was published this month in the journal Humanities and Social Sciences Communications from the Nature Publishing group. It is as complex as detailed, so here I will delve into the findings reported in it, touching also on the methodology and delving into the implications for those developing and deploying AI in educational contexts. Into it: Quantifying ChatGPT’s Impact on Student Learning The study by Wang and Fan is a meta-analysis that synthesizes data from 51 research papers published between November 2022 and February 2025, examining the impact of ChatGPT on three crucial student outcomes: learning performance, learning perception, and higher-order thinking. For AI practitioners and data scientists, this meta-analysis provides a valuable, evidence-based lens through which to evaluate current LLM capabilities and inform the future development of Education technologies. 
The primary research question sought to determine the overall effectiveness of ChatGPT across the three key educational outcomes. The meta-analysis yielded statistically significant and noteworthy results: Regarding learning performance, data from 44 studies indicated a large positive impact attributable to ChatGPT usage. In fact it turned out that, on average, students integrating ChatGPT into their learning processes demonstrated significantly improved academic outcomes compared to control groups. For learning perception, encompassing students’ attitudes, motivation, and engagement, analysis of 19 studies revealed a moderately but significant positive impact. This implies that ChatGPT can contribute to a more favorable learning experience from the student’s perspective, despite the a priori limitations and problems associated to a tool that students can use to cheat. Similarly, the impact on higher-order thinking skills—such as critical analysis, problem-solving, and creativity—was also found to be moderately positive, based on 9 studies. It is good news then that ChatGPT can support the development of these crucial cognitive abilities, although its influence is clearly not as pronounced as on direct learning performance. How Different Factors Affect Learning With ChatGPT Beyond overall efficacy, Wang and Fan investigated how various study characteristics affected ChatGPT’s impact on learning. Let me summarize for you the core results. First, there was a strong effect of the type of course. The largest effect was observed in courses that involved the development of skills and competencies, followed closely by STEMand related subjects, and then by language learning/academic writing. The course’s learning model also played a critical role in modulating how much ChatGPT assisted students. Problem-based learning saw a particularly strong potentiation by ChatGPT, yielding a very large effect size. 
Personalized learning contexts also showed a large effect, while project-based learning demonstrated a smaller, though still positive, effect. The duration of ChatGPT use was also an important modulator of ChatGPT’s effect on learning performance. Short durations in the order of a single week produced small effects, while extended use over 4–8 weeks had the strongest impact, which did not grow much more if the usage was extended even further. This suggests that sustained interaction and familiarity may be crucial for cultivating positive affective responses to LLM-assisted learning. Interestingly, the students’ grade levels, the specific role played by ChatGPT in the activity, and the area of application did not affect learning performance significantly, in any of the analyzed studies. Other factors, including grade level, type of course, learning model, the specific role adopted by ChatGPT, and the area of application, did not significantly moderate the impact on learning perception. The study further showed that when ChatGPT functioned as an intelligent tutor, providing personalized guidance and feedback, its impact on fostering higher-order thinking was most pronounced. Implications for the Development of AI-Based Educational Technologies The findings from Wang & Fan’s meta-analysis carry substantial implications for the design, development, and strategic deployment of AI in educational settings: First of all, regarding the strategic scaffolding for deeper cognition. The impact on the development of thinking skills was somewhat lower than on performance, which means that LLMs are not inherently cultivators of deep critical thought, even if they do have a positive global effect on learning. 
Therefore, AI-based educational tools should integrate explicit scaffolding mechanisms that foster the development of thinking processes, to guide students from knowledge acquisition towards higher-level analysis, synthesis, and evaluation in parallel to the AI system’s direct help. Thus, the implementation of AI tools in education must be framed properly, and as we saw above this framing will depend on the exact type and content of the course, the learning model one wishes to apply, and the available time. One particularly interesting setup would be that where the AI tool supports inquiry, hypothesis testing, and collaborative problem-solving. Note though that the findings on optimal duration imply the need for onboarding strategies and adaptive engagement techniques to maximize impact and mitigate potential over-reliance. The superior impact documented when ChatGPT functions as an intelligent tutor highlights a key direction for AI in education. Developing LLM-based systems that can provide adaptive feedback, pose diagnostic and reflective questions, and guide learners through complex cognitive tasks is paramount. This requires moving beyond simple Q&A capabilities towards more sophisticated conversational AI and pedagogical reasoning. On top, there are a few non-minor issues to work on. While LLMs excel at information delivery and task assistance, enhancing their impact on affective domainsand advanced cognitive skills requires better interaction designs. Incorporating elements that foster student agency, provide meaningful feedback, and manage cognitive load effectively are crucial considerations. Limitations and Where Future Research Should Go The authors of the study prudently acknowledge some limitations, which also illuminate avenues for future research. Although the total sample size was the largest ever, it is still small, and very small for some specific questions. 
More research needs to be done, and a new meta-analysis will probably be required when more data becomes available. A difficult point, and this is my personal addition, is that as the technology progresses so fast, results might become obsolete very rapidly, unfortunately. Another limitation in the studies analyzed in this paper is that they are largely biased toward college-level students, with very limited data on primary education. Wang and Fan also discuss what AI, data science, and pedagogues should consider in future research. First, they should try to disaggregate effects based on specific LLM versions, a point that is critical because they evolve so fast. Second, they should study how students and teachers typically “prompt” the LLMs, and then investigate the impact of differential prompting on the final learning outcomes. Then, somehow they need to develop and evaluate adaptive scaffolding mechanisms embedded within LLM-based educational tools. Finally, and over a long term, we need to explore the effects of LLM integration on knowledge retention and the development of self-regulated learning skills. Personally, I add at this point, I am of the opinion that studies need to dig more into how students use LLMs to cheat, not necessarily willingly but possibly also by seeking for shortcuts that lead them wrong or allow them to get out of the way but without really learning anything. And in this context, I think AI scientists are falling short in developing camouflaged systems for the detection of AI-generated texts, that they can use to rapidly and confidently tell if, for example, a homework was done with an LLM. Yes, there are some watermarking and similar systems out therebut I haven’t seem them deployed at large in ways that educators can easily utilize. Conclusion: Towards an Evidence-Informed Integration of AI in Education The meta-analysis I’ve covered here for you provides a critical, data-driven contribution to the discourse on AI in education. 
It confirms the substantial potential of LLMs, particularly ChatGPT in these studies, to enhance student learning performance and positively influence learning perception and higher-order thinking. However, the study also powerfully illustrates that the effectiveness of these tools is not uniform but is significantly moderated by contextual factors and the nature of their integration into the learning process. For the AI and data science community, these findings serve as both an affirmation and a challenge. The affirmation lies in the demonstrated efficacy of LLM technology. The challenge resides in harnessing this potential through thoughtful, evidence-informed design that moves beyond generic applications towards sophisticated, adaptive, and pedagogically sound educational tools. The path forward requires a continued commitment to rigorous research and a nuanced understanding of the complex interplay between AI, pedagogy, and human learning. References Here is the paper by Wang and Fan: The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis. Jin Wang & Wenxiang Fan Humanities and Social Sciences Communications volume 12, 621 If you liked this, check out my TDS profile. The post What the Most Detailed Peer-Reviewed Study on AI in the Classroom Taught Us appeared first on Towards Data Science. #what #most #detailed #peerreviewed #study
    towardsdatascience.com
    The rapid proliferation and superb capabilities of widely available LLMs have ignited intense debate within the educational sector. On one side, they offer students a 24/7 tutor that is always available to help; on the other, of course, students can use LLMs to cheat! I’ve seen both sides of the coin with my students; yes, even the bad side, and even at the university level. While the potential benefits and problems of LLMs in education are widely discussed, a critical need existed for robust, empirical evidence to guide the integration of these technologies into classrooms, curricula, and studies in general. Moving beyond anecdotal accounts and rather limited studies, a recent work titled “The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis” offers one of the most comprehensive quantitative assessments to date. The article, by Jin Wang and Wenxiang Fan from the Chinese Education Modernization Research Institute of Hangzhou Normal University, was published this month in the journal Humanities and Social Sciences Communications from the Nature Publishing group. It is as complex as it is detailed, so here I will delve into the findings reported in it, touching also on the methodology and the implications for those developing and deploying AI in educational contexts. Into it: Quantifying ChatGPT’s Impact on Student Learning The study by Wang and Fan is a meta-analysis that synthesizes data from 51 research papers published between November 2022 and February 2025, examining the impact of ChatGPT on three crucial student outcomes: learning performance, learning perception, and higher-order thinking. For AI practitioners and data scientists, this meta-analysis provides a valuable, evidence-based lens through which to evaluate current LLM capabilities and inform the future development of educational technologies. 
The primary research question sought to determine the overall effectiveness of ChatGPT across the three key educational outcomes. The meta-analysis yielded statistically significant and noteworthy results: Regarding learning performance, data from 44 studies indicated a large positive impact attributable to ChatGPT usage. In fact, it turned out that, on average, students integrating ChatGPT into their learning processes demonstrated significantly improved academic outcomes compared to control groups. For learning perception, encompassing students’ attitudes, motivation, and engagement, analysis of 19 studies revealed a moderate but significant positive impact. This implies that ChatGPT can contribute to a more favorable learning experience from the student’s perspective, despite the a priori limitations and problems associated with a tool that students can use to cheat. Similarly, the impact on higher-order thinking skills—such as critical analysis, problem-solving, and creativity—was also found to be moderately positive, based on 9 studies. It is good news then that ChatGPT can support the development of these crucial cognitive abilities, although its influence is clearly not as pronounced as on direct learning performance. How Different Factors Affect Learning With ChatGPT Beyond overall efficacy, Wang and Fan investigated how various study characteristics affected ChatGPT’s impact on learning. Let me summarize for you the core results. First, there was a strong effect of the type of course. The largest effect was observed in courses that involved the development of skills and competencies, followed closely by STEM and related subjects, and then by language learning/academic writing. The course’s learning model also played a critical role in modulating how much ChatGPT assisted students. Problem-based learning saw a particularly strong potentiation by ChatGPT, yielding a very large effect size. 
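As a side note on the mechanics behind numbers like these: meta-analyses typically pool per-study effect sizes with a random-effects model. The sketch below implements the standard DerSimonian-Laird estimator over hypothetical per-study effect sizes and variances; the numbers are illustrative assumptions, not data from Wang and Fan's paper.

```python
import math

def pool_random_effects(effects, variances):
    """Pool per-study effect sizes (e.g. Hedges' g) with DerSimonian-Laird tau^2."""
    k = len(effects)
    w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
    sum_w = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum_w
    # Cochran's Q statistic measures heterogeneity across studies
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum_w - sum(wi ** 2 for wi in w) / sum_w
    # Between-study variance tau^2, truncated at zero
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights fold tau^2 into each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical studies: (effect size, sampling variance)
effects = [0.9, 0.7, 1.1, 0.5, 0.8]
variances = [0.04, 0.05, 0.06, 0.03, 0.05]
g, se, tau2 = pool_random_effects(effects, variances)
print(f"pooled g = {g:.2f}, 95% CI = [{g - 1.96 * se:.2f}, {g + 1.96 * se:.2f}]")
```

A "large" pooled effect in the conventional reading is roughly g above 0.8, "moderate" around 0.5, which is the vocabulary the paper's summary uses.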
Personalized learning contexts also showed a large effect, while project-based learning demonstrated a smaller, though still positive, effect. The duration of ChatGPT use was another important modulator of its effect on learning performance. Short durations, on the order of a single week, produced small effects, while extended use over 4–8 weeks had the strongest impact, an effect that did not grow much further when usage was extended beyond that. This suggests that sustained interaction and familiarity may be crucial for cultivating positive affective responses to LLM-assisted learning. Interestingly, the students’ grade levels, the specific role played by ChatGPT in the activity, and the area of application did not significantly affect learning performance in any of the analyzed studies. Other factors, including grade level, type of course, learning model, the specific role adopted by ChatGPT, and the area of application, did not significantly moderate the impact on learning perception. The study further showed that when ChatGPT functioned as an intelligent tutor, providing personalized guidance and feedback, its impact on fostering higher-order thinking was most pronounced. Implications for the Development of AI-Based Educational Technologies The findings from Wang & Fan’s meta-analysis carry substantial implications for the design, development, and strategic deployment of AI in educational settings. First of all, consider strategic scaffolding for deeper cognition. The impact on the development of thinking skills was somewhat lower than on performance, which suggests that LLMs are not inherently cultivators of deep critical thought, even if they do have a positive global effect on learning. 
Therefore, AI-based educational tools should integrate explicit scaffolding mechanisms that foster the development of thinking processes, guiding students from knowledge acquisition towards higher-level analysis, synthesis, and evaluation in parallel to the AI system’s direct help. Thus, the implementation of AI tools in education must be framed properly, and as we saw above, this framing will depend on the exact type and content of the course, the learning model one wishes to apply, and the available time. One particularly interesting setup would be one in which the AI tool supports inquiry, hypothesis testing, and collaborative problem-solving. Note, though, that the findings on optimal duration imply the need for onboarding strategies and adaptive engagement techniques to maximize impact and mitigate potential over-reliance. The superior impact documented when ChatGPT functions as an intelligent tutor highlights a key direction for AI in education. Developing LLM-based systems that can provide adaptive feedback, pose diagnostic and reflective questions, and guide learners through complex cognitive tasks is paramount. This requires moving beyond simple Q&A capabilities towards more sophisticated conversational AI and pedagogical reasoning. On top of that, there are a few non-trivial issues to work on. While LLMs excel at information delivery and task assistance (leading to high performance gains), enhancing their impact on affective domains (perception) and advanced cognitive skills requires better interaction designs. Incorporating elements that foster student agency, provide meaningful feedback, and manage cognitive load effectively is a crucial consideration. Limitations and Where Future Research Should Go The authors of the study prudently acknowledge some limitations, which also illuminate avenues for future research. Although the total sample size was the largest to date, it is still small, and very small for some specific questions. 
More research needs to be done, and a new meta-analysis will probably be required when more data becomes available. A difficult point, and this is my personal addition, is that as the technology progresses so fast, results might become obsolete very rapidly, unfortunately. Another limitation of the studies analyzed in this paper is that they are largely biased toward college-level students, with very limited data on primary education. Wang and Fan also discuss what AI researchers, data scientists, and pedagogues should consider in future research. First, they should try to disaggregate effects based on specific LLM versions, a point that is critical because they evolve so fast. Second, they should study how students and teachers typically “prompt” the LLMs, and then investigate the impact of differential prompting on the final learning outcomes. Then, somehow, they need to develop and evaluate adaptive scaffolding mechanisms embedded within LLM-based educational tools. Finally, and over the long term, we need to explore the effects of LLM integration on knowledge retention and the development of self-regulated learning skills. Personally, I would add at this point that studies need to dig more into how students use LLMs to cheat, not necessarily willingly but possibly also by seeking shortcuts that lead them astray or let them get the work out of the way without really learning anything. And in this context, I think AI scientists are falling short in developing camouflaged systems for the detection of AI-generated texts, which educators could use to rapidly and confidently tell if, for example, a homework assignment was done with an LLM. Yes, there are some watermarking and similar systems out there (which I will cover some day!) but I haven’t seen them deployed at large in ways that educators can easily utilize. 
Conclusion: Towards an Evidence-Informed Integration of AI in Education The meta-analysis I’ve covered here for you provides a critical, data-driven contribution to the discourse on AI in education. It confirms the substantial potential of LLMs, particularly ChatGPT in these studies, to enhance student learning performance and positively influence learning perception and higher-order thinking. However, the study also powerfully illustrates that the effectiveness of these tools is not uniform but is significantly moderated by contextual factors and the nature of their integration into the learning process. For the AI and data science community, these findings serve as both an affirmation and a challenge. The affirmation lies in the demonstrated efficacy of LLM technology. The challenge resides in harnessing this potential through thoughtful, evidence-informed design that moves beyond generic applications towards sophisticated, adaptive, and pedagogically sound educational tools. The path forward requires a continued commitment to rigorous research and a nuanced understanding of the complex interplay between AI, pedagogy, and human learning. References Here is the paper by Wang and Fan: The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis. Jin Wang & Wenxiang Fan Humanities and Social Sciences Communications volume 12, 621 (2025) If you liked this, check out my TDS profile. The post What the Most Detailed Peer-Reviewed Study on AI in the Classroom Taught Us appeared first on Towards Data Science.
  • Build 2025: How Microsoft wants to shift software development into the agentic era with GitHub Copilot

    Software development is at the heart of the Microsoft Build 2025 announcements. And it would be a mistake to overlook one of the...
    www.usine-digitale.fr
  • Google talked AI for 2 hours. It didn't mention hallucinations.

    The era of AI Search is officially here.
    Credit: Google

    This year, Google I/O 2025 had one focus: artificial intelligence. We've already covered all of the biggest news to come out of the annual developers conference: a new AI video generation tool called Flow. A $250 AI Ultra subscription plan. Tons of new changes to Gemini. A virtual shopping try-on feature. And critically, the launch of the search tool AI Mode to all users in the United States. Yet over nearly two hours of Google leaders talking about AI, one word we didn't hear was "hallucination".

    Hallucinations remain one of the most stubborn and concerning problems with AI models. The term refers to invented facts and inaccuracies that large language models "hallucinate" in their replies. And according to the big AI brands' own metrics, hallucinations are getting worse — with some models hallucinating more than 40 percent of the time. But if you were watching Google I/O 2025, you wouldn't know this problem existed. You'd think models like Gemini never hallucinate; you would certainly be surprised to see the warning appended to every Google AI Overview. ("AI responses may include mistakes.")

    The closest Google came to acknowledging the hallucination problem came during a segment of the presentation on AI Mode and Gemini's Deep Search capabilities. The model would check its own work before delivering an answer, we were told — but without more detail on this process, it sounds more like the blind leading the blind than genuine fact-checking. For AI skeptics, the degree of confidence Silicon Valley has in these tools seems divorced from actual results. Real users notice when AI tools fail at simple tasks like counting, spellchecking, or answering questions like "Will water freeze at 27 degrees Fahrenheit?" Google was eager to remind viewers that its newest AI model, Gemini 2.5 Pro, sits atop many AI leaderboards. But when it comes to truthfulness and the ability to answer simple questions, AI chatbots are graded on a curve. Gemini 2.5 Pro is Google's most intelligent AI model (according to Google), yet it scores just 52.9 percent on the Functionality SimpleQA benchmarking test. According to an OpenAI research paper, the SimpleQA test is "a benchmark that evaluates the ability of language models to answer short, fact-seeking questions."
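For intuition on what a score like 52.9 percent means in practice: a SimpleQA-style number boils down to the fraction of short, fact-seeking questions answered correctly. The sketch below uses naive normalized exact-match grading over made-up questions; the real benchmark uses a model-based grader, so treat this as an illustrative approximation, not OpenAI's grading code.

```python
def normalize(text: str) -> str:
    """Crude normalization: lowercase, drop punctuation, trim whitespace."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def simpleqa_accuracy(predictions, gold):
    """Fraction of short answers that match the gold answer after normalization."""
    correct = sum(normalize(p) == normalize(g) for p, g in zip(predictions, gold))
    return correct / len(gold)

# Hypothetical model outputs vs. gold answers; two of four are hallucinated
gold = ["1969", "Paris", "Marie Curie", "32 degrees Fahrenheit"]
preds = ["1969", "paris", "Pierre Curie", "27 degrees Fahrenheit"]
print(f"accuracy = {simpleqa_accuracy(preds, gold):.1%}")  # → 50.0%
```

A model scoring around 50 percent on questions this simple is exactly why grading "on a curve" matters: the leaderboard position and the factual reliability are two different measurements.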

    A Google representative declined to discuss the SimpleQA benchmark, or hallucinations in general — but did point us to Google's official Explainer on AI Mode and AI Overviews. Here's what it has to say: [AI Mode] uses a large language model to help answer queries and it is possible that, in rare cases, it may sometimes confidently present information that is inaccurate, which is commonly known as 'hallucination.' As with AI Overviews, in some cases this experiment may misinterpret web content or miss context, as can happen with any automated system in Search...We’re also using novel approaches with the model’s reasoning capabilities to improve factuality. For example, in collaboration with Google DeepMind research teams, we use agentic reinforcement learning (RL) in our custom training to reward the model to generate statements it knows are more likely to be accurate (not hallucinated) and also backed up by inputs. Is Google wrong to be optimistic? Hallucinations may yet prove to be a solvable problem, after all. But it seems increasingly clear from the research that hallucinations from LLMs are not a solvable problem right now. That hasn't stopped companies like Google and OpenAI from sprinting ahead into the era of AI Search — and that's likely to be an error-filled era, unless we're the ones hallucinating.

    Timothy Beck Werth
    Tech Editor

    Timothy Beck Werth is the Tech Editor at Mashable, where he leads coverage and assignments for the Tech and Shopping verticals. Tim has over 15 years of experience as a journalist and editor, and he has particular experience covering and testing consumer technology, smart home gadgets, and men’s grooming and style products. Previously, he was the Managing Editor and then Site Director of SPY.com, a men's product review and lifestyle website. As a writer for GQ, he covered everything from bull-riding competitions to the best Legos for adults, and he’s also contributed to publications such as The Daily Beast, Gear Patrol, and The Awl. Tim studied print journalism at the University of Southern California. He currently splits his time between Brooklyn, NY and Charleston, SC. He's currently working on his second novel, a science-fiction book.
    mashable.com
  • Start Saving Some Serious Money With These Early Memorial Day Tech Deals

    Memorial Day presents another outstanding opportunity to find great sales on some of PCMag's top-rated tech and favorite picks. But neither we nor the major retailers out there could wait until the holiday weekend to unleash those amazing deals. We're going to be updating this story throughout the week with the best discounts we can find on headphones, tablets, smartphones, TVs, and every other important piece of tech, all at low, low prices. Laptop Deals A sharp 14-inch AMOLED touch screen, Intel Ultra 5 CPU, and 16GB of RAM make this Samsung Galaxy Book4 Pro a tool of choice for students and working professionals. If you already have Galaxy devices, you can use them in conjunction with the Galaxy Book4 Pro to extend the display and start a project on one device, then seamlessly continue working on another one. This price drop brings the cost down below a grand for a premium computing experience that will be a thrill to Galaxy owners. Click over to our review of the Best Laptops of 2025 for comparison. Robot Vacuum Deals The 360-degree lidar vision creates a precision map of your home. It detects and avoids objects, so you don’t have to spend time picking up before this Shark IQ robot vacuum gets to work. The bagless self-empty base can hold up to 60 days of dirt and debris, and HEPA filtration traps 99.97% of allergens. Thanks to a multisurface brush roll that captures fur, dust, and dander, it's perfect for pet owners. You can get a whopping 50% off this robot vacuum and then kick your feet up while it makes your cleaning routine that much easier. See how we got down and dirty with our reviews of the Best Robot Vacuums of 2025 for more options. Headphone and Earbud Deals These are some "Good" headphones! At least that's what our audio expert, Tim Gideon, said in his 2023 review of the Beats Studio Pro, when he lauded it for delivering punchy bass with bright highs, a comfortable fit, and premium accessories. 
These are built for bass lovers, that's for sure. At their retail price, they're not quite as good as the Bose QuietComfort 45, but we're talking sales here, and with a discount that brings the cost down, these become quite the steal. Your ears win, your wallet wins, everybody wins here. Recommended by Our Editors Listen up, because we've got even more Early Memorial Day Headphone and Earbud Deals for you. Speaker Deals In our review of the JBL Clip 4, our expert enjoyed the crisp, rich audio and fully waterproof design. Those characteristics remain the same in the JBL Clip 5, which holds an impressive 4.8 rating on Amazon. Portable and powerful, this speaker brings the jams wherever you roam, and you can enjoy your music, podcasts, and whatever else via Bluetooth connection for up to 10 hours, depending on volume level. Pick one up with a significant 38% price reduction, and don't forget it comes in several colors, so when you use the built-in carabiner to attach it to your bag or belt loop, it will look stylish, to boot. Pump up the bass with more Early Memorial Day Speaker Deals. Phone Deals "The Google Pixel 9 Pro Fold is an outstanding phone from top to bottom. Google has done an excellent job redesigning the hardware, which is improved in every way," says our expert, Eric Zeman. In our review, we shouted out everything from the size, shape, and weight to the larger and brighter screens to the faster Tensor G4 processor. These reasons all added up to rate this new model as "Excellent." AI features include some interesting "camera tricks" for taking, editing, and sharing your photos. The retail price for an unlocked model is steep, but a discount makes it much more manageable. Home Security Deals This indoor-outdoor home security camera earned our rare "Outstanding" rating and an Editors' Choice award for affordable cams. And right now it's on sale for only... What? You need to know more than that? But of course! 
This affordable bundle includes sharp 2K video and a weatherproof design with built-in spotlights. Plus, you're getting features such as intelligent motion detection, color night vision, and local and cloud storage options that are usually found on more expensive models. If this is your first foray into the home security camera market, you can't go wrong with this deal on the TP-Link Tapo 2K. Keep a few extra bucks in your wallet while keeping things safe with these Early Memorial Day Home Security Deals.
    Start Saving Some Serious Money With These Early Memorial Day Tech Deals
    Memorial Day presents another outstanding opportunity to find great sales on some of PCMag's top-rated tech and favorite picks. But neither we nor the major retailers could wait until the holiday weekend to unleash those deals. We'll be updating this story throughout the week with the best discounts we can find on headphones, tablets, smartphones, TVs, and every other important piece of tech at low, low prices.

    Laptop Deals
    A sharp 14-inch AMOLED touch screen, an Intel Core Ultra 5 CPU, and 16GB of RAM make the Samsung Galaxy Book4 Pro a tool of choice for students and working professionals. If you already have Galaxy devices, you can use them in conjunction with the Galaxy Book4 Pro to extend the display, or start a project on one device and seamlessly continue working on another. This $470 price drop brings the cost below a grand for a premium computing experience that will thrill Galaxy owners. Click over to our review of the Best Laptops of 2025 for comparison.

    Robot Vacuum Deals
    The 360-degree lidar vision of this Shark IQ robot vacuum creates a precision map of your home. It detects and avoids objects, so you don't have to spend time picking up before it gets to work. The bagless self-empty base can hold up to 60 days of dirt and debris, and HEPA filtration traps 99.97% of allergens. Thanks to a multisurface brush roll that captures fur, dust, and dander, it's perfect for pet owners. You can get a whopping 50% off this robot vacuum and then kick your feet up while it makes your cleaning routine that much easier. See how we got down and dirty in our reviews of the Best Robot Vacuums of 2025 for more options.

    Headphone and Earbud Deals
    These are some "Good" headphones! At least that's what our audio expert, Tim Gideon, said in his 2023 review of the Beats Studio Pro, which he lauded for punchy bass with bright highs, a comfortable fit, and premium accessories. These are built for bass lovers, that's for sure. At their retail price, they're not quite as good as the Bose QuietComfort 45, but we're talking sales here, and with a $150 discount that brings the cost just under $200, these become quite the steal. Your ears win, your wallet wins, everybody wins. Listen up, because we've got even more Early Memorial Day Headphone and Earbud Deals for you.

    Speaker Deals
    In our review of the JBL Clip 4, our expert enjoyed the crisp, rich audio and fully waterproof design. Those characteristics remain in the JBL Clip 5, which holds an impressive 4.8 rating on Amazon. Portable and powerful, this speaker brings the jams wherever you roam, and you can enjoy your music, podcasts, and whatever else via a Bluetooth connection for up to 10 hours, depending on volume level. Pick one up with a significant 38% price reduction, and don't forget it comes in several colors, so when you use the built-in carabiner to attach it to your bag or belt loop, it will look stylish, to boot. Pump up the bass with more Early Memorial Day Speaker Deals.

    Phone Deals
    "The Google Pixel 9 Pro Fold is an outstanding phone from top to bottom. Google has done an excellent job redesigning the hardware, which is improved in every way," says our expert, Eric Zeman. In our review, we shouted out everything from the size, shape, and weight to the larger and brighter screens to the faster Tensor G4 processor. These reasons all added up to an "Excellent" rating for this new model. AI features include some interesting "camera tricks" for taking, editing, and sharing your photos. The retail price for an unlocked model is steep, but a $300 discount makes it much more manageable.

    Home Security Deals
    This indoor-outdoor home security camera earned our rare "Outstanding" rating and an Editors' Choice award for affordable cams. And right now it's on sale for only $25. What? You need to know more than that? But of course! This affordable bundle includes sharp 2K video and a weatherproof design with built-in spotlights. Plus, you're getting features such as intelligent motion detection, color night vision, and local and cloud storage options that are usually found on more expensive models. If this is your first foray into the home security camera market, you can't go wrong with this deal on the TP-Link Tapo 2K. Keep a few extra bucks in your wallet while keeping things safe with these Early Memorial Day Home Security Deals.
    me.pcmag.com
  • Google is baking Gemini AI into Chrome

    Microsoft famously brought its Copilot AI to the Edge browser in Windows. Now Google is doing the same with Chrome.
    In a list of announcements that spanned dozens of pages, Google allocated just a single line to the announcement: “Gemini is coming to Chrome, so you can ask questions while browsing the web.”
    Google later clarified what Gemini on Chrome can do: “This first version allows you to easily ask Gemini to clarify complex information on any webpage you’re reading or summarize information,” the company said in a blog post. “In the future, Gemini will be able to work across multiple tabs and navigate websites on your behalf.”
    Other examples of what Gemini can do include generating personal quizzes based on the material in a webpage, or altering what the page suggests, such as a recipe. In the future, Google plans to allow Gemini in Chrome to work across multiple tabs, navigate within websites, and automate tasks.

    Google said that you’ll be able to either talk or type commands to Gemini. To access it, you can use the Alt+G shortcut in Windows.
    Gemini is to Chrome what Copilot is to Edge
    Within Edge, Microsoft uses Copilot to summarize documents and web pages, as well as answer questions about the content within them. Gemini will do the same, but well after Microsoft integrated Copilot — a possible reason for downplaying the announcement.
    Google is also using “Gemini” as a catch-all for various AI functions, much in the same way that Microsoft uses Copilot as an AI brand. Google, like Microsoft, is pushing agentic AI, where various bits of autonomous AI work independently to pursue tasks.
    Google calls this "Project Mariner," and it will appear in mobile and on the desktop. It may surface as "Agent Mode," where a task like searching for an apartment in Austin could be broken down into subtasks like searching rental agencies, and so on. It will be coming soon to "subscribers," Google chief executive Sundar Pichai said in a keynote address at its Google I/O developer conference on Tuesday. Those will include Google's AI Pro as well as the new $250/mo Google Ultra subscription.
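    The agentic pattern described above, where a broad goal is decomposed into smaller subtasks an agent works through in sequence, can be illustrated with a minimal sketch. Everything here is hypothetical: the hard-coded plan and the `plan`/`run_agent` functions are invented for illustration and are not Google APIs; a real agent would use a model to produce the plan and invoke tools (search, browse, form-fill) at each step.

    ```python
    # Minimal sketch of agent-style task decomposition: a goal becomes an
    # ordered list of subtasks, and the agent executes them one by one.
    def plan(goal: str) -> list[str]:
        # A real agent would ask a model to plan; here the plan is
        # hard-coded for the apartment-hunting example.
        return [
            f"search rental listings for: {goal}",
            "filter listings by budget and location",
            "summarize the top matches",
        ]

    def run_agent(goal: str) -> list[str]:
        completed = []
        for step in plan(goal):
            # Each step would normally invoke a tool; here we just
            # record that the step ran.
            completed.append(f"done: {step}")
        return completed

    log = run_agent("apartment in Austin")
    ```

    The point of the decomposition is that each subtask is simple enough to map to a concrete tool call, while the ordering preserves the overall intent of the goal.
    
    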
    You’ll see Gemini appear in Chrome as early as this week, Google executives said — on May 21, a representative clarified. However, you’ll need to be a Gemini subscriber to take advantage of its features, a requirement that Microsoft does not apply with Copilot for Edge. Otherwise, Google will let those who participate in the Google Chrome Beta, Dev, and Canary programs test it out.
    Updated at 12:09 PM PT with additional detail
    www.pcworld.com
  • RTX 5080 Super rumored with 24GB of memory — Same 10,752 CUDA cores as the vanilla variant with a 400W+ TGP

    Nvidia reportedly has an RTX 5080 Super in the making that ups the memory capacity by 50% over the base model from 16GB to 24GB.
    www.tomshardware.com
  • These are the new AI features coming to Google Search

    Aditya Tiwari · Neowin · @TheLazyAvenger · May 20, 2025 17:32 EDT

    Google is back with a truckload of announcements at the Google I/O 2025 developer conference. This year's I/O is primarily focused on AI, as Google hosted a separate event to talk about several Android updates. New AI features are coming to Google Search. Moving on from past blunders, the search giant calls AI Overviews "one of the most successful launches in Search in the past decade."
    For starters, AI Mode is now rolling out to everyone living in the US, and no Labs sign-up is required. The feature is now powered by a custom version of Gemini 2.5, similar to the updated AI Overviews.
    Google is beefing up AI Mode with Deep Search, which uses the same query fan-out technique but raises it several notches. The feature can perform hundreds of searches to refer to multiple sources and generate a fully cited report within minutes, the company said.
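    The "query fan-out" idea described above can be sketched in a few lines: one question is expanded into many narrower subqueries, each is searched concurrently, and the results are merged with their sources attached. This is only an illustration of the general technique, not Google's implementation; the `search()` function is a placeholder standing in for a real search backend.

    ```python
    # Sketch of query fan-out: expand a question into subqueries, search
    # them in parallel, and merge results while keeping a citation per item.
    from concurrent.futures import ThreadPoolExecutor

    def search(subquery: str) -> list[dict]:
        # Placeholder: a real implementation would call a search API here.
        return [{"url": "https://example.com/result",
                 "snippet": f"Result for: {subquery}"}]

    def fan_out(question: str, facets: list[str]) -> list[str]:
        # Expand the question into one narrower subquery per facet.
        return [f"{question} {facet}" for facet in facets]

    def deep_search(question: str, facets: list[str]) -> list[str]:
        subqueries = fan_out(question, facets)
        with ThreadPoolExecutor() as pool:
            result_sets = list(pool.map(search, subqueries))
        # Flatten, attaching the source URL to every snippet so the
        # merged report stays fully cited.
        return [f"{r['snippet']} [{r['url']}]"
                for results in result_sets for r in results]

    report = deep_search("best lightweight laptop",
                         ["battery life", "display", "price"])
    ```

    Scaling the same loop to hundreds of subqueries is what lets a fan-out system consult many sources for a single question while still tracing every claim back to a URL.
    
    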

    Project Astra's live capabilities are also coming to Google Search to improve its multimodal experience. Similar to Gemini Live, a feature called Search Live will let you have back-and-forth conversations with Google Search about what you see using your phone's camera.
    AI Mode will soon be updated to offer personalized suggestions based on past searches. You will get the option to connect other Google apps like Gmail. For instance, AI Mode will be able to suggest events near your stay, based on your hotel and flight bookings.
    Agentic capabilities in AI Mode will help with time-consuming tasks like booking event tickets, restaurant reservations, and local appointments, where you need to browse multiple sites to check prices and other details. Google said it will partner with companies like Ticketmaster, Resy, StubHub, and Vagaro to improve the agentic AI experience in Google Search.
    Apart from that, AI Mode will be updated to analyze complex datasets and create custom charts and interactive graphs for your sports and finance queries. A new shopping experience in AI Mode will help with various stages, such as finding new products and virtually trying on clothes.
    Google said these new features will be available to Labs users for AI Mode in the coming weeks and months.

    www.neowin.net
CGShares https://cgshares.com