Using generative AI at work makes co-workers question your competence, study claims
David Uzondu · Neowin · May 9, 2025 04:00 EDT

Image by Yan Krukau via Pexels

Google recently published a report sharing findings from a study in which customer service agents used an optional conversational AI assistant. Google found the tool could significantly boost productivity, reporting that agents using the AI saw an average 14% increase in efficiency. According to Google's calculations, this gain could save a full-time worker approximately 122 hours per year, surpassing Google's initial estimate. The study also noted the AI had a particularly large impact on lower-performing agents, helping them handle more difficult tasks and boosting their output by 35%, compared to a more modest 7% gain for higher performers.

However, these potential efficiency gains may come with a hidden social cost, according to a new study from Duke University, published recently in the Proceedings of the National Academy of Sciences. The research claims that despite AI's productivity benefits, using tools like ChatGPT, Claude, or Gemini might lead your coworkers and managers to view you as less competent.

The research, titled "Evidence of a social evaluation penalty for using AI," involved four experiments with over 4,400 participants. Researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll from Duke's Fuqua School of Business found a consistent pattern: employees who use AI tools tend to face negative judgments from colleagues and managers regarding their competence and motivation.

In the first experiment, participants imagined using either an AI tool or a standard dashboard-creation tool for a work task. Those in the AI group anticipated being seen as lazier, less competent, less diligent, and more replaceable. They also indicated they would be less willing to tell their managers or colleagues about their AI use. A supplemental study replicated these core findings, showing that participants in the AI condition expected significantly lower competence ratings from colleagues than those using a dashboard tool. In the study's results, higher scores indicate a more positive outcome for "Disclosure," "Competence," and "Diligence," while higher scores for "Lazy" and "Replaceable" represent a more negative outcome.

The second experiment seemed to confirm these anxieties. When participants evaluated descriptions of employees, those who received help from AI were consistently rated as lazier, less competent, less diligent, less independent, and less self-assured than those who got similar help from non-AI sources or no help at all. Another supplemental study investigated whether these social penalties changed if AI use was described as common versus uncommon in the workplace. Interestingly, the perceived norm of AI use did not significantly alter these negative evaluations, suggesting the penalty is quite robust.

The researchers also found that this bias can influence real-world business decisions. In a hiring simulation, managers who did not personally use AI frequently were less likely to hire candidates who reported regular AI tool use. Conversely, managers who were frequent AI users themselves showed a preference for AI-using candidates.
This aligns with findings from another part of the study, which indicate that the perception of laziness in an AI-using candidate is stronger among evaluators who themselves use AI less frequently.

The final experiment identified perceived laziness as a primary driver of this negative evaluation. However, the penalty could be lessened if the AI tool was clearly beneficial and appropriate for the specific task at hand. When AI use made obvious sense for the job, the negative perceptions were significantly reduced. For example, the study found that for manual tasks, using AI had a negative direct effect on perceived task fit, even beyond the laziness factor. In contrast, for digital tasks, where AI could be seen as more useful, AI use had a positive direct effect on perceived task fit, which helped partially counteract the negative impact of perceived laziness.