Irony alert: Anthropic says applicants shouldn't use LLMs
arstechnica.com
Eating your own dog food

We agree with Anthropic: People shouldn't use its AI to hide bad communication skills.

Kyle Orland | Feb 4, 2025 2:09 pm

Nothing to see here, just a human applicant for a human job. Credit: Getty Images

When you look at the "customer stories" page on Anthropic's website, you'll find plenty of corporations reportedly using Anthropic's Claude LLM to help employees communicate more effectively. When it comes to Anthropic's own employee recruitment process, though, the company politely asks applicants to "please ... not use AI assistants," so that Anthropic can evaluate their "non-AI-assisted communication skills."

The ironic application clause, which comes before a "Why do you want to work here?" question in most of Anthropic's current job postings, was recently noticed by AI researcher Simon Willison. But the request has appeared on most of Anthropic's job postings since at least last May, according to Internet Archive captures.

"While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process," Anthropic writes on its online job applications. "We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills."

The inherent hypocrisy here highlights the precarious doublethink corporations like Anthropic must maintain around AI's role in the workplace these days. LLMs are amazing tools that can supercharge employee productivity and help them communicate difficult concepts more effectively, Anthropic and many other tech companies argue.
But also, employees who rely on AI tools might be hiding some personal deficiency that we should know about.

AI for thee, not for me

On one hand, we can see why Anthropic would include this request for human-authored applications. When evaluating a job applicant's "Why do you want to work here?" statement, you want to be sure you're seeing the applicant's own thoughts, not some computer-generated pabulum from a complex network that has been called a "plagiarism machine" by some of the authors whose work was allegedly used without permission as training data. You're evaluating these applicants for their skill at getting their unique viewpoint across, not for their skill at prompting an AI to mimic that process.

On the other hand, Anthropic itself is quick to sell its LLMs as a way for other companies to help employees who might have a little trouble communicating without AI assistance. At Asian AI aggregator WRTN, for instance, Anthropic highlights how Claude "helps users enhance their written communication with more natural, polished language," describing the very type of use case that would apparently get an Anthropic job application thrown out.

The same tools that are helping companies like Pulpit with their writing are apparently bad for writing Anthropic job applications. Credit: Pulpit / Anthropic

There are plenty of other Anthropic-assisted communication examples across the company's customer stories pages. Brand.ai reportedly uses Claude to allow "one copywriter to manage 600 pieces of content... all while preserving the human touch that makes brands special." Otter reportedly uses Claude to help teams "engage in targeted, topic-specific discussions with both colleagues and AI, ensuring seamless and focused communication."
Pulpit AI reportedly uses Claude to help pastors "communicate with and reach their congregation and local community" by "turn[ing] sermons into multiple pieces of content." We could go on.

Anthropic seems to have no problem with its own employees using Claude in similar ways once they're actually hired; as the application itself says, the company "encourage[s] people to use AI systems during their role to help them work faster and more effectively." So why can't a job applicant use the same tools to more effectively convey their desire to work at the company?

This is why we write

In large part, the discrepancy has to do with the point of human-authored writing itself. More than just a utilitarian way to get information across, most pieces of writing also provide a crucial window into the author's feelings, beliefs, and thinking process. These are the things a recruiter is trying to glean from a written answer on a job application, and they're also the kinds of things that can be obscured by the use of homogenizing AI tools.

Then again, maybe looking at resumes and written applications is simply outdated in the AI era. Anthropic's customers page also highlights AI recruitment startup Skillfully, which it says uses Claude to "identify candidates on the basis of demonstrated skills..."

Please do not use our magic writing button when applying for a job with our company. Thanks! Credit: Getty Images

"Traditional hiring practices face a credibility crisis," Anthropic writes with no small amount of irony when discussing Skillfully.
"In today's digital age, candidates can automatically generate and submit hundreds of perfectly tailored applications with the click of a button, making it hard for employers to identify genuine talent beneath punched up paper credentials."

"Employers are frustrated by resume-driven hiring because applicants can use AI to rewrite their resumes en masse," Skillfully CEO Brett Waikart says in Anthropic's laudatory write-up.

Wow, that does sound really frustrating! I wonder what kinds of companies are pushing the technology that enables those kinds of "punched up paper credentials" to flourish. It sure would be a shame if Anthropic's own hiring process was impacted by that technology.

Trust me, I'm a human

The real problem for Anthropic and other job recruiters, as Skillfully's story highlights, is that it's almost impossible to detect which applications are augmented using AI tools and which are the product of direct human thought. Anthropic likes to play up this fact in other contexts, noting Claude's "warm, human-like tone" in an announcement or calling out the LLM's "more nuanced, richer traits" in a blog post, for instance.

A company that fully understands the inevitability (and undetectability) of AI-assisted job applications might also understand that a written "Why do you want to work here?" statement is no longer a useful way to differentiate job applicants from one another. Such a company might resort to more personal or focused methods for gauging whether an applicant would be a good fit for a role, whether or not that applicant has access to AI tools.

Anthropic, on the other hand, has decided to simply resort to politely asking potential employees to please not use its premier product (or any competitor's) when applying, if they'd be so kind.

There's something about the way this applicant writes that I can't put my finger on...
Credit: Aurich Lawson | Getty Images

Anthropic says it engenders "an unusually high trust environment" among its workers, where they "assume good faith, disagree kindly, and prioritize honesty. We expect emotional maturity and intellectual openness." We suppose this means the company trusts its applicants not to use undetectable AI tools that Anthropic itself would be quick to admit can help people who struggle with their writing. (Anthropic has not responded to a request for comment from Ars Technica.)

Still, we'd hope a company that wants to "prioritize honesty" and "intellectual openness" would be honest and open about how its own products are affecting the role and value of all sorts of written communication, including job applications. We're already living in the heavily AI-mediated world that companies like Anthropic have created, and it would be nice if companies like Anthropic started to act like it.

Kyle Orland, Senior Gaming Editor

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.