
How AI-enabled bossware is being used to track and evaluate your work
www.computerworld.com
Employee monitoring software, also called bossware or tattleware, is increasingly being used to track and manage employees remotely via a business network or desktop software. And now, bossware vendors are injecting artificial intelligence (AI) tools into their products, shifting employee monitoring from basic tracking to something more granular that can offer deeper, more actionable insights and even play a role in layoffs.

In a survey last year, online privacy and security provider ExpressVPN found that 61% of companies are using AI-powered analytics to track and evaluate employee performance. Employee monitoring tools can increase efficiency with features such as facial recognition, predictive analytics, and real-time feedback for workers, allowing them to better prioritize tasks and even prevent burnout. When AI is added, the software can track activity patterns, flag unusual behavior, and analyze communication for signs of stress or dissatisfaction, according to analysts and industry experts. It can also generate productivity reports, classify activities, and detect policy violations.

"In fact, we're now seeing employers track physical spaces with tools including video surveillance (69%) and badge-based entry/exit tracking (58%) as companies demand employees return to the office," said Lauren Hendry Parsons, ExpressVPN's privacy advocate.

Overall, remote employee monitoring is now at an all-time high.
Depending on the software being used, AI-infused bossware can perform:

- Activity tracking and behavior analysis: AI flags unusual behavior, such as excessive time on non-work tasks or changes in typing patterns.
- Sentiment analysis: Communication is monitored for signs of stress or dissatisfaction.
- Automated report generation: AI compiles data into productivity reports with insights and recommendations.
- Data categorization: Activities are classified as productive or not to help managers focus on key areas.
- Facial recognition and biometric monitoring: Attendance and engagement are tracked through AI-driven facial recognition.
- Automatic policy violation detection: Policy breaches, such as accessing inappropriate sites, are flagged.
- Automated scheduling and task allocation: AI optimizes task assignments based on employee strengths.

Some AI productivity tools track data such as hours online, emails sent, and other activities to give employees a score on their work. How managers interpret that score varies; some might take it at face value, while others, especially in office settings, might rely more on their own judgment and qualitative insights about an employee.

Other concerns arise when companies use the scores to make employment decisions, such as layoffs, without considering the full context behind the data, according to Pegah Moradi, a workplace automation researcher and PhD candidate at Cornell University. Companies can use extensive data on employees to make decisions, creating an imbalance of power, since employees don't have access to the same data about themselves. With the rise of remote work post-COVID, attention tracking has become common, with employees logged out or flagged for being idle too long, Moradi said.

The role of AI in monitoring

While managers have always used metrics to assess employee performance, AI tools can now consolidate those metrics into a single score that's harder to interpret.
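To make the "single score" idea concrete, here is a minimal sketch of how categorized activity data might be rolled up into one productivity number. The app lists, categories, and scoring rule are illustrative assumptions, not any vendor's actual logic:

```python
# Hypothetical rule-based activity categorization and scoring, the kind of
# logic bossware vendors describe. All app names, categories, and the
# scoring formula are illustrative assumptions, not a real product's rules.

PRODUCTIVE_APPS = {"outlook", "excel", "vscode", "salesforce"}
UNPRODUCTIVE_APPS = {"youtube", "netflix", "twitch"}

def categorize(app_name: str) -> str:
    """Classify a single application as productive, unproductive, or neutral."""
    app = app_name.lower()
    if app in PRODUCTIVE_APPS:
        return "productive"
    if app in UNPRODUCTIVE_APPS:
        return "unproductive"
    return "neutral"

def productivity_score(activity_log: list[tuple[str, int]]) -> float:
    """Fraction of logged minutes spent in 'productive' apps (0.0 to 1.0)."""
    total = sum(minutes for _, minutes in activity_log)
    if total == 0:
        return 0.0
    productive = sum(m for app, m in activity_log if categorize(app) == "productive")
    return productive / total

log = [("Outlook", 90), ("YouTube", 30), ("Excel", 60)]
print(round(productivity_score(log), 2))  # -> 0.83 (150 of 180 minutes)
```

Even this toy version shows why such scores are hard to interpret: everything hinges on which apps land in which bucket, context the score itself never carries.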
This trend is growing, partly due to the availability of large language models (LLMs), according to Moradi.

"LLMs are often used in predicting employee behaviors, including the risk of quitting, unionizing, or other actions," Moradi said. "However, their role is mostly in analyzing personal communications, such as emails or messages. That can be tricky, because interpreting messages across different people can lead to incorrect inferences about someone's job performance."

If an algorithm causes someone to be laid off, legal recourse for bias or other problems with the decision-making process is unclear, and that raises important questions about accountability in algorithmic decisions, she said.

The problem, Moradi explained, is that while AI can make bossware more efficient and insightful, the data being collected by LLMs is obfuscated. "So, knowing the way that these decisions [like layoffs] are made are obscured by these, like, black boxes," Moradi said.

Technology worker rights organizations argue that remote employee monitoring produces more negative results than positive ones. The Electronic Frontier Foundation (EFF), which originated the term bossware, has denounced employee monitoring software as a violation of privacy. The Center for Democracy and Technology (CDT) has denounced bossware as a threat to the safety and health of employees.

Matt Scherer, CDT's senior policy counsel for Workers' Rights and Technology, said there is considerable anecdotal evidence that the use of these tools has increased over the past 10 years as data has become more valuable, particularly in the years since COVID-19 led to an increase in remote work.

Bossware is also unhealthy for workers because it discourages breaks, enforces a faster work pace, and reduces downtime; those factors can combine to increase the risk of physical injuries from job strain and of mental health issues, according to Scherer.
There also appears to be an increase in the number of companies using these systems in ways that threaten workers' legal rights, such as by disrupting the right to organize, he said. "But to me, the most troubling thing is that we just plain don't know how common these surveillance systems are, which employers are using them, or which workers are being affected."

Hudson Hongo, a spokesman for the EFF, argued that most bossware is punitive and meant to penalize workers. He agreed it can jeopardize employee health and put workers' privacy and security at risk.

"Workers have legal and contractual rights that protect them against privacy violations, wrongful termination, and other unjust treatment, including actions guided by automated decision-making (ADM) systems," Hongo said. "While ADM vendors may promise employers increased efficiency or more objective decision-making, these systems are frequently faulty, have repeatedly demonstrated bias, and may not be aware of relevant local and state laws and regulations."

Many tech vendors are adding generative AI (genAI) to performance management systems, promising time savings and more objective, data-driven evaluations, according to Gartner Research. However, adoption is limited, with 52% of human resources leaders reporting no interest in AI for performance management.
HR leaders are often hesitant to adopt the technology due to its novelty and unresolved compliance concerns, according to Gartner HR Director Analyst Laura Gardiner. But by 2027, 30% of organizations will provide targeted training for managers on how to contextualize AI-generated performance feedback, increasing managers' skill in the responsible use of genAI, she predicted.

Is regulation the answer?

Several states, including California, Illinois, New Jersey, New York, and Vermont, have proposed laws regulating automated tools in hiring, firing, and compensation, according to the Center for Labor and a Just Economy at Harvard Law School.

A 2023 Massachusetts bill sought to address automated decision-making and the collection of worker data from surveillance, with a private right of action for workers. Under such proposals, workers would be able to appeal or correct decisions made by automated systems. Most measures require impact assessments, to be conducted by an independent third party, and call for employers to provide timely access to assessment results, including relevant information, when AI or automated systems affect employment.

The Massachusetts proposal has not been enacted into law. In fact, Scherer said, there are no laws in the United States that place hard limits on what types of surveillance employers can conduct on workers or on what types of data they can collect. In the workplace, violations of employee privacy do not usually raise the specter of lawsuits, which is one reason stronger regulation is needed as worker surveillance tools become more common, Scherer said.

Not all bossware is for performance monitoring. There's a security rationale, too: employees, whether acting intentionally or unintentionally, are a top cause of data breaches.
According to a Verizon study, 82% of breaches are the result of employee errors or insecure acts, and up to 68% involve non-malicious human error, such as inadvertent actions or falling for social engineering scams. To prevent data breaches, IT organizations use employee monitoring software to track illegal activity, protect confidential information, and watch for insider threats.

A matter of trust

When approached with care, employee surveillance can improve operations without compromising dignity or trust. The key is recognizing that it's not just about technology, but about the balance of power between employers and employees, said ExpressVPN's Parsons.

"When monitoring shifts from a tool for productivity to an invasive form of spying, it creates distrust, stifles creativity, and breeds resentment," she argued. Employers need to reflect on whether their monitoring systems may unintentionally damage morale.

For example, instead of using real-time monitoring or biometric tracking, employers could focus on measuring outcomes. That would give workers more autonomy, fostering a positive and productive work environment, Parsons said.

"There are ways in which those kinds of tools can be used to ensure fairness, to ensure equal treatment, to ensure inclusivity," said David Brodeur-Johnson, employee experience research lead at Forrester.

Imagine, for example, that a large organization gathers data on employee sentiment, tone, and interactions in an anonymous, aggregated way to track overall mood over time. Or, after announcing a major corporate change that could create uncertainty, anxiety, or mistrust, business leaders can analyze the data to see how employees are reacting: what they're worried about or where confusion exists, Brodeur-Johnson said. That insight helps leadership adjust messaging, clarify priorities, and offer more support, he said.
However, there's a risk this data could also be used in unethical ways.

Several prominent companies offer AI-enhanced tools to evaluate employee performance and time engaged in work, Brodeur-Johnson said; they include ActivTrak, Microsoft Viva, DeskTime, Sapience Analytics, Veriato (owned by Awareness Technologies), Time Doctor, Snapsoft, Hubstaff, and Teramind.

Fighting employee-monitoring myths

Awareness Technologies CEO Elizabeth Harz said there's a lot of misinformation and mythology about the employee monitoring software industry, most of it rooted in fear of new innovations. Her company's Veriato employee monitoring software is no different from the sales-tracking platforms that companies like Salesforce offer, she said; it just spans a greater number of business use cases.

Companies are responsible for protecting their most valuable assets, people and data, regardless of hybrid or remote work, she said. Monitoring software helps ensure employees are operating in safe work environments, protected from harassment, and that customer data is safeguarded against misuse.

These responsibilities have existed for decades, but technology now offers modern tools to manage them more effectively, just as sales-tracking tools like Salesforce became standard, Harz said. "I could defend any type of automation that's happened in the last 20 or 30 years, and we could see the benefits that have come out of it. And this area is no different. I think in five years, it will be extremely commonplace."

Businesses use Veriato's software to monitor employee actions on company-issued devices, including smartphones, tablets, laptops, and desktops; the practice is known as User Activity Monitoring (UAM).
UAM helps with two main goals: improving productivity and managing insider risk, she said.

For example, if an employee copies a customer's Social Security number or other personally identifiable information and pastes it into a Word document, Veriato will flag the activity and alert a manager so it can be stopped in real time. "The employee may not realize that they're compromising a customer's data. They're just trying to work faster," Harz said.

The software can also track an employee's hours at work, which can be used to create efficiencies. For example, if one employee works 38 hours a week and another works 80, and both accomplish the same amount of work, the software's data can be used to find ways to reduce wasted effort.

The data can also be used to decide who should be laid off, which Harz said isn't any different from a manager noticing poor performance over time. "And so if someone gets an alert that says a lawyer is spending half their time on Gmail, and somebody digs in there, they can see forensic-level screen grabs and show what that employee was doing. And they can share that with the employee," she said. "If someone is not working 80% of the day, they're watching cat videos on YouTube, they probably should go find a different job."

Exactly what's being watched by employee monitoring software and services depends on the platform and its privacy features. Some platforms automatically remove personally identifiable information before analysis to help ensure ethical data use. For example, Microsoft focuses on protecting identity and analyzing only aggregate data. Trust and safeguards are built in to prevent misuse, Brodeur-Johnson said.

Ironically, the increasing use of AI-enhanced monitoring software has also given rise to ways of gaming it, according to Moradi. "You're seeing the rise of these sorts of systems to, like, a mouse jiggler [that] jiggles your mouse every so often," she said.
"I have a friend who always has a YouTube video playing on her computer, because that shows she's online. I find it kind of interesting that there are all these little ways that people are resisting this sort of tracking in order to reclaim their own autonomy," Moradi said.
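The cat-and-mouse game Moradi describes ultimately comes down to idle detection: monitors flag a user when no input arrives within some timeout, and a jiggler resets that clock with a synthetic event. A minimal sketch of the detection side, where the five-minute threshold is a hypothetical value rather than any product's actual default:

```python
# Sketch of the idle-detection rule that mouse jigglers and always-playing
# videos are designed to defeat. The 5-minute timeout is an illustrative
# assumption; timestamps are seconds since an arbitrary start.

IDLE_TIMEOUT_SEC = 5 * 60

def is_idle(last_input_ts: float, now_ts: float,
            timeout: float = IDLE_TIMEOUT_SEC) -> bool:
    """True if no keyboard/mouse event has been seen within the timeout."""
    return now_ts - last_input_ts >= timeout

# A jiggler generates a fake input event just under the limit:
print(is_idle(last_input_ts=0, now_ts=600))    # 10 quiet minutes -> True
print(is_idle(last_input_ts=540, now_ts=600))  # "jiggled" at 9:00 -> False
```

The sketch also shows why such tracking is so easy to game: the monitor sees only input timestamps, not whether any work is actually happening.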