Your employer's AI is scoring your 'engagement' from Slack replies and typing rhythm. You've never seen your score. Here's what's happening in HR black boxes.
Hyle Editorial
Your employer's HR software is scoring your 'engagement' based on how fast you reply to Slack messages and whether your typing rhythm has changed. You have never seen your score. This isn't speculative fiction—systems like Workday, SAP SuccessFactors, and HireVue are already deployed across Fortune 500 companies, silently ingesting data from your calendar, email, communication platforms, and even your keystroke patterns to generate performance predictions that influence promotions and terminations. According to a 2023 Gartner survey, 54% of HR leaders reported using some form of AI-driven talent analytics, yet fewer than 20% of employees at those organizations knew such systems existed.
The modern HR technology stack has evolved far beyond simple performance reviews. Today's platforms integrate with virtually every digital touchpoint of an employee's work life. Workday's Skills Cloud, for instance, continuously infers capabilities from project assignments and communication patterns. SAP SuccessFactors analyzes meeting attendance, document collaboration frequency, and response times to generate what the industry euphemistically calls "engagement indicators."
HireVue, originally known for video interviewing, now offers continuous assessment tools that claim to predict employee success by analyzing thousands of behavioral data points. The company's patent portfolio reveals ambitions to measure everything from facial microexpressions during video calls to linguistic patterns in written communications.
[!INSIGHT] The term "employee engagement" has been weaponized by HR tech vendors. What was once measured through annual surveys is now calculated in real-time using proxies that may have no validated correlation with actual job performance or satisfaction.
The Data Inputs You Never Consented To Share
Consider the data streams feeding these systems (a toy sketch of how such signals might be computed follows the list):
Communication Velocity: Slack and Microsoft Teams metadata—including response times, message frequency, and active hours—is harvested to build "responsiveness profiles."
Calendar Behavior: Meeting acceptance rates, calendar density, and even the timing of scheduling changes contribute to "collaboration scores."
Digital Body Language: Some platforms, particularly those integrated with productivity monitoring tools like ActivTrak or TimeDoctor, analyze keystroke dynamics—the rhythm and pressure of typing—to detect "anomalies" that might indicate burnout or disengagement.
Network Analysis: Organizational network analysis (ONA) tools map who communicates with whom, identifying "influencers" and "isolators" based on communication patterns.
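None of the vendors named above publish their scoring logic, so any reconstruction is guesswork. But the arithmetic behind proxies like these can be startlingly simple. The following Python sketch is a hypothetical illustration, with invented field names, sample data, and thresholds, of how a responsiveness score, a keystroke-rhythm anomaly flag, and an "isolator" label could each be produced in a few lines of code:

```python
# Hypothetical reconstruction, not any vendor's real code: field names,
# sample data, and thresholds are all invented for illustration.
from collections import Counter
from statistics import mean, stdev

# --- "Communication velocity": seconds elapsed before each chat reply ---
reply_latencies_sec = [45, 120, 30, 600, 90, 55, 2400, 75]

def responsiveness_score(latencies):
    """Toy score: 100 minus one point per minute of average reply delay."""
    return max(0.0, 100.0 - mean(latencies) / 60.0)

# --- "Digital body language": average inter-key interval (ms) per day ---
baseline_interkey_ms = [182, 179, 185, 181]   # early-week typing rhythm
latest_interkey_ms = 248                      # rhythm has slowed

def rhythm_anomaly(baseline, recent, z_cutoff=2.0):
    """Flag 'recent' if it sits more than z_cutoff std devs from baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(recent - mu) / sigma > z_cutoff

# --- "Network analysis": (sender, recipient) pairs from chat metadata ---
messages = [("ana", "bo"), ("ana", "cy"), ("bo", "cy"),
            ("ana", "bo"), ("dee", "ana")]
degree = Counter()
for sender, recipient in messages:
    degree[sender] += 1
    degree[recipient] += 1

print(f"responsiveness: {responsiveness_score(reply_latencies_sec):.1f}/100")
print(f"rhythm anomaly flagged: {rhythm_anomaly(baseline_interkey_ms, latest_interkey_ms)}")
print(f"'isolator' label goes to: {min(degree, key=degree.get)}")
```

The point is not that any platform uses these exact formulas, but that scores with career consequences can rest on statistics this crude, with cutoffs chosen by whoever happened to write the code.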
A 2024 investigation by The Markup found that at least 8 of the top 10 U.S. employers use some form of continuous performance monitoring, yet transparency varies dramatically. Most employee handbooks contain vague language about "data analytics" without specifying that algorithms are generating scores that affect career trajectories.
The Black Box Problem: When Algorithms Decide Your Future
The fundamental issue isn't that data is being collected—it's that the interpretation of that data is completely opaque to the people most affected by it. When Workday's machine learning models generate a "flight risk" score predicting which employees might leave, or when an algorithm flags someone as a "low contributor" based on communication patterns, the employee has no way to understand, contest, or correct that assessment.
“*"The lack of explainability in HR AI systems creates a fundamental power asymmetry. Employees are being judged by criteria they cannot see, against standards they cannot understand, through processes they cannot appeal.”
— Ifeoma Ajunwa, Professor of Law and Director of the AI Decision-Making Research Program, UNC School of Law
This opacity has real consequences. In 2023, a former IBM employee filed a class action lawsuit alleging that AI-driven performance assessments systematically disadvantaged workers over 40. The case highlighted how algorithmic bias—whether from training data or proxy variables—can violate anti-discrimination laws while hiding behind the veneer of objective data analysis.
[!INSIGHT] Proxy discrimination occurs when an algorithm uses seemingly neutral variables (like tenure or commute distance) that correlate with protected characteristics (like age or disability). Because the algorithm's decision-making process is opaque, this discrimination is nearly impossible to detect without subpoenaed internal documents.
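A synthetic example makes the mechanism concrete. In the sketch below, every number is invented: a scoring model never receives age as an input, yet its outputs split cleanly along age lines because tenure, a seemingly neutral feature, rises with age:

```python
# Synthetic demonstration of proxy discrimination: invented data and
# coefficients; no real system or dataset is depicted.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 10_000

age = rng.uniform(22, 65, n)
# Tenure tends to rise with age (minus noise for job changes).
tenure = np.clip(age - 22 - rng.exponential(8.0, n), 0.0, None)

# A model trained on past decisions that favored short-tenured staff.
# Note: 'age' is never an input; only the "neutral" tenure feature is.
score = 100.0 - 1.5 * tenure + rng.normal(0.0, 5.0, n)

over_40 = age > 40
print(f"corr(age, tenure):    {np.corrcoef(age, tenure)[0, 1]:.2f}")
print(f"mean score under 40:  {score[~over_40].mean():.1f}")
print(f"mean score 40 and up: {score[over_40].mean():.1f}")
# The score gap tracks age even though age was never fed to the model.
```

An auditor who inspected only the model's inputs would find nothing suspect; the disparity surfaces only when outputs are compared across the protected group, which is exactly the analysis opaque systems prevent.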
The Workday-HireVue Ecosystem: A Case Study in Opacity
Workday serves over 10,000 organizations representing 60 million users. Its "Talent Optimization" module uses machine learning to suggest internal mobility, identify skills gaps, and flag performance concerns. The company's marketing materials emphasize "democratizing talent decisions," but the actual algorithmic criteria remain proprietary trade secrets.
HireVue's assessment tools have faced particular scrutiny. In 2021, the company discontinued facial analysis in its video interviewing product after researchers demonstrated that these systems exhibited bias against non-white candidates and people who didn't maintain consistent eye contact with cameras. Yet HireVue's broader assessment tools—which analyze language patterns, response structures, and behavioral indicators—continue to operate with limited transparency.
GDPR Article 22: The Right You Didn't Know You Had
The European Union's General Data Protection Regulation includes a provision that seems designed exactly for this problem. Article 22 grants individuals the right "not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."
This means EU employees should, in theory, have the right to:
Obtain human intervention in automated decisions
Express their point of view and contest the decision
Receive meaningful information about the logic involved
However, the gap between legal right and practical enforcement is enormous.
[!NOTE] GDPR's Article 22 applies only to decisions based "solely" on automated processing. HR departments have learned to add a human sign-off step, however cursory, which may technically exempt their processes from this protection. This "human in the loop" loophole undermines the regulation's intent.
The Enforcement Reality Check
A 2024 study by the European Trade Union Institute surveyed 1,500 workers across EU countries who suspected they were subject to algorithmic management. Among them:
Only 3% had successfully exercised Article 22 rights
67% were unaware such rights existed
89% reported that their employers had never disclosed the use of AI in performance evaluation
The problem is compounded by confidentiality clauses in most HR tech contracts, which prevent companies from disclosing the specifics of algorithmic systems and frame proprietary algorithms as trade secrets that trump employee transparency rights.
In the United States, the regulatory landscape is even more fragmented. The Equal Employment Opportunity Commission (EEOC) has issued guidance on algorithmic discrimination, and Illinois passed the Artificial Intelligence Video Interview Act (2019), requiring disclosure and consent for AI-analyzed video interviews. But these address narrow use cases, leaving the broader ecosystem of continuous performance monitoring unregulated.
The Stakes: What Happens When We Can't See Our Scores
The implications of algorithmic HR extend beyond individual fairness concerns. When performance assessment becomes a black box, several systemic problems emerge:
Gaming Over Performance: When employees don't know what's measured, they can't optimize for actual performance—only for visible proxies. This creates perverse incentives: someone might prioritize rapid Slack responses over thoughtful work, or attend unnecessary meetings to boost "collaboration scores."
Feedback Loop Failure: Effective performance improvement requires understanding the gap between current and desired performance. Black box systems deny employees the information they need to improve, while still penalizing them for unspecified deficiencies.
Institutionalized Bias: Without transparency, algorithmic bias—whether from training data, proxy variables, or flawed assumptions—becomes invisible and therefore uncorrectable. A system that learned from historical promotion patterns will replicate historical discrimination, but do so with the imprimatur of "objective data analysis."
Erosion of Trust: Research from MIT Sloan Management Review (2023) found that employees who perceive performance evaluation as unfair or opaque demonstrate 37% lower engagement and 23% higher turnover intention—ironically undermining the very metrics these systems claim to measure.
Conclusion
The transformation of employee performance assessment from human conversation to algorithmic calculation represents a fundamental shift in workplace power dynamics. When an algorithm you cannot see, using criteria you cannot understand, generates a score you cannot access—yet that score influences decisions about your career—you have lost meaningful agency over your professional life.
The technology isn't inherently malevolent. Proponents argue that algorithmic assessment can reduce human bias, identify overlooked talent, and provide more frequent feedback than annual review cycles allow. These benefits are possible—but only if the systems operate with transparency, accountability, and genuine employee consent.
Key Takeaway: The current trajectory of HR AI—powerful algorithms operating in regulatory shadows with minimal employee awareness—represents a crisis of workplace democracy. Whether through expanded GDPR enforcement, new legislation, or collective bargaining, the principle must be established: any algorithm that affects your employment should be explainable to you, contestable by you, and visible to you. Your performance score may already exist. The question is whether you have any right to see it.
Sources: Gartner HR Survey 2023; The Markup investigation "The Permanent Temp Workers" (2024); European Trade Union Institute study on algorithmic management (2024); Ifeoma Ajunwa, "The Quantified Worker" (2023); MIT Sloan Management Review research on employee trust (2023); EEOC AI and Algorithmic Fairness Initiative guidance; IBM age discrimination class action filing (2023)