In a growing trend across workplaces, a notable share of managers are turning to artificial intelligence tools such as ChatGPT and Microsoft Copilot to guide, and in some cases directly decide, high-stakes personnel actions including hiring, promotions, raises, and terminations. A survey by ResumeBuilder.com found that 78% of managers use AI to determine raises, 77% for promotions, 66% for layoffs, and 64% for terminations, while more than one in five say they “frequently” allow AI to make final decisions without human input. Experts warn that without proper training, AI-led decision-making risks introducing bias, stripping empathy from the process, and exposing organizations to legal liability under equal employment opportunity (EEO) and anti-discrimination laws.
Sources: Epoch Times, HRDive.com, National Law Review
Key Takeaways
– Widespread AI Use in Personnel Decisions: A majority of managers are using AI to shape or make decisions on raises (78%), promotions (77%), layoffs (66%), and terminations (64%), many with little or no human oversight.
– Ethical and Legal Pitfalls: Only about one-third of AI-using managers have received formal ethical training, and many have received none, leaving these practices exposed to algorithmic bias, discrimination, and legal liability under federal employment laws.
– AI Is Shaping Workplaces, But Without Empathy: AI may bring efficiency to HR workflows, but experts stress its lack of empathy and context, underscoring that it should support, not replace, human judgment in sensitive employment matters.
In-Depth
Artificial intelligence is becoming an increasingly central actor in workplace decision-making, extending beyond routine automation to influence, and sometimes dictate, major personnel moves. Recent data from a ResumeBuilder.com survey reveals that a majority of managers lean on AI tools when considering raises (78%), promotions (77%), layoffs (66%), and terminations (64%). Alarmingly, more than one in five admit that AI frequently makes these decisions without any human intervention.
While AI can deliver efficiency and data-driven insights, its growing autonomy in people matters is sobering. Most managers using these tools have received little to no formal training in ethical deployment, leaving critical choices about livelihoods and futures vulnerable to flawed algorithms, biased outputs, or misinterpretation. Legal experts caution that this trend risks running afoul of anti-discrimination laws enforced by agencies such as the EEOC, with real examples already emerging of AI systems unintentionally sidelining protected groups or reinforcing historical imbalances.
Despite its power, AI lacks two essential human qualities: nuance and empathy. It may process resumes faster or detect patterns humans overlook, but it cannot grasp personal context, individual growth trajectories, or the unquantifiable human factors that often inform key judgments. Experts therefore urge a framework in which AI assists rather than decides, paired with robust oversight, training, and governance. Without such guardrails, organizations risk undermining trust, producing unfair outcomes, and facing regulatory or reputational fallout.
In short, the momentum behind AI in HR and management is undeniable—but so is the need for responsible, human-centered deployment.