Front-line health care workers protect and care for others, prioritizing their patients' health and welfare over their own safety. In doing so, however, they face a staggering risk, one that also undermines workforce stability: health care and social assistance workers are five times more likely than employees overall to experience a workplace violence-related injury, according to the U.S. Bureau of Labor Statistics.
What’s more, the data indicate that violence against health care workers is rising, with cases of nonfatal injuries nearly doubling between 2011 and 2018. In a 2025 survey by National Nurses United, 82% of nurses said they experienced at least one episode of workplace violence within the past year, and nearly half reported an increase in workplace violence in their unit compared to the previous year.
Research suggests that emergency and trauma department workers face an exceptionally high risk of workplace violence. Long wait times and strained resources in these departments, coupled with uncertainty among patients and their family members, can incite anger and frustration that boil over into violence against nurses and staff.
Maintaining workplace safety is not only a clinical and security challenge, but also a crucial concern for human resources leaders. “This is ultimately about the safety and well-being of the workforce, so involvement from HR is critical,” said Scott Snyder, an expert in AI adoption and chief digital officer at EVERSANA. Working in a high-risk setting takes a toll on health care workers, fueling anxiety and burnout, increasing turnover, and impacting workforce confidence and patient safety. This creates a financial burden, as well. According to the American Hospital Association, staffing costs related to absenteeism, turnover, and loss of productivity due to workplace violence exceeded $513 million in U.S. hospitals in 2023.
Artificial Intelligence as a Potential Safeguard
Leadership can optimize security, nursing staff ratios, and response training to mitigate the risk of workplace violence; however, a vital missing piece is the ability to predict high-risk situations and intervene before they turn dangerous. Artificial intelligence is emerging as a game changer for health care organizations in the prevention of violence.
Organizations can harness the technology to predict violent situations by drawing on patient histories, previous encounters, and publicly available information. AI tools connected to security cameras, microphones, and other sensors can also flag warning signs in real time, detecting conditions such as overcrowding and long wait times and identifying aggressive or confrontational behavior in patients or bystanders.
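To make the idea concrete, here is a minimal, purely illustrative sketch (in Python) of how a few real-time cues might be combined into a single alert decision. The department snapshot, weights, and thresholds are all invented for demonstration and do not reflect any particular vendor's system; the aggression flag is assumed to come from a separate audio/video analysis model.

```python
# Illustrative only: a toy scoring function that folds hypothetical real-time
# cues (wait time, crowding, an aggression flag from an assumed upstream
# audio/video model) into a single alert decision. Weights and thresholds
# are invented for demonstration.
from dataclasses import dataclass


@dataclass
class DepartmentSnapshot:
    avg_wait_minutes: float   # current average wait time in the department
    occupancy_ratio: float    # patients present divided by available capacity
    aggression_flag: bool     # set by a separate audio/video model (assumed)


def risk_score(snapshot: DepartmentSnapshot) -> float:
    """Combine cues into a 0-1 score; weights are purely illustrative."""
    score = 0.4 * min(snapshot.avg_wait_minutes / 120, 1.0)    # long waits raise risk
    score += 0.3 * min(snapshot.occupancy_ratio / 1.5, 1.0)    # overcrowding raises risk
    score += 0.3 * (1.0 if snapshot.aggression_flag else 0.0)  # observed aggression
    return score


def should_alert(snapshot: DepartmentSnapshot, threshold: float = 0.6) -> bool:
    """Return True when the combined score crosses the alert threshold."""
    return risk_score(snapshot) >= threshold


if __name__ == "__main__":
    crowded_evening = DepartmentSnapshot(avg_wait_minutes=95,
                                         occupancy_ratio=1.3,
                                         aggression_flag=True)
    print(should_alert(crowded_evening))  # True under these illustrative weights
```

In practice, a score like this would prompt human review and a protocol-driven response, not automatic action.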
As part of its AI safety and security program, one Los Angeles-area hospital implemented solutions such as gun-detection technology, real-time detection of aggressive behavior, and a facial recognition system that identifies disgruntled or fired staff as well as individuals named in “Be on the Lookout” reports from law enforcement. Once AI-driven technology flags a high-risk situation, it can guide staff to intervene and de-escalate in accordance with established protocols.
AI technology is evolving quickly in the area of workplace safety. “It can’t make final judgment-based decisions, but it can alert staff to risk, provide guidance, and recommend the best course of action during a crisis situation,” Snyder said. He added that the technology becomes more accurate when it is trained using relevant internal data.
Properly trained AI models are also significantly more accurate than experts such as psychiatrists at predicting future violent behavior, according to researchers at the University of Washington and Johns Hopkins University. They trained deep learning models on clinical notes to predict which patients might become violent toward health care workers within the next three days and found that the AI models correctly predicted 7 to 8 out of 10 violent events, while human experts predicted only 5 out of 10.
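As a rough illustration of the general workflow, and not the researchers' actual approach (they used deep learning on real clinical notes), the sketch below trains a simple scikit-learn text classifier on a handful of synthetic note snippets and scores a new note for risk. Every note, label, and modeling choice here is invented for demonstration.

```python
# A deliberately simplified stand-in for the kind of model described above:
# a bag-of-words classifier over synthetic clinical-note snippets that
# outputs a probability of a violent incident.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic training snippets with labels (1 = violent incident followed).
notes = [
    "patient agitated, shouting at staff, refused medication",
    "patient calm, cooperative during assessment",
    "threatened nurse during triage, security called",
    "resting comfortably, family at bedside",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, labels)

# Score a new note for follow-up by staff.
new_note = ["patient increasingly agitated, raising voice with staff"]
print(model.predict_proba(new_note)[0][1])  # probability of a flagged event
```

In a real deployment, scores like these would feed the review and escalation protocols discussed below rather than trigger automatic action.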
HR’s Leadership Role
The use of AI in health care is contentious due to concerns about privacy, bias, liability, and the technology’s potential to replace security or other staff. Moving from theory to AI implementation requires careful consideration, input, and oversight from multiple departments. As facilitators of workforce culture, policy, training, and regulatory compliance, HR departments are uniquely positioned to take on leadership roles across several key areas.
Policy and Governance
Before implementation, HR should lead the creation of formal, comprehensive policies that outline how the organization plans to use AI technology and what types of data the AI tool can access and retain, such as clinical notes, audio and video recordings, and incident reports. Importantly, HR also needs to ensure that any private data used to train the AI remains confidential. Additionally, policies should cover:
- Consent and privacy protections for patients and employees.
- Compliance with local and national regulatory and privacy laws.
- Roles and permissions, including who has access to alerts and deployment decisions and who can override AI decisions.
Protocol Development
“Establishing guardrails and protocols in these solutions can help minimize biases and errors,” Snyder said. “Moreover, it’s essential to have clear guidelines that keep humans in the loop to avoid accidentally labeling someone a threat or taking inappropriate action.” He advises developing protocols that detail:
- When and how AI alerts are escalated (a simplified sketch of how such rules might be encoded follows this list).
- Which staff have access to alerts, as well as review and oversight authority.
- Response procedures.
- Systems to protect patient rights while ensuring staff safety.
- The documentation and review of AI-identified interventions.
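For illustration only, the hypothetical sketch below shows how the first two items on this list, escalation tiers, notification roles, and override authority, might be encoded as a simple lookup. The tiers, roles, and rules are invented; actual protocols would be set jointly by HR, security, clinical leadership, and legal counsel, with humans retaining final authority.

```python
# Hypothetical encoding of escalation rules: which roles see an alert at each
# severity tier, who may override or close the AI flag, and whether human
# review must be documented. All names and tiers are invented.
from typing import NamedTuple


class EscalationRule(NamedTuple):
    notify: list[str]        # roles that receive the alert
    may_override: list[str]  # roles allowed to override or close the AI flag
    requires_review: bool    # whether a human must document the outcome


ESCALATION_POLICY = {
    "low": EscalationRule(notify=["charge_nurse"],
                          may_override=["charge_nurse"],
                          requires_review=False),
    "moderate": EscalationRule(notify=["charge_nurse", "security"],
                               may_override=["security_supervisor"],
                               requires_review=True),
    "high": EscalationRule(notify=["charge_nurse", "security", "house_supervisor"],
                           may_override=["house_supervisor"],
                           requires_review=True),
}


def route_alert(severity: str) -> EscalationRule:
    """Look up the escalation rule; unknown severities default to 'high'."""
    return ESCALATION_POLICY.get(severity, ESCALATION_POLICY["high"])


if __name__ == "__main__":
    rule = route_alert("moderate")
    print(rule.notify)           # ['charge_nurse', 'security']
    print(rule.requires_review)  # True
```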
Training and Workforce Readiness
Communication and training are vital before, during, and after AI implementation. Snyder stresses that staff must understand what the technology can and cannot do and how it protects the workforce. “HR can ensure that training includes hard skills — how to respond using AI — and soft skills to effectively task and evaluate the AI output,” he said. Ongoing training should include:
- Competency assessments.
- Refresher courses.
- Mechanisms for staff feedback on system performance and usability.
Strengthening Safety Through Human-Centered AI
Human resources departments play a crucial role in conveying that this technology is designed to enhance workplace safety without replacing personnel. By leading this effort, HR can foster health care environments where caregivers can focus on healing others without worrying about their own safety.