Editor's note: Posts published on From the Workplace are written by outside contributors and do not necessarily reflect the views or opinions of SHRM.
Managers are increasingly turning to generative AI (GenAI) before difficult conversations to draft performance feedback, rehearse mediation scripts, and analyze their tone — creating both opportunity and risk for HR leaders.
AI can make managers more prepared, but it can also quietly introduce cultural blind spots that escalate the very conflicts it aims to resolve. When an AI-influenced performance conversation lands as culturally tone-deaf, misunderstandings spiral and inclusion efforts are undermined.
A 2025 MIT Sloan study published in Nature Human Behaviour found that AI models exhibit measurably different cultural tendencies depending on the language used in prompts. For example, English prompts elicited individualistic, analytical responses, while Chinese prompts produced more interdependent, holistic thinking.
Developers also train most large language models predominantly on a handful of languages, embedding Western communicative norms as the default. AI does not understand power dynamics, organizational history, or lived cultural nuance. It predicts patterns from its training data. When that data skews Western and individualistic, so do the outputs.
For HR teams managing diverse or global workforces, this gap is not theoretical. An AI-polished termination script, a performance improvement plan (PIP) drafted with Western directness, or a mediation framework that ignores face-saving norms can fracture trust and create legal exposure. Navigating this tension requires a more deliberate approach. Here are three strategies for using AI more intentionally in HR conflict work:
1. Leverage Prompt Personas with Cultural Framing
Don’t just use AI to script your own talking points. Ask it to roleplay each stakeholder with their cultural context. Because AI is a pattern-matching system, the personas and language you specify shape which cultural norms are activated. An HR business partner preparing a corrective action conversation might prompt: “I’m an HR business partner preparing to coach a manager before a corrective action conversation with a team member from Brazil. Roleplay the conversation from both perspectives — the manager delivering the feedback, and the employee receiving it. Reflect Brazilian workplace norms around relationship-building, warmth, and the importance of personal connection before addressing performance. Identify where the conversation could go wrong without that context.” That simulation builds cultural empathy that no generic template can replicate.
2. Build Holistic Context into Every Prompt
Vague prompts generate culturally generic results. HR leaders must train managers to override AI’s Western defaults by specifying the cultural context: power dynamics, hierarchy, relationship history, and communication norms. Research on Persian taarof — a sophisticated system of ritual politeness and indirectness — found that leading AI models navigated these norms correctly only 34% to 42% of the time and mostly defaulted to Western directness.
Here is a stronger prompt: “Draft talking points for a performance conversation with an employee from [COUNTRY] who has been with the organization for ten years. Reflect norms around [deference/indirect feedback], preserve the working relationship, and anticipate likely emotional responses.”
3. Keep a Culturally Fluent Human in the Loop
AI is a rehearsal partner, not a final authority. Before HR teams use AI-generated scripts in high-stakes situations — investigations, terminations, and mediations — someone fluent in the relevant language and cultural norms should review the output. One effective approach: roleplay the conversation with a culturally knowledgeable colleague or coach, then use AI as an observer to surface blind spots afterward. And never enter personally identifiable employee information into public AI tools; doing so creates a data governance risk that HR policy must address explicitly.
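One illustrative observer prompt for that final step (the wording here is a suggested starting point, not a vetted template): “Here is a summary of a rehearsed termination conversation with an employee from [COUNTRY]. Act only as an observer. Flag any moments where the manager’s phrasing could breach norms around [face-saving/indirect feedback], and suggest alternatives for a culturally fluent human reviewer to evaluate.”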
As GenAI embeds itself in management workflows, HR must move from informal experimentation to intentional governance. Cultural bias deserves the same policy attention as data privacy. Pair AI’s analytical power with cultural intelligence, and it becomes a genuine asset. Leave cultural context out, and it becomes a liability.
Ellen Kim is an independent mediator, arbitrator, and career/executive coach at Santa Clara University’s Graduate School of Business.
Mo-Yun Lei Fong is an adjunct lecturer at Stanford University’s Management Science and Engineering Department and a Harvard Business School career and executive coach.