
Data, AI & the New Diligence

The leaders of the Data & Trust Alliance—a coalition of global companies building responsible AI practices—discuss the critical role that HR leaders will play in this new era.


As with the internet before it, artificial intelligence has rapidly emerged as a driver of enterprise transformation. Seemingly every C-suite leader is assessing, piloting and increasingly deploying data- and AI-powered applications. But unlike prior technology-driven shifts, in which unintended consequences sometimes took years to appear, the risks and potential harms of AI are already the subject of intense debate and scrutiny.

CHROs understand this. For years, human resources functions have used data and algorithms to support decisions about hiring, promoting and retaining talent. HR leaders are also well aware of the dangers of algorithmic bias rooted in legacy datasets, a risk that has already drawn regulatory scrutiny. This, no doubt, is the most visible aspect of the AI phenomenon with which HR professionals are grappling.

Creating tools to detect, mitigate and monitor algorithmic bias in workforce decisions was the first project of our consortium, the Data & Trust Alliance (D&TA), a not-for-profit of 25 leading corporations and institutions formed by their CEOs to develop and adopt responsible data and AI practices.
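To make "algorithmic bias in workforce decisions" concrete, consider a screen many HR teams already apply to selection outcomes, the EEOC's "four-fifths rule": if any group's selection rate falls below 80 percent of the highest group's rate, the outcome warrants scrutiny. Here is a minimal sketch in Python; the function and sample data are illustrative, not part of the D&TA tool.

```python
from collections import Counter

def adverse_impact_ratios(decisions):
    """Compute each group's selection rate relative to the
    highest-rate group (the EEOC 'four-fifths rule' screen).

    decisions: iterable of (group, selected) pairs, where
    selected is True if the candidate advanced.
    """
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    top = max(rates.values())
    # A ratio below 0.8 is the conventional flag for adverse impact.
    return {g: rate / top for g, rate in rates.items()}

# Illustrative data: (group, advanced-to-interview?)
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 25 + [("B", False)] * 75
for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

Real workforce analytics are more involved, of course; statistical significance and job-relatedness also matter. The point is that checks like this are routine, automatable and squarely within HR's remit.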

If algorithmic bias were a CHRO’s only AI-driven challenge, it would be plenty. But the reality is that bias is only the beginning.

‘Seismic Shift’ and New Risks

The critical fault line in this coming technology revolution is the interface between human intelligence and artificial intelligence, as the augmentation of human skills and work evolves toward genuine collaboration between humans and intelligent systems. That means the business professionals with "human" in their title will play an increasingly pivotal role in how and where their firms leverage AI: designing new job roles, redesigning existing roles and career paths, and reskilling entire workforces. They will increasingly determine their organizations' success or failure.

Consider the critical decision to invest in or acquire other companies.

Investment in AI, including through mergers and acquisitions (M&A), is growing rapidly across all sectors of the economy. Although M&A activity across the tech sector as a whole weakened during the past two pandemic-impacted years, AI deal-making has kept up its pace. The aggregate value of global M&A transactions related to AI grew 23 percent year over year to $75 billion in 2023, according to S&P Global Market Intelligence. And consulting firm WTW expects AI to drive a "seismic shift" in M&A deal-making in 2024.

Despite that, the emerging value drivers and risks of data and AI are not well assessed by traditional M&A diligence.

For instance, of the 169 publicly reported AI failures in the AI Incident Database, only 18 percent resulted from risks already assessed by traditional due diligence: privacy violations, security breaches and vulnerabilities, and unauthorized decisions. The remaining 82 percent came from risks that existing due diligence criteria don't consider, such as faulty AI performance, risk of physical danger, lack of transparency and accountability, and, yes, algorithmic discrimination.

Obviously, these problems require the attention of the functions typically responsible for M&A—primarily finance and legal. However, the most critical area requiring scrutiny isn’t the target firm’s balance sheet, but its culture. In other words, diligence for data and AI M&A will increasingly become a core discipline for HR.

Responsible AI Diligence

When the chief legal officer of a D&TA member company shared how an AI-focused acquisition had failed because of cultural issues within the targeted firm, the CEOs in the alliance quickly greenlit the creation of three new types of responsible data and AI diligence for M&A, which we launched in 2022.

Two of the critical categories of “new diligence” were:

Data Diligence. These questions help an acquiring company assess how data is sourced, used and responsibly governed, in order to understand the data's true value and utility for the acquirer. They probe data quality, data bias, and data consent and rights, including third-party usage rights.

Algorithmic Diligence. These questions help an acquiring company assess how an AI model is designed, deployed and monitored, to verify that the model performs as claimed and to minimize unintended consequences.
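As one illustration of what data diligence can look like in practice, here is a hypothetical sketch of an automated screen an acquirer's team might run over a target's dataset inventory. The metadata field names are assumptions for illustration, not the D&TA questionnaire.

```python
# Hypothetical data-diligence screen: flag datasets whose metadata
# lacks provenance, consent, or third-party usage-rights records.
# Field names are illustrative assumptions, not the D&TA questionnaire.
REQUIRED_FIELDS = ("source", "collected_on", "consent_basis", "third_party_rights")

datasets = [
    {"name": "resumes_2021", "source": "job portal", "collected_on": "2021-06-01",
     "consent_basis": "terms of service", "third_party_rights": "licensed"},
    {"name": "scraped_profiles", "source": "web scrape", "collected_on": "2019-03-15"},
]

for ds in datasets:
    missing = [f for f in REQUIRED_FIELDS if f not in ds]
    status = "needs review: missing " + ", ".join(missing) if missing else "documented"
    print(f"{ds['name']}: {status}")
```

A screen like this only surfaces gaps; the diligence questions themselves probe why the gaps exist and who is accountable for closing them.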

However, even among finance and legal experts, it quickly became clear that the most important category was Responsible Culture Diligence. This helps an acquirer assess a target’s mindset around data and AI, as well as identify the mechanisms in place to sustain a culture of responsibility and rigor. This is especially critical for acquisitions in which talent is the primary value driver, but it applies across all dimensions of M&A for data and AI.

The critical new reality is that AI and machine learning (ML) models are fundamentally different from previous information technologies. What comes out of a model, such as the large language model that powers ChatGPT, depends on the datasets on which it is trained. Do the people doing that training ensure representative data? Do they take the time to check the data's provenance (origin, associated rights, recency and so on)? How do they set the model's parameters, the values that determine how inputs are transformed into outputs? How do they test the model for overfitting or underfitting?
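For readers who want to see what that last check involves, here is a minimal sketch of an over/underfit test: compare a model's performance on its training data against held-out data it has never seen. The dataset and model are stand-ins built with scikit-learn; a real diligence review would examine the target's own evaluation pipeline.

```python
# A minimal over/underfit check: compare performance on the data a
# model was trained on against held-out data it has never seen.
# Dataset and model here are illustrative stand-ins (scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)

# A large train/test gap suggests overfitting (memorization); low
# scores on both suggest underfitting. Either is a diligence question.
print(f"train accuracy: {train_acc:.3f}")
print(f"held-out accuracy: {test_acc:.3f}")
print(f"generalization gap: {train_acc - test_acc:.3f}")
```

Whether a target's team runs checks like this habitually, and how it responds when they fail, is exactly the kind of cultural signal the diligence surfaces.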

These may seem like technical details, but they are actually cultural indicators. Diligence around responsible data-and-AI culture is a strong indicator not only of organizational maturity but also of values alignment. Therefore, it is particularly important before acquisition.

Our tool—created and refined by more than 80 experts in AI risk, AI ethics and policy, legal and compliance, data quality, and mergers and acquisitions—includes questions that assess a target’s mindset regarding data and AI and whether it has rigorous management systems to sustain a culture of responsibility. Like all management tools created by D&TA, this one is free to use.

How to Assess AI in M&A

In the meantime, here are a few pointers for HR leaders, working with finance, legal and other departments, to assess data and AI during a merger or acquisition:

  • Probe whether the target company grounds the way it designs, deploys and monitors AI models in its core values. Has it thought seriously about who it is, why it exists and where it's going? Its data and AI systems should reflect that level of organizational maturity about both opportunity and risk. And its values will also tell you a lot about potential integration issues.
  • Find out how the target’s teams are trained and motivated to think about data and AI responsibility. Because data and AI/ML technologies require constant monitoring and updating, strong learning cultures and responsibility incentives are excellent indicators of high-performing teams.
  • Can a target company articulate how it has handled trade-offs when building a model or using data? For example, teams sometimes accept weaker bias mitigation in exchange for higher raw performance, or give up some performance to reduce bias (a simple illustration follows this list). Concrete examples of how a target has handled difficult issues or trade-offs involving responsible data and AI indicate maturity; a lack of engagement on the topic is a red flag.
  • Does the target have a culture of science? How do their data and AI teams stay apprised of (or contribute to) relevant peer-reviewed publications? Adherence to established scientific practices is a strong indicator that a team’s culture is rigorous, isn’t insular and doesn’t “move fast.”
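To illustrate the kind of trade-off mentioned above, here is a toy sketch showing how tuning a screening model's decision threshold for raw accuracy can widen the gap in selection rates between groups. All scores and labels are synthetic stand-ins, not real candidate data.

```python
# Illustrative trade-off: tuning a screening threshold for raw accuracy
# can widen the gap in selection rates between groups. All names and
# numbers below are synthetic stand-ins.
candidates = [
    # (group, model_score, actually_qualified)
    ("A", 0.90, True), ("A", 0.80, True), ("A", 0.60, False), ("A", 0.55, False),
    ("B", 0.62, True), ("B", 0.58, True), ("B", 0.50, False), ("B", 0.45, False),
]

def selection_rate(group, threshold):
    scores = [s for g, s, _ in candidates if g == group]
    return sum(s >= threshold for s in scores) / len(scores)

def evaluate(threshold):
    correct = sum((s >= threshold) == qualified for _, s, qualified in candidates)
    accuracy = correct / len(candidates)
    # Demographic-parity gap: 0.0 means both groups advance at equal rates.
    gap = abs(selection_rate("A", threshold) - selection_rate("B", threshold))
    return accuracy, gap

for threshold in (0.4, 0.7):
    accuracy, gap = evaluate(threshold)
    print(f"threshold {threshold}: accuracy {accuracy:.2f}, parity gap {gap:.2f}")
```

In this toy example, raising the threshold improves accuracy from 0.50 to 0.75 while widening the selection-rate gap from zero to 0.50. How a team reasons about and documents choices like that is precisely what responsible culture diligence probes.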

M&A for data and AI involves much more than the traditional questions around "cultural fit." Without a culture of responsibility, AI failures can grow in frequency and intensity. Assessing the skills, values and mindset of the people who design, deploy and manage these technologies in a company being acquired is just as important as assessing the technology itself. And for all of that, human resources will be an indispensable partner.


The Data & Trust Alliance is a not-for-profit consortium that brings together leading businesses and institutions—including Meta, Walmart, Nike and IBM—to learn, develop and adopt responsible data and AI practices.

Jon Iwata

Jon Iwata is the D&TA’s executive director and former chief brand officer at IBM.

Saira Jesani

Saira Jesani is the D&TA’s deputy executive director and a partner at SYPartners.

Mike Wing

Mike Wing is a D&TA fellow and former vice president of strategic communications at IBM.