People + Strategy Journal

Winter 2022

Sensing Without Surveys: How New Data Science Helps Companies Respond to Their People

It is time for HR to adopt AI-enabled natural language processing (NLP) to solve thorny business challenges. Areas ripe for NLP include post-merger integration; diversity, equity, and inclusion; employee value proposition; and digital skills, among others.

By Lili Duan and Michael N. Bazigos, Accenture Strategy

Imagine that you are the Executive Vice President for HR at a leading company. Headcount is due to double to 200,000 in six months: Alpha (your company) is acquiring Beta. The vertical merger creates a bigger Alpha that would instantly leapfrog 95% of the Fortune 500 in rank. (Based on a real transaction; for this and other vignettes in this article, identifying details were altered for anonymity.) You are one of the executive sponsors of the integration group.

Success is imperative, but not guaranteed. Roughly two-thirds of mergers fail to achieve their business case within the expected timeframe, within budget, or at all. The issue is rarely technology: It is people and culture. As an officer from legacy Beta commented,

Too often, we tell our customers something and our partners [i.e., legacy Alpha employees] tell them something different or maybe say it another way. Our organizations have different styles of communicating. So, we have spent quite some time focusing effort on getting to a common vocabulary, and working out agreements about what is important and what is not important. 

You want to understand the size and kind of investment that post-merger integration will require. You are considering a “soft due diligence.” Its goal would be to understand the current cultures. This includes collective strengths, respective contributions, opportunities and gaps.

You engage us to help, and we discuss the complexities:

  1. No new data can be collected, including surveys. Management is concerned about potential employee reaction in the delicate, critical pre-deal period. (They are right: A fundamental principle of organization development and change is that visible data collection of any kind is itself an intervention. Employee reactions to 'mere' surveying could have skewed results, if not drawn attention to typical pain points during a sensitive period for the deal.)
  2. Organizational data on the companies consists of archived interviews and focus groups conducted for different purposes. It is voluminous, unstructured and disorganized. Finding themes that are clear and relevant will be arduous.
  3. Some quantitative data on employee attitudes from annual survey censuses is available. However, the companies used different survey instruments. Direct comparison is not possible. 
  4. With 140,000 total respondents, manually drawing valid inferences from text comments on archived surveys is ruled out.
  5. Time is short. Your presentation to the executive committee (ExCo) is due in two weeks. 

You consider rescheduling the date to buy time, using the 'brute force' method (rushed human analysis by a small army), or cherry-picking data sources and hoping for the best. You ultimately reject these options.

What will you do?

Results

You meet the deadline using new methods and techniques. Analysis discovers significant areas of cultural overlap between the organizations. You begin to craft a plan anchored in the congruent areas.

There are also significant differences. For example, net sentiment on work-life balance is the most negative topic for Beta. But at Alpha, that balance is top-rated. This makes sense. Beta competes by relentlessly squeezing the workforce for operational efficiencies. It manages to a short-to-medium-term time horizon. Alpha believes in achieving success by developing its people. Continuous professional development and manager feedback in the moment are evident. Alpha manages to a medium-to-long-term time horizon. 

This insight alone suggests areas for strategic intervention and investment. Together, we find many more.

The ExCo meeting is a success. Results are recognized as authentic, validated and accepted. Recommendations for focus areas, targets, strategy, budget, resources and timeline are staged in 90-day increments. The ExCo approves them with only minor modifications. 

Months later, the merger goes live. New Alpha's share price rises 32% within its first 90 days of operation. (This was not a Day One 'pop,' as the merger news had been priced in months earlier.) Then global markets plunge in early 2020 on pandemic concerns. New Alpha is hit hard. But it regains its footing. Its share price nearly doubles from its nadir within the next 60 days. It achieves a record price for either legacy company while outperforming the major stock indices for that period. (A causal connection cannot be attributed [and isn't here] to the recommendations alone. But the sorts of disconnects described by the Beta executive, if unaddressed, could plausibly have increased the 'friction coefficient' on New Alpha's performance arc, tamping down the results achieved.)

[Figure 1]

Discussion

Traditional methods could not address this situation's multiple constraints. A workaround was required. Enter natural language processing (NLP), and with it, social listening. NLP consists of a set of software-based procedures that ingest large volumes of unstructured free text in its intended context (the 'corpus' in data-speak) to derive useful meaning. The text can be captured from any source. See Figure 1. Social listening refers to the process of understanding the online conversation about a company or brand, as well as its products and services. (This is different from social monitoring, which typically tracks the number of brand mentions, relevant hashtags, competitor mentions, and industry trends or keywords.)

We created an outside-in assessment of Alpha’s and Beta’s culture by mining employee comments about the company in the public domain, e.g., social media platforms and industry discussion boards. We aggregated, ingested and fed web-scraped postings into NLP algorithms. We did the same with other public sources of text data: management writing, e.g., annual reports; announcements; analyst reports; and legacy company mentions in the press. 

We supplemented outside-in with internal documents. These were similarly processed: free-text survey comments; employee relations reports; HR policy documents; and notes from prior interviews and focus groups.

A pre-trained software capability identified language that was cultural in nature. Identified comments were then mapped against categories of a validated model of organizational culture. (The model has eight categories, not shown.) The mapping enabled an apples-to-apples comparison of the legacy organizations. Data from their different survey instruments was similarly mapped onto our single, standardizing construct.

Our first analytic pass extracted leading discussion themes within each culture category (e.g., leadership). The second analytic pass extracted topics within each theme (e.g., managers’ distrust of employees). 

Following culling and sorting, we calculated a net sentiment score for each dimension: the relative difference between the share of positive and the share of negative comments. Ultimately, these scores were tied back to the business case behind the acquisition to place the investment recommendation in its proper context.
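A net sentiment score of this kind is straightforward to compute. The sketch below shows one common formulation (the share of positive minus the share of negative comments); the article does not specify the exact formula used, so treat this as an illustrative assumption:

```python
# Hedged sketch: net sentiment per culture dimension as the share of
# positive comments minus the share of negative comments.
def net_sentiment(comment_scores):
    """comment_scores: list of floats; >0 positive, <0 negative, 0 neutral.
    Returns a value between -1.0 (all negative) and +1.0 (all positive)."""
    if not comment_scores:
        return 0.0
    pos = sum(1 for s in comment_scores if s > 0)
    neg = sum(1 for s in comment_scores if s < 0)
    return (pos - neg) / len(comment_scores)

# e.g., a dimension with 6 positive, 3 negative and 1 neutral comment:
scores = [0.8, 0.5, 0.9, 0.2, 0.6, 0.3, -0.7, -0.4, -0.9, 0.0]
print(round(net_sentiment(scores), 2))  # → 0.3
```

A score near +1 or -1 signals a strongly one-sided conversation within that dimension; scores near zero signal mixed or neutral sentiment.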

The Flexibility of NLP

NLP increases the meaningfulness of textual analysis by analyzing words in their intended context. At New Alpha, we analyzed text through the lens of organizational culture. In contrast, word clouds remain accessible and popular with some, but merely counting words provides little context. For example, knowing that "leadership" is a highly cited term among survey comments doesn't tell us much. It would be good to know whether "leadership rocks" (strong positive score) or "leadership stinks" (strong negative score).

Because NLP enables the intelligent mining of rich data at scale, other business functions have been using it for years. HR has not. Marketing insights typically focus on customers, product, pricing and brand. These yield a richer understanding of what's going on in markets than counting methods like word clouds or social monitoring.

Tales of Success

Maybe marketing was first on board with NLP. But NLP can be applied to any domain. Now, it's HR's turn.

Tailoring to HR’s interests occurs in the machine learning (ML) process. This requires a large corpus of training data (free text); a valid, domain-specific taxonomy (like our culture model); and customized training algorithms. Training consists of helping the machine learn when it classifies test data correctly and when it doesn’t (supervised learning). The machine can also create its own categories based on recognizing and grouping similar or neighboring word clusters (unsupervised learning). As a result, the machine can “understand” the statement, “My manager shares cookies with me during lunch breaks,” in the context of organization culture. It is categorized as “Trusting Leadership” and “Positive Sentiment.”

Other enterprising researchers are applying NLP to organizational constructs like culture. It has also been applied to performance management, skill assessment, and even recruiting and selection. To illustrate the power of the possible, we next present three of our most popular client use cases in strategic HR management: employee perceptions of the climate for

  1. Diversity, equity and inclusion (DE&I); 
  2. Digital capabilities and behaviors; and 
  3. Perceptions of employee value propositions (EVP).


Outside-In DEI. An historic level of societal attention focuses today on what fully belonging means. The corporate sector has responded with significant involvement in this issue. Demand for assistance in diagnosis, intervention and visible results in the DEI domain has never been higher in either author’s experience.

NLP techniques can address at least two critical and recurring questions in an outside-in, no-touch way: What are our employees' perceptions of their workplace's DE&I? And how do perceptions of our DE&I climate compare to those of our competitors? Table 1 and Figure 2 demonstrate the kind of information and insight that can be generated with these approaches. (Note that equity hasn't been broken out separately because it is inherent to both diversity and inclusion.)

Human considerations in DEI. Two common pitfalls we have seen in the market are potential AI bias and a representation issue. (We are highly committed to fairness, validity and the human element in our work.)

First, the whole enterprise is our typical unit of analysis. At this scale, some imprecision may not change the results very much. It is a concern, though, if bias that paints an unfair picture of an organization leaks in. This is especially true if an organization is comparing itself to others. 

To avoid bias, machines must recognize—and correctly score—certain locutions. Some may represent code-switching or vernacular language that appears in the informal writing of employees on social media posts, discussion platforms and other free-form text.

For example, many AI readings of the word ‘bad-ass’ or ‘badass’ will score it as negative to highly negative in sentiment. However, in current usage, this is often high praise applied to leaders, e.g., “RBG was such a badass!” The substitution of “bad” for “good” is only one example of positive and negative terms trading places. “Wicked” would be another example. 

[Table 1]

The extent of such usage could well differ between demographic groups. A company that is more diverse than another could show different rates at which "bad" substitutes for "good." Surprisingly, that could work against the diversity score of the more diverse company. The outcome depends on the specific sentiment lexicon selected. (Many are available; the machine uses them to score terms on direction and intensity.)
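The lexicon-dependence pitfall is easy to demonstrate. In the sketch below, both lexicons are tiny, made-up stand-ins for commercial sentiment dictionaries; only the handling of reclaimed terms like "badass" differs:

```python
# Illustration of how lexicon choice flips a sentiment score (toy lexicons).
GENERIC_LEXICON = {"great": 1.0, "bad": -1.0, "badass": -1.5, "wicked": -1.0}
TUNED_LEXICON   = {"great": 1.0, "bad": -1.0, "badass": +1.5, "wicked": +1.0}

def score(text, lexicon):
    """Sum the lexicon scores of each word; unknown words score zero."""
    words = text.lower().replace("!", "").split()
    return sum(lexicon.get(w, 0.0) for w in words)

comment = "Our new VP is such a badass!"
print(score(comment, GENERIC_LEXICON))  # negative: read as criticism
print(score(comment, TUNED_LEXICON))    # positive: correctly read as praise
```

The same comment lands on opposite sides of zero depending purely on which dictionary was loaded, which is why lexicon selection and tuning deserve explicit scrutiny in any DE&I analysis.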

Second, a diverse range of individuals must be included in decision-making. The DE&I community puts it less formally: "Nothing about us without us." The core question: How diverse is the team 'behind the curtain'? As an example, we (the authors) are neither Black, Hispanic, nor Indigenous. We recognize that we do not have the lived experience of those peoples, irrespective of our knowledge, readings, research findings, publications, memberships, friends and colleagues. And we never can.

We recruited members from our organization’s DE&I team who had the benefit of that lived experience to assist us. They helped mold the architecture of our analytic capabilities in this space. Today, we actively partner with colleague Managing Directors who are Black and Hispanic to engage with and deliver this capability to our clients together. 

Digital Dexterity. This is a validated assessment of people's digital skills and behaviors in leveraging existing and emerging technologies for better business outcomes. Our model consists of two components. Digital capabilities include skills in AI solutions, automated solutions, blockchain, cloud-based solutions, design thinking, Internet of Things (IoT), security best practice, and statistics and analytics. Digital behaviors include authority, data driven, ethical orientation, fluidity, growth mindset and individual agility.

The outside-in assessment in Figure 3 used external data to evaluate current readiness, estimate competency trends relative to peers and identify strategic talent insights. Talent profile data (exhibited skills) are gathered by scraping publicly available information from the web, third-party resume databases, job boards, the recruiting industry and various consumer/identity databases. Job posting data (desired skills) are gathered by scraping over 100,000 websites, including company career sites, national and local job boards, and job posting aggregators. The skill emphasis within an organization is evaluated as the percentage of profiles/postings that mention the skill.
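The skill-emphasis metric itself is simple arithmetic. A minimal sketch, with invented profile data:

```python
# Skill emphasis: the percentage of scraped profiles (or postings) in an
# organization that mention a given skill. Profile data below is invented.
def skill_emphasis(profiles, skill):
    """profiles: list of sets of skill strings; returns a percentage."""
    if not profiles:
        return 0.0
    hits = sum(1 for p in profiles if skill in p)
    return 100.0 * hits / len(profiles)

profiles = [
    {"cloud", "statistics"},
    {"cloud", "design thinking"},
    {"blockchain"},
    {"cloud", "iot"},
]
print(skill_emphasis(profiles, "cloud"))  # → 75.0
```

Comparing this percentage between an organization's talent profiles (exhibited skills) and its job postings (desired skills) surfaces the supply-demand gaps the assessment is after.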


[Figure 3]

Outside-In EVP. What's in it for employees to work at your company? We have seen wide-ranging EVPs. Many companies are quick to advertise their self-reported EVPs. However, employees may have different perceptions. NLP can sense those perceptions, helping answer the question: How does your EVP land?

For uniformity and expedience, we trained an NLP capability to answer this question through the lens of a simplified, five-component EVP model. The components are familiar: people, organization, work, opportunity and rewards. We unleashed our capability on data in the public sphere from hundreds of companies. Because EVPs are competitive differentiators in the war for talent, analyses carry the most meaning when presented together with competitors. Figure 4 shows how Company T, a major high-tech enterprise, stacks up against six other competitors, and how they compare to each other.  


Other uses of NLP. There is no limit to the questions that can be posed. The main requirement is crafting meaningful questions whose answers create tangible or intangible value. Examples of questions where the authors have used NLP include:

  • What is the range of experiences our employees have day to day?
  • Is our new CEO’s impact being felt on the ground?
  • How can we refocus our people towards work that only humans can do?
  • How can we rapidly reskill our people into higher-value, higher-satisfaction jobs? 
  • Which success archetypes is our current culture closest to? 
  • Which tasks can be performed remotely? Which require physical presence?


Advantages and Limitations of NLP

No claim of a panacea is made in this article. NLP offers numerous advantages. We have highlighted many. But there are inevitable trade-offs. HR leaders should be familiar with both for maximum impact.

Advantages. 

  • Addresses respondent hesitancy: unlike surveys, documents in the public domain are readily available and require no new data collection.
  • Public data are as anonymous or identifiable as the respondent wants them to be. (This assumes that data are ethically collected and treated.)
  • Shorter time-to-result cycle and less resource intensity than traditional enterprise surveys.
  • Textual analysis done well is superior to human content coding.
  • New methods to adjust for impression management (akin to truth scales in surveys) are in use and getting better.
  • ML for NLP is being continuously improved, and training datasets are becoming fairer as awareness of responsible AI grows.

Limitations. 

  • Self-selection in voluntary postings may introduce error, e.g., negativity bias. Such error is generally distributed evenly across organizations; when it is, it cancels out in comparisons.
  • Knowledge of others’ responses may introduce error. 
  • Impression management is evident in management publications (e.g., annual reports, press releases). But differences become apparent when comparing organizations, especially qualitatively. (Thematic content can vary widely.)
  • Results from data culled from anonymous external sources (e.g., social media platforms) typically cannot be cut by unit, geography and often level. 
  • Comments collected from open-source domains were neither intended for analysis nor expressly permitted for it. This raises ethical issues for some. Others assume that posting publicly signals implicit permission for any use.
  • Management in some organizations may want to collect personally identifiable information surreptitiously. (This is not ethical, even if it's legal, and it becomes problematic when discovered.)
  • There are no standards about what good looks like. The waters are choppy out there, and results are highly variable. 

Caveat Emptor

The last bullet above bears repeating. The authors recently compared sentiment-scoring accuracy across ten commercial NLP providers, each given the same data corpus and instructed to do their best. All knew it was a competition. (They did not know who their competitors were.) Outputs were compared against human analysis. Results ranged from completely random to directionally accurate (i.e., the NLP analysis corresponded to the humans'). 'Convergent validity' is only one test, of course, but it's a useful screening standard in purchasing decisions.

The Future is Yours

Daunting though the limitations may seem, it’s worth keeping three themes in mind. 

First, applying NLP to organizational issues is in its infancy. Yet, even imperfect technology can add robust value today. 

Second, what is possible gets better every day. Much is being learned. The pace of new discovery is accelerating. So is the nature, speed and power of computing. Methods in this article will likely become obsolete before long.

Third, HR has been late to the party, but it can catch up fast. We recognize that not everyone is a data scientist, and not everyone yearns to perform NLP analysis. Good news: Not everyone needs to. The business leader who is also people-savvy can play the role of value architect. The proviso: they must build a fit-for-purpose team and challenge it to answer organizational questions that, if answered well, unlock significant value. The team would ideally include skillsets in data analysis, architecture and management, software development, and the human skills of consulting, collaborating, persuading, presenting, leading, complex problem-solving, and socio-emotional skills like empathy and kindness.

To win the future, HR leaders must develop a sense of what is now possible, stay current and structure organizationally strategic questions that cut to value. They must become familiar enough with NLP to hold meaningful discussions with external service providers and/or internal people analytics functions. Doing so will not only ensure HR's business relevance; it will help them lead their organizations into a future of accelerated success through people who thrive.

The authors wish to acknowledge the helpful comments on the DE&I portion of this article by Carol Watson, Chief Inclusion Officer at BCW Global, and Dr. Amy Gómez, SVP, Diversity Strategy, Klick Health. The perspectives and opinions in this article do not necessarily represent the views of Accenture or Columbia University.


Lili Duan, Ph.D., is a Managing Director (Partner) at Accenture with global responsibility for Organizational Science within the Organizational Analytics practice. 

Michael N. Bazigos, Ph.D., is a Managing Director (Partner) at Accenture with global responsibility for the Organizational Analytics practice and has taught in the graduate program in Social-Organizational Psychology at Teachers College, Columbia University. He can be reached at mbazigos@gmail.com.

References

Bazigos, M. N. (2019). Putting trust to work. Keynote address at the Annual CHRO Summit of HRPS, Las Vegas, NV (June). Retrieved on 12/2/2021 from https://www.slideshare.net/mbazigos/putting-trust-to-work-chro-summit-bazigos

Campion, E. D., & Campion, M. A. (2020). Using Computer-assisted Text Analysis (CATA) to Inform Employment Decisions: Approaches, Software, and Findings. In M. R. Buckley, J. E. Baur, & J. R. B. Halbesleben (Eds.), Research in Personnel and Human Resources Management (Vol. 38, pp. 285–325). Bingley, UK: Emerald Publishing Limited.

Clarabridge (2021). Clarabridge CX dictionary. Retrieved on 11/30/2021 from this URL: https://www.clarabridge.com/customer-experience-dictionary/social-listening

Pandey, S., & Pandey, S. K. (2019). Applying Natural Language Processing Capabilities in Computerized Textual Analysis to Measure Organizational Culture. Organizational Research Methods, 22(3), 765–797. https://doi.org/10.1177/1094428117745648