As part of my ongoing research into the determinants of success and failure in artificial intelligence implementations, one consistent finding has emerged: AI is as much a social, cultural, and behavioral shift as it is a technological one. This raises two important questions: What does it mean to be human in the age of AI, and how can organizations leverage this understanding?
Lee Rainie leads the Imagining the Digital Future Center at Elon University in North Carolina, which has been studying AI-related social issues for over a decade. The center’s work has focused on understanding how emerging technologies impact human capabilities and social structures, with particular attention to workplace implications.
The center recently conducted research identifying 12 essential human traits and examined how AI might enhance or diminish these traits over the next decade. The study surveyed experts about how each trait might be affected, creating a framework for understanding AI’s impact on our core human capacities.
Key findings revealed that not all human traits are equally vulnerable to AI’s influence. While some traits such as creativity, curiosity, and certain aspects of decision-making showed potential for enhancement, others including social and emotional intelligence, sense of agency, and trust in shared values may face significant challenges. This complex picture suggests organizations need nuanced strategies for AI integration that strengthen rather than diminish human potential.
Curious to further understand these findings and their implications for organizations, I reached out to Rainie for his insights.
My Q&A with Lee Rainie:
Why ask what it means to be human in the age of AI, and what makes that the right focus?
LR: We’ve been studying AI-related social issues for about 10 years now. When we first did work about AI and jobs a decade ago, the standard answer from smart people anticipating the future was that “soft skills,” the special human skills, would survive the onslaught. These included empathy, social intelligence, complex reasoning, and decision-making. But nobody had ever really tested that directly.
So, we picked 12 traits that seem especially human:
- Social and emotional intelligence.
- The capacity to think deeply about complex concepts.
- Trust in widely shared values.
- Confidence in our native abilities.
- Empathy and moral judgment.
- Mental well-being.
- Sense of agency.
- Identity and purpose.
- Metacognition (thinking about thinking).
- Curiosity and the capacity to learn.
- Decision-making and problem-solving.
- Innovative thinking and creativity.
What stood out most from the experts’ predictions about these traits?
LR: Experts sort of confounded my expectations. I thought creativity was going to be on the table as something that might be negatively affected by AI. But creativity showed up as one of the things experts were most positive about.
Metacognition — the process of thinking about one’s own thinking and recognizing blind spots in one’s knowledge — was harder to predict. Experts were slightly more inclined to say it would be negatively impacted rather than positively impacted.
The traits most negatively affected fell into a bucket of internal human calculations. Experts were not confident about social and emotional intelligence (our ability to understand others’ emotions, respond appropriately, and navigate social interactions effectively). They feared people would become demoralized as AI becomes more intelligent, shows more human-like traits, and is used more in workplaces and social settings.
Agency — our sense that we can act in the world and make meaningful choices — was another concern. Experts were more negative than positive about the impact of AI on people’s sense of agency. They feared humans would just defer to increasingly smart AI systems.
For the traits that might decline, what risks does this raise for how we lead, work, and connect?
LR: The best learning — the most comprehensive and lasting learning — comes from the mess, from the muddiness of it, from the struggle of it, from the trial-and-error dimensions of it.
I think HR professionals especially would be wise to encourage studies about how to teach well and how to inculcate these human capacities that, if they’re nurtured, are just beautiful and speak so nicely to human generosity and the human spirit.
Organizations should put “trap doors” in job interviews, in onboarding, and at later stages of employment, just to see how well people are doing with their learning mindset. Do they battle through things? Do they ask for help when they need it?
The things that make us better employees are the things that HR folks are going to want to structure in place. And sometimes that will mean not paving the way to a quick outcome, but making people stop and think, and making people pursue different pathways.
Some traits, such as curiosity, creativity, and certain aspects of decision-making, showed potential for growth. What’s behind these hopeful forecasts?
LR: For curiosity, I find myself exploring more subjects now, just because it’s not hard to get started. It got easier to do in the age of powerful search engines, and it’s orders of magnitude easier to do now.
With creativity, it’s more at the simpler level of just getting your juices flowing. AI makes that a lot easier. If you think about your own process of ideation, it’s messy at first; when something sparks in your head, you don’t even know what to ask. AI makes that process a lot easier.
Decision-making is this preeminent skill of leaders. In effect, these experts gave us an answer by highlighting creativity, curiosity, and decision-making as the capacities that might offset the damage to the nine traits that didn’t come out so well in our survey. The best kind of leadership, the most forward-looking and inspiring kind, might overcome some of the problems we surfaced in this survey.
How could a leader design an AI-integrated workplace that strengthens human capacities?
LR: Figuring out what you’ll do, and what you expect or want the tools to do, is sort of Job 1. In some cases, it’s going to differ even by sub-unit, depending on an organization’s subcultures.
A little window into this world is opening in academia now. There are lots of conversations almost hourly on Elon’s campus around assignments: Can I use AI here? How much can I use? What do I need to attribute? Where do I need to stop? What’s a legitimate use case, and what’s cheating? That same kind of sensibility is probably a wise thing to bring to these conversations in the workplace.
Everybody learns differently. Everybody responds differently to stimuli and to discouragement or struggles along the way. Being discerning about that falls under a large bucket called literacy — figuring out how you’re going to work with these things and how to master them, how to be ethical about them, how to attribute in the right way.
When in doubt, defaulting to the human is probably the wisest thing for purposes of control and for purposes of trust.
How does your framework for culture intersect with employees’ knowledge of how work actually gets done?
LR: The best way to implement change is to start with employees who are already excited about it. Let those people run a pilot and give them complete autonomy.
When there’s a problem or a roadblock or an unexpected hiccup, they’re going to be like, “Oh, let’s jump on this. How can we problem solve?” They’re all excited because they’re learning about their passion project.
Then, you let their results speak for themselves. You tell the story about their success to the other people in the organization. You let them tell the story. You tell the story for them, and people will look at the success they’ve had and say, “Wow, they were able to get that done. I want some of that.”
Now it’s a “pull” strategy of change management instead of a “push” strategy of change. Sometimes, the fastest way to change a big organization is to start really small and let the results speak for themselves.
One of the workplace trends for 2025 is CEOs deciding how far AI should go. Can you elaborate?
LR: As a CEO, you understand your company’s adaptability muscle. So, either you have a company that embraces change and is adaptable or you have one that’s resistant.
Figuring out the balance of how much to invest in AI, how quickly to invest, and how quickly the team will come along for the ride is a key decision. At one extreme, you have the longshoremen’s union, which is striking to make sure there’s no automation. At the other extreme is the startup whose founders want to be the future of AI innovation.
Where are you on that spectrum, and what will make the most sense from a strategic standpoint? One way to decide is to shift your strategy and then bring your culture along for the ride. The other is to choose a strategy that aligns with your culture. Both work! As the CEO, you can decide that a strategy is not going to work for this culture at this time. Sometimes, the right strategic choice is to pay attention to what your culture can absorb.
What advice would you give HR leaders who are just beginning their AI journey?
LR: Resisting is not an option. There will be problems to worry about and care to be exercised, but in some respects, the comforting message is: You don’t have to figure it out alone. HR leaders: If you work with your teams and use a sort of co-learning approach (“you teach me, I’ll teach you”), we’ll do our best to surface best practices. We’ll do our best to warn you of the dumb things that people do. This is an amazing growth opportunity for everybody.
It’s really a Cambrian moment with new forms of intelligence and learning being born. You can see that on campuses like mine. Faculty and students are experimenting all over the place and teaching each other what works with AI and what doesn’t. For instance, I know professors who are giving assignments to their students like, “You will get extra credit and a better grade if you teach me something about these tools.”
That co-learning mindset is a great way for HR professionals to approach this, too.