Why do 80% of AI initiatives fail? It’s not the tech — it’s the organization. Melissa Reeve, founder of HyperadaptiveSolutions, shares how businesses can avoid common pitfalls, leverage middle managers, and embrace systemic change.
At The AI + HI Project 2026, you won't just hear about AI, you'll use it. From hands-on demonstrations to peer-driven innovation labs, every part of your experience is infused with AI to elevate your learning, your network, and your impact.
Master the intersection of artificial intelligence and human intelligence to lead innovation and equip yourself with practical, ethical, and strategic tools to implement AI solutions with confidence.
The podcast is just the beginning. The weekly AI+HI Project newsletter features articles on AI trends that are redefining the future of work. Explore these must-read insights from the latest issue. Subscribe now to start turning AI+HI into maximum ROI.
SHRM's AI+HI executive in residence summarizes how AI is producing a widening gap between technical capability and human adoption.
Unlock better hiring for midlevel leaders with AI. Learn practical steps to improve outcomes, reduce mishires, and build a stronger leadership bench.
Simplify employee development planning with this AI prompt template. This guide offers examples and actionable insights for managing employees’ career growth.
HR is in the midst of an AI tech surge. Learn how to prioritize tools, manage change, and build resilient systems for the future.
Melissa Reeve is the creator of the Hyperadaptive Model and author of Hyperadaptive: Re-wiring the Enterprise to Become AI-Native. Hyperadaptive brings together process excellence, systems thinking, and the human side of AI integration to help leaders reimagine how their organizations learn and adapt. Before leaning into AI, Melissa spent 25 years as an executive and Agile thought leader, which led to pioneering work in Agile marketing and her role as the first VP of Marketing at Scaled Agile, which helps enterprises adopt Agile and Lean at scale. She lives in Boulder, CO, with her husband, dogs, and chickens, where she enjoys hiking and gardening.
This transcript has been generated by AI and may contain slight discrepancies from the audio or video recording.
Nichol: According to our guest today, more than 80% of AI initiatives fail, not because the tech is broken, but because organizations misdiagnose the real problem. Too many leaders chase new tools, missing the deeper issues in culture and structure that determine success. Joining me is someone leading the charge to reframe this conversation.
Melissa Reeve, author of the upcoming book, Hyperadaptive: Re-wiring the Enterprise to Become AI-Native. She believes that the real work of AI transformation begins with people, not software. By the episode's end, she'll reveal the most common mistakes holding businesses back, share real stories of companies that have cracked the code, and walk us through her hyper-adaptive model, a five-stage roadmap that places middle managers at the heart of change.
Melissa, welcome to The AI+HI Project.
Melissa: Nichol, it's such a pleasure to be here. Thank you for the opportunity.
Nichol: I'm so glad you're here because the timing is perfect. Enterprises are really understanding now that it is people. So I want to start by digging into one of the statistics that you shared with us, that 80% of AI initiatives fail. Where does that data come from? It's a pretty sobering number.
Melissa: It sure is. This is the RAND Corporation study, and it actually came out about a year ago, so I don't know what the statistic is today. Following that study, there was an MIT study from their NANDA organization that upped that number to 95%. So whatever the number is, we know it's a big number of these AI initiatives that are failing.
Nichol: Yeah, and the RAND data is really solid. It's not just on generative AI, but it's on prediction and machine learning. It's a pretty consistent number that they've been tracking over time, a large study size. So yeah, it's a real sobering number. Why do you believe that so many business leaders are getting the diagnosis wrong, like focusing on technology instead of their organization? What's happening there?
Melissa: Well, I think it's partially about what's in the news. When we look at the headlines, it's all about the tools. It's about what AI models have been released, what new shiny tools have been released, the impact. But when you think about AI, there's a lot of learning to digest. There's not only the tools but the impact of the tools, and we're all figuring this out together.
I like to think about when Microsoft Word was released. We did have to focus for a while on discovering what this new tool could do, and then we figured out how it changed our processes and our workflows. In that instance, the workflows that require the people and the culture change didn't change that much.
With AI, I think we're going to experience a radical rewiring of those workflows and the overall work experience.
Nichol: Your work moves the conversation way beyond "What tools should I buy?" and it's really rich with this conversation around workflows. For the executive feeling immense pressure right now to show AI progress, what is the single most important mind shift that they need to make today?
Melissa: It's really viewing AI as a systemic change. That means we're talking about the people, the process, how we communicate about things. We really need to be thinking about the entire organization. In fact, in the book, I talk about our current organizational structure as linear. By linear, I mean strategy to execution, all the handoffs and delays, concept to delivery, all of the handoffs and delays that happen there. We're going to be rewiring all of that into an organizational structure that is more suited for the age of AI.
I think the second thing that executives really need to think about is when they talk about jobs. I hear executives thinking and talking about them as monoliths—"jobs are going away." When I think of jobs, I think of them as a collection of tasks, processes, decisions, and human interactions. When we think about what AI will rewire, it will be pieces and parts of jobs. So we really need to be thinking about how do we deconstruct jobs and then reconstruct them in new ways.
We've got precedents here. We know how this looks through things like factory automation. That was a rewiring of jobs. In the technical sphere, there's something called DevOps, where we took the software delivery pipeline and we automated that. We know that not all the jobs went away, but they did radically rewire as automation took hold. And those are the two shifts that I think executives really need to start focusing on.
Nichol: I want to just restate that because I think the way that you captured it is really great. We have a thing that we call a "job," and that job is a collection of tasks, but we talk about the job as if it's monolithic. Where we're going right now, with the way that AI will change and needs to change workflows, is that we will be deconstructing those jobs into tasks and then rebuilding them back up. This is in order to suit a different type of workflow, one that is no longer linear, and maybe also a little less hierarchical, to be what it's going to transform into. Is that correct?
Melissa: That's correct. I mean, you think about everything encapsulated in what you just said and you realize, "Oh, I get it. It's more than just a tool."
Nichol: Yeah, it's one of the things that I often say to people: it's not a technological change. It isn't a Windows 95 upgrade. It is a social, cultural, behavioral change. It's people, processes, and ways of doing things. The organizations that don't understand that are wondering why they're not getting a big ROI bump from this very expensive change.
Because it's very expensive in terms of not only the licenses and the training but also getting the data in place and the infrastructure to even just be in the game. That's very expensive. In terms of people and focus, it's an expensive change. And if people don't redesign the jobs, if they don't redesign the workflows, then they don't see the bump and they wonder why.
Okay, well, alright, so let's dig a little deeper into what's holding organizations back. Many organizations assume that technology alone will solve their problems, but your research points to a deeper structural issue that needs to be addressed for AI initiatives to succeed. Could you walk through some of the structural mistakes that cause these AI projects to fail? Just one or two of the most critical organizational flaws that you see time and again.
Melissa: Yeah, if I had to think about one thing that organizations could do better, it is provide the support needed to effectively roll out AI, and that support is very complex. It involves a communications plan. It involves identifying your AI North Star. What is your AI philosophy? What are you trying to do with AI? Are there business goals that you are trying to achieve?
I like to point to Moderna. They set the business goal of releasing 15 new drugs in five years with the help of AI. Now, that is an impressive North Star. Typically, it takes 10 years to release one drug. That's supported by things that I've identified in the model, which we'll talk about more later. But they include AI champions—identifying your AI champions.
AI learning is social learning. Sure, there are static parts of AI learning. So the governance, how AI works generally, doesn't change, but the use cases are so immense and so complex. We typically learn from each other. So I know there are leading organizations out there, and Moderna is one, who have identified their champions.
Do you have a program to support those champions? Is it systemic? There are things like I call AI activation hubs that then support pairing local people with experts, and they also collect those best practices, the success patterns. Your dynamic AI councils, there's so much support that needs to be wrapped around this massive change that we're talking about. And I think you alluded to it in the beginning of the podcast where you said organizations are just starting to tune into this realization that, "Oh, AI is much bigger than the tools."
Nichol: Melissa, what you said, we also see in our research. One of the great things about being at SHRM is that we have a fantastic team of economists and researchers, and we are surveying workers and workforce monthly to see what they're seeing. We recently surveyed nearly 2,000 HR professionals on what they believe the risks to be when AI implementations fail. And more than half of them identified a moderate or high risk in three areas: organizational operations, reputation, and competitiveness.
So what would you say are the hidden costs to a company when a major AI initiative doesn't deliver on its promise?
Melissa: I think you're absolutely right in identifying things like organizational operations, reputation, and competitiveness. I think that this moment is much like the moment of digital disruption, where we saw the Ubers and the Airbnbs really disrupt the giants, and I think that there will be giants that fall because of their inability to activate AI within their organization. The goal is for them to start to integrate more of the AI into their organizations. I see that there's another hidden cost, which is what I call "random acts of AI."
Nichol: That's great. That's great. Okay. Keep going. I'm just thinking about all of the random acts of AI. That's fantastic. Okay. Sorry. Keep going.
Melissa: And I have so much empathy for the leadership right now because there's pressure from the board saying, "Go do something with AI." And then the leadership often just passes that along and says, "Go do something with AI." And so everybody's just trying to go do something with AI. But it's these random acts of AI that are really costing organizations because they're not seeing the ROI and they're left scratching their heads saying, "I know this AI thing is supposed to 10x my productivity."
In fact, I just came back from a conference where somebody from Silicon Valley said the top AI-native organizations are generating 23 times the revenue per employee of a traditional organization. So the pressure is immense, but we need to get much more organized and much more structured around our AI efforts.
Nichol: Absolutely. My metaphor for that is without the structure, it's like putting lard on a toddler. It's just slippery and it just sort of goes off and you can't get ahold of it again. It's really hard. I want to circle back to something that you said earlier too. One of the class acts in terms of just execute, execute, execute is that Moderna example. It's amazing the way that Tracy Franklin and her organization are taking AI, moving it across the organization, with that North Star of, "We are going to do this."
Pre-pandemic, Moderna was not yet a giant like J&J, but they had already brought AI into their drug development pipeline, which is how they were able to bring vaccines to market at a speed no one believed was possible.
And then, with that sort of DNA around their product, they said, "Well, let's pull this into the organization." So they began their journey with that same level of belief and conviction and clarity. That North Star is so clear, and I love the way that you characterize that.
Melissa: Yeah, thanks for that. And I agree with you, Moderna is a shining example. A couple of other ways they supported their organization is they had a prompting contest and they identified their top 100 champions. Concurrently with that, they developed an online community, and that community had over 2,000 active users every week. You think about the social learning that's going on between those two efforts and that just radiates out into the organization.
Nichol: Absolutely. And one of the other things is that the organization watches to see what leadership does with people who use AI. We had an entire episode around what we call "AI penalties" and the penalties that are happening in organizations. If someone improves a workflow and reduces their current workload, and they're then just given more of the exact same work instead of having that capacity redirected into something that drives the business, improves the way customers are served, or really grows the role, the organization watches and sees that.
Or if someone transforms their workflow and is then let go, no one else will transform their workflow, because they know what that means. You really hit on a very important thing that I think is a big blind spot that organizations sometimes have. Organizations are collections of human beings, and they are watching what happens. So being able to access that peer-to-peer leverage through the AI champions you mentioned is really what makes these transformations not only work but go better.
Melissa: That's right.
Nichol: Yeah. Did you know that SHRM offers a hands-on event to further your AI learning? The AI+HI project is not just a podcast, it is not just a newsletter, and the event isn't just another conference. SHRM is invested in ensuring HR leaders and business leaders understand where AI is headed and how it can be leveraged right now and into the future for their organizations.
The ecosystem that is our AI+HI project offering is built to deliver the best and latest info on the intersection of AI and human intelligence. If you're watching on YouTube, just click the link above to register or find the link in this episode's description to join us in San Francisco, March 9th through 10th. Okay.
So that brings us to the heart of making change happen: real-world stories. Let's shift to some examples of businesses that have faced these challenges head-on and what can we learn from their journeys. Can you share some additional success stories that you've seen in companies navigating this transformation and what did they do differently at the organizational level that set them up for success?
Melissa: Sure, and the book is really just a collection of these stories. That's what I aimed to do. I spent 18 months researching and surfacing these success patterns in order to inform the model. One company I'd like to highlight is Unilever, and I love this example because Unilever is a soap company, and we think, "What is a soap company doing with AI?" And what they're doing with AI, I'm almost getting goosebumps because they're integrating it at so many levels.
One example is their supply chain in Mexico. They've worked with Walmart in Mexico to inject AI throughout the supply chain so that they actually know when somebody picks up a shampoo bottle off the shelf in Mexico and that UPC code is scanned at checkout. That reverberates through their entire supply chain so that they know that that particular Walmart is down one bottle of shampoo. That level of granularity, combined with AI informing everything like weather so that they can restock if there's a streak of hot, sunny weather and they know sunscreen is going to sell out, is phenomenal.
They're using it in digital twins, so they will actually use AI to create the new formulas in computers, like in their AI digital twin, before they actually create it in the lab. To me, that just blows my mind. Because they're able to do that, they also create digital twins of... they did this in Thailand. They created a digital twin of a shampoo. They actually marketed the digital twin to see how it would perform before they even went into production. You think about how that's transforming the entire business, and it's really quite incredible.
They put their people first, so they just wrapped up a program that they called "Future Fit." What "Future Fit" said is that they've hired exceptional people and they understand that AI is going to dramatically change their roles, and so they are going to invest in their training and their upskilling so that they can be future-fit for what's to come. I just think that that people, human-centric mindset is so powerful in terms of giving people security to activate AI, that it's something that almost any company should take a look at.
Nichol: Mm-hmm. Mm-hmm. Absolutely. Wow. Well, I really want to get into the model now. I'm so excited to hear more. So transitioning from stories to strategies, we know it's not enough to have isolated success. Companies need a framework to drive systemic, repeatable change. Now to turn our attention to the solutions, your hyper-adaptive model promises a new playbook for leaders, one that addresses recurring pitfalls. How does it fundamentally differ from top-down change management, especially in how it activates middle managers?
Melissa: Sure. So Harvard research has shown that top-down change is often ineffective because it doesn't recognize the reality of frontline workers. And bottom-up change often fails because it doesn't have the authority, the budget, or the influence to change the entire organization. What their research has now surfaced is something that they call "middle-out." The middle-out transformation empowers the middle management layer and says that these are the individuals that really hold the keys to change.
They're close enough to strategy to understand the North Star, like "let's release the 15 drugs in five years," and they're aware enough of the operational reality on the ground that they can help massage that, accommodate the operational realities, and adjust it accordingly. The research says—and I forget the numbers—that it's much more effective than either top-down or bottom-up.
Nichol: That is really interesting also because that middle group is also the group that is among the most worried about their jobs. It's really interesting to hear that they're essential for the change and then also knowing that they're the most worried as well, because they're in the middle. It's also one of the things that's true for a lot of HR leaders as well. You know, they are in the rooms required for these transformations. You can't actually get it done without HR, which people are starting to figure out. And many historical HR tasks are really good for AI.
Could you speak to that tension between them being essential—these middle managers from the Harvard research—and them being also potentially in the crosshairs?
Melissa: Yeah, so I have a phrase in the book: we've created a level of professional nagging. When you think about middle management, I call it the professional nagging system, and AI can do all of that. What we need more of is alignment. It takes us weeks to figure out what exactly we're going to do. We need more scenario building. What is the impact of what we're trying to do? We need more people to support each other's growth. These are all things that our middle managers can help to rally around.
I also think that some middle managers will end up... As AI starts to do pieces and parts of jobs, people will shift from doing the task to building, monitoring, and maintaining the AI that does the task. If you think that's a one-and-done job, you're sorely mistaken. These models are not standing still. You build an automation, it breaks, the capability changes, you have to rebuild the automation. Some middle managers will move in that direction.
Some will move into more of an alignment direction. Some will move more into modeling because when we think about how we make decisions today, we shortcut the process all over the place and it's so opinion-based. Now the world has opened up and we have so much more information we can use to make effective decisions. So I really see those roles reinventing along with other roles in the organization.
Nichol: Wow. Okay. Well, I'd like to... I'm really curious about the five stages in the hyper-adaptive model and all of the pieces of them. Could you articulate for our audience each stage? What's its primary focus and what's its purpose to move the organization closer to being AI-native?
Melissa: Sure. So the premise of the book is that we've got to move large organizations iteratively and incrementally towards this AI-native state. It's not going to happen overnight. So we start with building a foundation, which starts to look like your AI councils and doing your pilots. A lot of companies are already in this foundation-building state. I do want to highlight that with AI councils, we need to inject what I call dynamic governance. I don't think we have time to go into it here, but I've got a whole newsletter around dynamic governance, so start thinking about governance differently.
And then as you move from building your foundation, you have to start injecting AI into your workflows. And we start to take a look at that. I believe they need to be fluent in inventing a workflow, taking a look at a workflow, reinventing it. This is going to happen over and over and over.
As AI gets more and more capable, we start to support our AI leads with the aforementioned AI activation hubs. This is like a center of excellence that houses best practices, but you have to think of it as fractal in nature, something that turns into a network of AI activation hubs throughout the organization, because AI in finance is going to look potentially very different than AI in marketing or AI in engineering.
And we need to support that social learning throughout the organization. And there's something in here that I call the AI learning flywheel. AI is too dynamic to develop a static curriculum. I mean, there are pieces and parts of it that we talked about that you can develop a static curriculum around how it works and the guardrails, things like that.
But the use cases seem to be changing on a weekly, if not monthly basis. And so it's that social learning where I described you want to atomize the learning content, then deliver it through this network you've created to the places it matters most. Oh, look what's happened with AI video. Let's send that over into marketing and maybe HR because they're creating a lot of videos too. Oh, Claude Code just got a major upgrade. Let's send that through the network to the development team. And we create this AI learning flywheel that's self-sustaining and that can respond to the changes in near real-time.
So we move from injecting AI into our tasks into, this is the third stage, AI automation. And AI automation is where jobs start to radically change, right? Entire parts of jobs might start to become automated. We spin up what I call the AI impact hub. And this is a small group of people that's really looking at how jobs are changing.
We talked about the rewiring of roles and the impact on people and how new jobs will be created. The AI impact hub, again, a network of hubs in the organization, starts to look at that and say, what is the budget we need? What is the upskilling we need? What are these new roles? That is a major turning point.
We do that at a small scale before we go into stage four, which is scaling those automations throughout the organization. Now that we have a sense of the impact at a smaller scale, let's roll it out at a bigger scale. In stages three and four, we're also starting to reorganize around value streams rather than functional areas. We go from I-shaped people into very, very broad, even sideways-I people, right? It might be T-shaped, but it might also just be very, very broad people who can do multiple things well. And so we start to organize around our value streams. And then in stage five, this AI-native stage, we have orchestrated value streams.
So we've got some people still in the hierarchy, which are your, like, deep, deep specialists in marketing strategy and legal strategy. But we have a lot of people who are in these value streams, and that's where they live.
Nichol: Great. And with that, what do you think is the... out of those five steps, which is the one that people get wrong the most?
Melissa: Well, I would say I don't know if we know yet. In the book, I say the first three stages are pretty well documented. And the stages four and five are the emerging future. So the case studies are less clear on those patterns, but I think in the first three stages, the part people are missing is what we discussed earlier, which is that lack of support that we need in order to get from where we are today into a more fully automated future.
Nichol: Mm-hmm. And I think one of the things that's happening is that a lot of established organizations and enterprises, they look at those pure-built AI-native organizations and they want to be there right now. And they can't.
Melissa: Mm-hmm.
Nichol: They can't go directly to it. They have to go through this process. Because the pure AI-native companies, and I live in Silicon Valley, so I see a lot of it, they're built from there. So they don't have legacy. And I think one of the things that people misunderstand is legacy isn't just your infrastructure. Legacy includes your current customers. You'll see organizations take an entire division out and then have to rehire it because they couldn't serve their customers without it. And so I think it's important, and I love your work because it helps people understand: okay, there's a process, you can get there, but you have to go on this journey.
So, okay, so last question. If a CEO gives an executive 90 days, we like to be very practical in our last question, to turn around their AI strategy. So, you know, someone in our audience, they get the mandate, 90 days. What should be their primary focus and how should they be measuring their progress?
Melissa: It's a great question, and I think it starts with understanding this need for dynamic learning and that AI is really applied learning. I would advise a leader at whatever level to take one workflow. I've got a flywheel: spark, spread, scale, and sustain. Most organizations are really at that spark stage. And in order to spark meaningful AI in organizations, I would like to see almost everybody in the organization take a really painful workflow. If you haven't identified your AI champions, start to identify them.
They're out there. Have them identify one workflow that AI might help to accelerate. It might improve the quality. It might help them save time or money. Dissect that workflow and figure out how to inject AI at strategic points. And the key metric is not tool output; measure the impact on the business.
And I think by doing that, you're going to get that spark. You're going to get that flywheel starting to move. And in 90 days, if you can align it to your AI North Star (so I guess that's the other piece, right? Identify your AI North Star if you haven't already), you're going to see measurable movement that you can point to.
Nichol: Great. Great. Well, this has been so informative and so helpful, and I think the timing of your book is wonderful because people are right at this understanding that the way they approached it over the last two years might not be working. And so I think the world is ready for this now.
So that's it for this week's episode. A big thank you to Melissa for sharing your experiences and insights with us. And for everyone else, thank you so much for joining the conversation and we'll catch you next time.