Each week, as SHRM's executive in residence for AI+HI, I scour the media landscape to bring you expert summaries of the biggest artificial intelligence headlines — and what they mean for you and your business.
This week shows both the promise and the pitfalls of AI at work. Leaders describe a future of entirely new industries, experiments prove AI can boost human skills, and policy experts argue for a workforce strategy that matches the scale of innovation. Yet, beneath the optimism, surveys and studies reveal adoption without trust, training, or discipline — ending in wasted time and eroded productivity.
1. Ben Horowitz on AI, Culture, and Succeeding Through the Next Wave of Innovation
What to Know:
Andreessen Horowitz co-founder Ben Horowitz said AI's true disruption will come from industries and jobs no one predicts. He likened today to the invention of the spreadsheet, which unexpectedly enabled private equity. Early signs are visible in Hollywood, where directors use AI models to cut costs and writers use them to refine dialogue. These shifts expand, rather than replace, creative work. Horowitz stressed that leaders should expect new sectors and occupations beyond current imagination.
Why It Matters:
AI's main impact may be new job categories, not efficiency in old ones. Leaders must shape cultures prepared for industries that don't yet exist.
That vision of unexpected industries depends on today's workers building new skills — and AI is starting to show how it can help.
2. How Generative AI Could Transform Learning and Development
What to Know:
A BCG Henderson Institute experiment compared generative AI (GenAI) tutors with classroom training for mid-career professionals. Both produced similar learning gains, but AI tutors offered 32% better personalization, 17% better feedback relevance, and 23% faster completion. Nearly half of learners engaged in reflective dialogue with AI, and 53% preferred it to classroom formats for on-demand practice. Learners with lower starting skills saw 32% higher gains than their classroom peers. The authors see applications at scale in frontline coaching, culture change, and building AI competence.
Why It Matters:
GenAI tutors can deliver personalized, scalable human-skills training that current programs fail to provide. Evidence suggests AI can help close gaps in problem framing, collaboration, and creativity, turning machines into enablers of human capability.
But scaling those gains requires more than pilots inside companies. At a national level, workforce development itself needs a reset.
3. The US’s Missing Productivity Strategy: An R&D Approach to Workforce Development
What to Know:
U.S. workforce development is still treated as anti-poverty policy rather than as a productivity strategy, leaving programs underfunded, fragmented, and misaligned with technology sectors. Federal spending on workforce services has dropped two-thirds since 1979 and sits at less than 0.1% of GDP, far below peer nations. Rachel Lipson, co-founder and Scholar in Residence at the Project on Workforce and author of the Aspen Institute paper, argued for reframing workforce development as research and development (R&D): prioritize sectors critical to competitiveness, fund proven models while supporting experimentation, and focus on long-term returns. Lipson outlined three categories of roles to target: frontier jobs created by new technology, retooled jobs requiring upskilling, and legacy jobs facing mass retirements.
Why It Matters:
Without a workforce strategy, the U.S. risks bottlenecks in AI, semiconductors, energy, and defense. Treating human capital like R&D could expand opportunity, align with industrial policy, and convert innovation into sustained productivity growth.
While strategy debates play out, AI adoption on the ground is surging — and exposing cracks in how work actually gets done.
4. Google Says 90% of Tech Workers Are Now Using AI at Work
What to Know:
A Google DORA survey of 5,000 tech professionals found 90% use AI on the job, up 14 points from last year. Most rely on it for writing and debugging code, but trust remains limited: 46% said they "somewhat" trust AI output, 23% trust it "a little," and only 20% "a lot." About 31% said AI slightly improved code quality, while 30% saw no impact. Entry-level workers are struggling — software engineering job listings fell 71% since early 2022, and unemployment for new computer science grads now exceeds that of art history majors. Google embeds AI across teams, but even advocates admit much adoption comes from hype.
Why It Matters:
AI has become unavoidable in tech, yet its benefits are uneven and its risks are highest for newcomers. The pipeline of junior roles is collapsing just as AI tools spread through core workflows.
The spread of AI tools without clear trust or career pathways feeds a broader governance gap inside companies.
5. Employees Left Behind in Workplace AI Boom, New WalkMe Survey Finds
What to Know:
WalkMe's 2025 survey of 1,000 U.S. employees found that 78% use unapproved AI tools, creating security and compliance risks. While 80% said AI boosts productivity, nearly 60% admitted it often takes longer to figure out than to do the task manually. Half reported conflicting guidance on AI use, and only 7.5% have received extensive training — 23% received no training at all.
Cultural stigma compounds the problem: 45% of workers pretended to know how to use AI, and 49% denied using it to avoid judgment, with Gen Z (those born between 1997 and 2012) most likely to hide or fake usage. WalkMe estimated companies lost $104 million on average in 2024 from underused tools and poor rollout.
Why It Matters:
Shadow AI is now the norm, but without training and governance, it delivers risk instead of return. Companies that close the enablement gap will capture value; those that don't will keep losing time, money, and trust.
And when adoption runs ahead of enablement, the result is not productivity gains but busy work disguised as progress (enter "workslop," my favorite new word).
6. AI-Generated 'Workslop' Is Destroying Productivity
What to Know:
BetterUp Labs and the Stanford Social Media Lab identify workslop — AI-generated content that looks polished but lacks value — as a growing drag on productivity. In a survey of 1,150 U.S. employees, 40% reported receiving workslop in the past month, and it averaged 15% of all content received.
Each instance takes almost two hours to fix, equating to $9 million in annual losses for a 10,000-person firm. Beyond time, it erodes trust: 42% saw senders as less trustworthy and 37% as less intelligent. Indiscriminate AI mandates amplify the problem, while high-agency "pilots" use AI purposefully and "passengers" lean on it to avoid work.
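The scale of that $9 million estimate is easy to sanity-check with a back-of-envelope calculation. The sketch below is illustrative only: the incident rate and hourly labor cost are my assumptions, not inputs from the BetterUp/Stanford study; the headcount, 40% share, and two-hour fix time come from the figures above.

```python
# Back-of-envelope estimate of annual workslop cleanup cost.
# EMPLOYEES, SHARE_RECEIVING, and HOURS_PER_FIX come from the article;
# INCIDENTS_PER_MONTH and HOURLY_RATE are assumed for illustration.
EMPLOYEES = 10_000          # firm size used in the article's example
SHARE_RECEIVING = 0.40      # 40% of employees reported receiving workslop
INCIDENTS_PER_MONTH = 1     # ASSUMED: one workslop item per affected employee per month
HOURS_PER_FIX = 2           # "almost two hours" of rework per incident
HOURLY_RATE = 90            # ASSUMED: fully loaded hourly labor cost in USD

annual_cost = (EMPLOYEES * SHARE_RECEIVING * INCIDENTS_PER_MONTH * 12
               * HOURS_PER_FIX * HOURLY_RATE)
print(f"${annual_cost:,.0f} per year")  # prints "$8,640,000 per year"
```

Under these rough assumptions the estimate lands near the article's $9 million figure; the point is the structure of the cost, not the exact number — headcount times incidence times rework hours times labor cost.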
Why It Matters:
Workslop shifts effort downstream and undermines collaboration. Without guardrails, AI adoption risks creating busy work instead of value. Leaders must set norms for intentional use.