AI’s impact on labor is now being tracked, debated, overstated, and operationalized. The gap between performance and perception is widening. So is the distance between those who build AI, those who deploy it, and those who own the outcomes.
1. Anthropic Launches Economic Futures Program to Track AI’s Labor Impact
What to Know:
Anthropic has announced the Economic Futures Program, a new initiative to fund research, develop policy, and expand measurement around AI’s economic effects. Built on Anthropic’s existing Economic Index, the program will provide rapid research grants of up to $50,000; host policy symposia in Washington, D.C., and Europe in fall 2025; and scale public datasets tracking AI’s impact on work. The program rests on three pillars: research grants, evidence-based policy forums, and expanded economic measurement.
Why It Matters:
AI is reshaping work faster than most systems can track. By investing early in longitudinal data and policy engagement, Anthropic helps define how we understand and respond to these shifts. The program frames AI not just as a technical transformation, but as an economic one that demands real-time, grounded analysis.
But measurement isn’t the same as mitigation.
2. Federal Reserve Chair Powell Says AI Is Coming for Your Job
What to Know:
Speaking to the Senate Banking Committee, U.S. Federal Reserve Chair Jerome Powell said AI hasn’t yet reshaped the economy — but it will. He called its potential “enormous” and predicted “significant changes” to work and productivity, though likely on a delayed timeline. While current studies show limited disruption, Powell noted signs of acceleration, including BT’s plan to cut up to 55,000 workers by 2035 and Salesforce’s report that AI now handles up to 50% of work in areas such as coding and support.
Why It Matters:
Powell was blunt: “We just have interest rates.” The Federal Reserve cannot address AI’s labor fallout on its own. As companies scale automation, broader policy action will be needed — and the shift is already underway.
And many of the tools being deployed still don’t work well.
3. AI Agents Get Office Tasks Wrong Around 70% of the Time
What to Know:
AI agents are struggling with real-world tasks. In a benchmark for evaluating AI agents on knowledge work, researchers at Carnegie Mellon found that even top agents such as Gemini 2.5 Pro succeeded on just 30% of tasks. Salesforce researchers reported even lower success rates for multi-turn interactions, and near-zero confidentiality awareness across all models. Gartner estimated that over 40% of agentic AI projects will be canceled by 2027 due to low return on investment and exaggerated marketing claims. In reality, many so-called “AI agents” are just rebranded robotic process automation or chatbots with little autonomous capability.
Why It Matters:
Agentic AI is still far from reliable — especially in complex workflows. While some models show promise in narrow use cases such as code generation, performance drops sharply in general office tasks. Despite the hype, most systems can’t yet deliver on their automation promises.
Still, some companies are betting big anyway.
4. AI Is Doing Up to 50% of the Work at Salesforce, CEO Marc Benioff Says
What to Know:
Salesforce CEO Marc Benioff said AI now handles 30% to 50% of the company’s workload. He called this shift a “digital labor revolution” and estimated the company’s AI tools are operating at about 93% accuracy. The workforce transformation includes layoffs — Salesforce cut over 1,000 jobs earlier this year — as the company restructures around AI. Other tech companies, including Amazon and Klarna, are also reducing headcount in part because of AI investments.
Why It Matters:
AI is no longer experimental at Salesforce — it’s foundational. With high-volume deployment and executive endorsement, it’s becoming a core part of the operating model. The scale of adoption sets a precedent other enterprise leaders will be under pressure to follow.
That’s why ownership models are starting to matter.
5. 5 Ways Cooperatives Can Shape the Future of AI
What to Know:
AI is largely controlled by a few tech giants, raising concerns about equity, privacy, and labor practices. A recent Harvard Business Review article proposes AI cooperatives — democratically governed, community-owned entities — as a viable alternative. The authors highlight five roles co-ops can play: democratizing data governance, linking research with civil society, advancing education, building alternative ownership models, and tailoring AI for cooperative goals. Examples include MIDATA’s citizen-led health platform and Transkribus, a member-run AI tool for document transcription.
Why It Matters:
Cooperatives lack scale and capital, but with public support, they offer a path to more inclusive, accountable AI. Governance and ownership structures — not just technology — will shape who AI benefits.