Lattice CEO Sarah Franklin recently unveiled a controversial feature for the company’s HR management platform that would allow organizations to create official employee records for AI “digital workers.” The plan proposed giving AI agents employee status, complete with onboarding, training, goals, performance evaluations, and assigned managers. This announcement was met with immediate criticism from HR professionals and tech industry leaders, forcing Lattice to quickly backpedal.
While many HR leaders expressed concerns about blurring the lines between human workers and machine learning algorithms, this story actually highlights a real problem that will need solving. It’s a case of addressing the “right” problem but with the “wrong” language, timing, and orientation.
The Coming AI Agent Revolution
AI agents are rapidly emerging, and businesses are unprepared for both the opportunities and challenges they bring. Today's agents are AI systems with generative AI interfaces that can take actions independently. On the horizon are multi-agent systems in which groups of AI agents collaborate to execute complex workflows.
With personal and work AIs potentially handling tasks from planning vacations to vetting and paying vendors, we’re looking at a future with billions of AI agents. We must have systems in place to confirm their identity, authorize their actions, and control and manage them.
This isn’t science fiction. Microsoft’s Copilot is an early agent, Salesforce and Workday recently announced a partnership to develop “a new AI employee service agent,” and Capgemini’s Chief Innovation Officer Pascal Brier expects to see multiple autonomous AI agent chains by next year. “They will transition from supportive tools to independent agents with full execution capability, able to understand, interpret, adapt, and act independently,” he said.
The Need for AI Agent Management
As AI agents proliferate, there's a genuine need for a comprehensive dashboard tool that includes official activity records; easy initiation and training capabilities; objective setting; safety and bias monitoring; and shutdown options. Companies also need the ability to assign each AI agent to a human manager whose work benefits from the agent's support and who is specifically accountable for its output. Without human oversight of AI agents, we lose the "human-in-the-loop" accountability needed to ensure that our values are upheld.
AI agents will also need digital identities to distinguish their actions from those of human users and to manage tasks securely and accurately. Okta, a leader in identity and access management, sees a significant business opportunity in providing identity to AI agents, ensuring secure, controlled interactions, such as making payments or booking services, while maintaining clear limits and accountability. Given that AI agents will become part of workflows, it makes sense for some combination of HR and IT to own a platform that allows AI agents to operate within and across a company. In fact, we will need this to ensure ethical and responsible AI implementation.
And if that level of oversight sounds like what one might exercise over an employee, you can see the logic beneath Lattice's announcement. The underlying problem it sought to address was directionally correct.
Where Lattice Went Wrong
While Lattice’s announcement correctly identified a coming and soon-to-be-universal need, its approach contained missteps in language, timing, and orientation.
- Language: Machines are not human. People are anxious about AI integration in the workplace. In a world where we sometimes struggle to treat people like people, suggesting we treat machines like people is insensitive to worker concerns. Research by Accenture showed that workers do not trust leaders to make AI decisions that work for them, so it’s no surprise that announcements that blur the line between the dignity of human work and the management of AI agents cause an uproar.
- Timing: Too early for buy-in. We're early in the adoption cycle, and neither leaders nor workers fully understand how agents will fit into workflows. The need to manage billions of agents with confirmable identities, updatable training and goals, and human oversight isn't yet widely recognized. Most workers haven't seen the benefits of augmentation through a suite of AI agents, so today it's difficult to imagine why we need tools to closely manage them.
- Orientation: AI integration over human augmentation. Lattice's announcement focused on the "what," not the "why." The purpose of agents should be to unlock the potential of employees by automating and augmenting tasks so that they are free to focus on creative and strategic work. But redefining work as a series of outcomes rather than a basket of tasks lags behind the technology itself, so it's too early to gauge whether the added freedom will deliver the job expansion that has historically accompanied new technology.
While most agent discussion centers on large enterprises, we shouldn’t forget the impact that agents will have on small to midsize businesses that leverage the technology to compete with large enterprises. These companies employ nearly half of the American workforce, and you can expect that they will adopt agents aggressively to grow their businesses.
Final Score
AI agents are coming and have the potential to revolutionize the workplace. Lattice has identified a real problem that must be taken seriously; however, agent integration must be approached with sensitivity to worker concerns and a clear focus on human augmentation.
Nichol Bradford serves as Executive-in-Residence for AI + HI at SHRM, shaping global thinking on human-AI collaboration.