Martin Keen, IBM master inventor and host of IBM Technology’s YouTube channel on AI, recently appeared on SHRM's Tomorrowist podcast. A summary of that conversation appears below.
Executives understand the value that artificial intelligence brings to their organizations, but the technology’s steep learning curve and rapid evolution make the fear of falling behind real. Martin Keen, IBM master inventor and host of IBM Technology’s YouTube channel on AI, shares his expertise on the technology, its implications for executives, and why clarity in communication will be the defining characteristic of tomorrow’s economy.
“English is the new programming language,” Keen states, highlighting that clear, precise instructions are essential when interacting with large language models (LLMs). “The clearer we can be about what we want as the output, the better output we’ll get.” Precise communication is vital not just for technical professionals or employees who use AI in their jobs; it is also key for leaders.
Prompt Engineering: A Skill for the Present
The extent to which clear communication shapes AI’s output may surprise many. Prompt engineering, the ability to craft precise and effective prompts for AI, is a key skill in today’s AI landscape. “There are some terms that are quite popular in prompt engineering, like ‘let’s think step by step’ or ‘slow down and take a deep breath,’ ” Keen says. “These affect how the model works, and it will therefore go through some kind of pre-thinking before it will actually output the final response.”
Keen describes prompt engineering as a blend of engineering precision and writing clarity. Precise prompts significantly improve the quality of AI outputs by structuring the model’s reasoning process. This hybrid skill set underscores the need for diverse talents, not just traditional developers, on teams working with AI.
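To make this concrete, here is a minimal sketch, not from the conversation, of how a step-by-step instruction might be embedded in a prompt. It assumes the official openai Python package and an OpenAI-compatible chat endpoint; the model name and the figures in the prompt are purely illustrative.

```python
# Minimal prompt-engineering sketch: a vague request versus a precise,
# step-by-step request for the same task. Assumes the openai package
# and an OPENAI_API_KEY in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()

# Shown only for contrast: a vague version of the same request.
vague_prompt = "Summarize our Q3 numbers."

# A precise prompt: role, audience, output format, and an explicit
# "think step by step" instruction, with toy data inlined.
precise_prompt = (
    "You are a financial analyst. Summarize the Q3 figures below for a "
    "non-technical executive audience in three bullet points. "
    "Let's think step by step before writing the summary.\n\n"
    "Revenue: $4.2M (Q3), $3.8M (Q2). Churn: 2.1% (Q3), 2.9% (Q2)."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": precise_prompt}],
)
print(response.choices[0].message.content)
```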
Shadow AI and Governance Challenges
Many leaders will gravitate toward one of two extremes: an AI free-for-all or an outright ban on AI tools. Both extremes, however, lead to unfavorable outcomes. For example, Keen explains the concept of “shadow AI,” in which employees use AI tools without official sanction from their companies. This practice can create significant governance issues, particularly around data security. “If I am an employee of a company using, let’s say ChatGPT, and I’m not supposed to be, I could potentially be sharing documents that are confidential within the company, and now they’re part of the training dataset for the next version of ChatGPT,” Keen warns. To mitigate these risks, he advocates for clear governance models that regulate AI use within organizations and ensure data is handled appropriately and securely.
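As one hedged illustration of what such a governance model might include, the sketch below shows a simple pre-send check that blocks text carrying a confidentiality marker before it reaches any external AI service. The marker list and function names are hypothetical, not a recommendation from Keen or IBM.

```python
# Hypothetical guardrail sketch: screen outbound prompts for
# confidentiality markers before they reach an external AI service.
# The marker list and function names are illustrative only.
CONFIDENTIAL_MARKERS = ("confidential", "internal only", "do not distribute")

def is_safe_to_send(text: str) -> bool:
    """Return True only if no confidentiality marker appears in the text."""
    lowered = text.lower()
    return not any(marker in lowered for marker in CONFIDENTIAL_MARKERS)

def send_to_external_llm(prompt: str) -> str:
    if not is_safe_to_send(prompt):
        raise PermissionError("Blocked: prompt appears to contain confidential material.")
    # ... call the sanctioned, externally hosted model here ...
    return "model response"

print(is_safe_to_send("Draft a blog post about our public roadmap."))  # True
print(is_safe_to_send("INTERNAL ONLY: merger terms attached."))        # False
```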
Leaders should take an active role in structuring AI processes to align with organizational priorities. For example, if data security is critical to operations, you may choose to run an AI application entirely within your organization’s digital infrastructure. Keen differentiates between “frontier models” and smaller, on-premises (or “on-prem”) models. Frontier models, such as ChatGPT and Google Gemini, are powerful but require significant computational resources and energy. In contrast, on-prem models can be more efficient and can be controlled directly by organizations.
Opting for an on-prem model may provide the balance between capability and security that your organization requires. Keen suggests that businesses may increasingly rely on these smaller, in-house models for critical tasks, while using frontier models for broader, less-sensitive applications.
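As a rough sketch of what running a model on your own hardware can look like, the example below uses the Hugging Face transformers library to generate text with a small open model. The specific model name is an assumption; any small instruction-tuned model would serve, and after the initial download no data leaves the machine at inference time.

```python
# Minimal sketch of running a small model locally with Hugging Face
# transformers. The first run downloads the weights; inference itself
# happens entirely on local hardware. Model name is illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # illustrative small model
)

prompt = "Summarize our data-retention policy in two sentences."
result = generator(prompt, max_new_tokens=120)
print(result[0]["generated_text"])
```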
AI Concerns: Sustainability, Hallucinations, and Disclosure
AI will significantly transform how businesses operate, but ballooning energy consumption, hallucinations, and disclosure questions are hurdles to watch.
Echoing concerns raised about Bitcoin mining, Keen acknowledges that frontier models will continue to demand substantial energy, though advances in AI chip technology may offset some of that demand. “There’s the scaling up to build bigger, more efficient, more powerful models that have more capability but are much less efficient in terms of power and CPU usage,” Keen explains. But there are more efficient approaches, too. “At the other end, there’s the idea of building these models so that they are small enough to run on your laptop or on your phone.”
Hallucinations, in which models generate incorrect or nonsensical outputs, are also a concern. “They’re probably never going away,” at least in the near future, Keen states. These errors are inherent to the architecture of LLMs, but technologies like Retrieval Augmented Generation (RAG) can reduce them. Clear prompting is also helpful. “If you don’t tell an LLM that you want the real answer, the truth, how does it know that this isn’t a role-play or a fictional novel or something like that?” Keen argues. “We always need to set the context, and we can use things like custom instructions to really define how the AI should go about looking up its information.”
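The sketch below illustrates the core RAG idea in deliberately simplified form: retrieve relevant documents, then instruct the model to answer only from that retrieved context. The keyword-overlap retriever and the document store are toy stand-ins for the vector search a production system would use.

```python
# Toy sketch of Retrieval Augmented Generation (RAG): ground the model's
# answer in retrieved documents rather than its parametric memory.
DOCUMENTS = [
    "Expense reports must be filed within 30 days of travel.",
    "Remote employees may claim a home-office stipend once per year.",
    "All vendor contracts require legal review before signature.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Assemble a grounded prompt that sets the context explicitly."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# The resulting prompt would be sent to any LLM of your choice.
print(build_prompt("When are expense reports due?"))
```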
Lastly, there are ethical considerations around disclosing AI use. In a world where AI touches everything, what does meaningful disclosure look like? Keen suggests that disclosure requirements will depend on the extent to which AI contributes to the final output. “We need to think about not just, ‘Is it AI generated or not?’ but what role did the AI play?” Keen says, advocating for a nuanced approach to transparency. “What role did the human play in the ultimate output?”