
How a Board of Directors Should Lead on GenAI

Directors Roundtable



Because generative artificial intelligence is hugely disrupting how work gets done, directors are being forced to make important decisions on the topic and to ask the right questions. Dawn Zier sat down with three experienced board members to discuss a director's essential and evolving role in GenAI.

PARTICIPANTS:

  • Saar Gillai, chairman of Liquid Instruments; director at Semtech
  • Anastassia Lauterbach, founder and CEO of AI Edutainment; director at Cyberion, RiskQ and Aira Technologies; co-founder of the Austrian and German chapters of Women Corporate Directors
  • Carlyn Taylor, chief growth officer and global co-leader of corporate finance at FTI Consulting; director at Flowserve and The Hain Celestial Group

MODERATOR:

  • Dawn Zier, former CEO of Nutrisystem; director at Hain Celestial Group, Prestige Consumer Healthcare and Acorns

 

DAWN ZIER: Carlyn, you were in Davos earlier this year, where there was a lot of talk around generative artificial intelligence (GenAI). You also oversee AI strategy at FTI Consulting. Can you share some of your key takeaways from Davos?

CARLYN TAYLOR: There’s a lot of discussion about whether GenAI will replace humans in the workplace and in what types of roles. Right now, GenAI is just math. It may seem like it’s talking to you, but it’s really just math and statistics. It doesn’t actually think the way we do; it doesn’t have judgment.

At Davos, there was debate as to whether we’re ever going to be able to create AI that has judgment. There were also conversations around the creation of artificial general intelligence (AGI) and superintelligence, the latter meaning that AI is smarter than humans. Some thought this might be 20 years away; others thought it would never happen.

Many of the AI discussions at Davos centered on regulation, with a desire not to overregulate. AI is going to solve many difficult problems for the world and mankind. It's going to be much more positive than negative. But safeguards need to be in place. Some AI experts said the EU is going to regulate AI to such an extent that it will kill off AI entrepreneurship there. In the U.S., neither political party seems inclined to overregulate it.

 

ZIER: Anastassia, you bring a European and academic perspective along with many years of industry expertise. What are your thoughts? Saar, please bring the Silicon Valley voice to the conversation.

ANASTASSIA LAUTERBACH: I am against the European AI Act for several reasons. It doesn't address the actual risks of AI technologies. Deep learning is mainstream in machine learning today. As long as the problem is clearly stated and all training parameters are clear, the machine will execute admirably and deliver reliable results.

However, unlike humans, machines don't understand what is in front of them. This is why we see a lot of "hallucinations" in ChatGPT and its cousins. There will also be issues with biases in existing datasets. For example, there is a gender gap in medical applications, as historically the health care industry developed therapies on cohorts of healthy young male subjects. Also, it's difficult to apply a one-size-fits-all approach to the human-in-the-loop problem. Every application is different. Only experts can decide what is contextually valid. Bureaucrats won't be capable of helping.

Midsized AI companies already spend 25 percent of their revenues on the cloud and 15 percent on data hygiene and pre-processing. Additional expenditures for regulatory compliance might put them out of business. The EU AI Act is now under review by the legislators, but it’s dangerous to have people regulating things they don’t understand. If implemented, the best companies will leave Europe and development will move abroad, leaving Europe competitively disadvantaged. Depending on the degree of regulation, Europeans might have a limited consumer experience if regulators start banning certain services coming into Europe. The Middle East, especially Saudi Arabia, is opening its doors and incentivizing companies to move there to develop AI. Currently, this is the one region worldwide without financial issues, so it offers fertile ground for those looking for investment dollars.

SAAR GILLAI: While I come from Silicon Valley, I also have a military intelligence background. So, although I support innovation 100 percent, I believe there needs to be balanced regulation.

We talk a lot about physical border control, but what about protecting our digital border? Before the web, I couldn't just bring anything I wanted into the country; everything went through passport control. TikTok has no passport control. China solves its border problem with the Great Firewall, but that's authoritarian. However, without digital passport control, we can lose our sovereignty. Balancing digital innovation and regulation, including GenAI, is a complex issue. There's no silver bullet.

Pivoting to superintelligence, I don’t think we’re close to that. GenAI is taking information and spitting stuff out based on pattern recognition. GenAI lacks wisdom, even though it may seem to emulate it. It’s a very good actor with an authoritative, confident swagger. But in reality, it’s just the next level of data processing—a faster horse.


ZIER: GenAI is a clear disruptor. One could argue it levels the playing field by expanding access to data and insights. Do companies need to be first movers?

GILLAI: We’re at the very early stages of this. I have published a pyramid framework that starts with data followed by information, knowledge and wisdom. The first stage of this revolution is about taking information and turning it into knowledge, in the same way that the first stage of the web was about taking stuff you had lying around and making it easy to access.

There's also a lot of free, quality GenAI material available online to help people at all levels understand and learn it. Additionally, many of the best tools are open source and often free. Most companies don't need to build their own language models. They don't need to spend a lot of money. The rewards will come to those who are curious. Watch what some of the cutting-edge companies and start-ups are doing. If you're not playing with AI and experimenting, you will lose a competitive edge.

TAYLOR: I agree with Saar. There is a wide range of maturity within large companies across the world, so don’t be concerned if you’re just getting started on thinking about how to respond to AI. However, the AI revolution is moving faster than the internet revolution. We are at an inflection point. Companies that don’t proactively engage will get left behind.

Disruption is already visible in the creative industries that produce electronic content. These industries will be revolutionized very quickly. There’s also a lot happening in the tech and health care fields and, to a lesser extent, finance, because of how regulated financial services are today. In health care, the most dramatic changes are happening in R&D for new drugs. In financial services, business leaders are taking a more measured approach due to the risks and potential biases around sensitive customer information they are responsible for protecting.

LAUTERBACH: Building off what Carlyn said, GenAI creates huge copyright issues. If GenAI is creating something in the arts or literary space based on the past works of authors, artists or musicians, is that really original content? We have to be careful about using synthetic data in the medical world to ensure we aren't amplifying false or misleading information, and we still need to solve the larger issue that the learning models behind GenAI are flawed due to their mathematical architecture.

I do not believe that GenAI levels the playing field. Those who master the infrastructure and those who master the curve game will win. There will be costs around regulation and compliance that could erode margins. It will be hard for smaller companies and start-ups to compete.

ZIER: I think the point about input bias is very important. I was just at a conference where GenAI was being demonstrated, and it was asked to create the image of a CEO, a terrorist and a housekeeper. The inherent biases based on historical stereotypes were visibly apparent. This kind of bias in data can also present itself in talent searches, as there have been instances of women and people of color not “showing up.” We need to be careful that we don’t take an unintended step back.


ZIER: How should companies and boards begin to think about GenAI?

TAYLOR: The first thing boards should do is make sure that their companies have a strong senior technical expert who understands what AI can do and can work with the business executives to think through best use cases. Second, the company should explore the general tools that are available from Microsoft, OpenAI, Google, etc. to see if any are worth deploying. They should also review open-source point solution tools, which are often more valuable—and free. Don’t invest too heavily or too quickly in one or two solutions, at least not yet. Third, get your data organized if it isn’t already. Data needs to be organized in a way that not only protects confidential information from getting outside your company but also has permissions around who has access internally. Finally, experiment and conduct proof-of-concept tests.

GILLAI: Companies want to bring in experts, but there are no real experts for most use cases. The experts are people with some technical sense who play with it, learn and understand. Technical people in your company should experiment and partner with cross-functional areas on thinking through use cases.

The safest place to start is with your company’s public data, as there’s no privacy risk. Leading-edge companies are putting all the information from their website into a language model and using it to answer product questions. It streamlines the customer service experience while simultaneously improving satisfaction.
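As a rough illustration of the pattern Gillai describes, the sketch below indexes a handful of invented public-website passages with a simple TF-IDF retriever and assembles a prompt for a language model. It is a minimal, hypothetical example under stated assumptions, not a description of any specific company's system; the passages and the answer_question() helper are made up, and the model call is stubbed out so the example runs on its own.

```python
# Minimal sketch: answer product questions from a company's public website
# content. Retrieval uses scikit-learn's TfidfVectorizer for simplicity; a
# production system would pass the retrieved passages to a language model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Assume these snippets were scraped from public pages (product specs, FAQs,
# support articles) -- no private or customer data involved.
PASSAGES = [
    "The X100 router supports Wi-Fi 6 and covers up to 2,500 square feet.",
    "Products can be returned within 30 days with the original receipt.",
    "Firmware updates are released quarterly and install automatically.",
]

vectorizer = TfidfVectorizer()
passage_matrix = vectorizer.fit_transform(PASSAGES)

def retrieve(question, top_k=2):
    """Return the website passages most relevant to the question."""
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, passage_matrix)[0]
    best = scores.argsort()[::-1][:top_k]
    return [PASSAGES[i] for i in best]

def answer_question(question):
    """Build a grounded prompt; a real system would send it to an LLM."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer the customer using only this public product information:\n"
        f"{context}\n\nQuestion: {question}"
    )
    # Stub: return the prompt so the sketch runs without a model or API key.
    return prompt

if __name__ == "__main__":
    print(answer_question("How long do I have to return a product?"))
```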

Content marketing and press releases are other great use cases. GenAI can do 80 percent of the work, then you need a smart person to finalize the content. GenAI can be an expert assistant, allowing individuals to level up and do more interesting work while improving productivity.

 

ZIER: As directors, how should we be thinking about oversight for AI?

LAUTERBACH: When it comes to oversight for AI, directors need to understand the cyber risk and have clear action plans as to what to do in case of an attack. They need to think through how AI pertains to the company’s business model and how it can drive profitability. Finally, they need to make sure that the data strategy mirrors the competitive strategy.

GILLAI: The board needs to make sure the right structure and processes are in place. Ask who owns it. It should be someone in IT. How are you protecting data? What policies are in place? What contracts are in place, and how are third parties being vetted?

The board will need to provide oversight that balances embracing AI with the right structure and guardrails. Cyberattacks will get more sophisticated with the use of AI, so you will want to revisit corporate controls and make sure that you have double triggers for approvals and other things.

TAYLOR: Boards need to assess whether their companies need to be pushed to get started or slowed down to avoid risks, which depends on the maturity and speed of the company. While the opportunities are exciting, boards should provide a balance to the AI enthusiasm and make sure they are complying with regulations to avoid unintended consequences.

It's important for leaders in regulated industries, or in industries that hold a lot of sensitive customer data, to have guardrails in place and experienced individuals monitoring for bias and other risks. Here are some questions boards should be asking:

  • What are the use cases that the management team has thought through?
  • Who on the management team is accountable for the AI strategy?
  • How can we drive productivity?
  • What guidelines and governance are needed around the use of AI?
  • How is data being protected?
  • What permissions are in place?
  • What data is being used to train the AI, and how diverse and representative is it?
  • How do we attract the next generation of AI talent?

 

ZIER: How should directors be using GenAI in the boardroom?

TAYLOR: Boards should not replace their own judgment and thinking, based on years of real-world experience, with the output of a new tool. AI is just one of a variety of methods boards can use to draw inspiration for finding the right questions. I use ChatGPT to give me a quick summary of topics I don't know much about, but it's not good at answering questions about a specific situation. It's important to remember that it doesn't always answer factual questions correctly, so refrain from asking it for specific or recent facts.

GILLAI: One of the challenges for board directors is to make sense of information they don't deal with daily and to recall information across multiple meetings spaced months apart. GenAI could be a good tool for sorting through this.

One of the board portals could come out with an overlay that allows you to ask questions such as “What was said on this topic in earlier meetings?” and “How have the numbers changed?” Over time, GenAI should enable the board to ask smarter questions and make more informed decisions, because the information will be at their fingertips and presented in a more digestible way.

With more information readily available, we will need to maintain good governance and remind ourselves of the lines between the board and management. Importantly, GenAI does not give you wisdom or replace director judgment.

LAUTERBACH: I was involved in a huge crisis on one of my boards, and what I noticed is that directors love to outsource the thinking to lawyers or consultants. It would be easy to view GenAI as a tool you could outsource thinking to, but I strongly caution against doing that because GenAI lacks basic understanding. It's OK to use it as a tool, but recognize that it is flawed, can be factually incorrect and has inherent bias. It's a long way from replacing our human expertise and instincts.

One benefit coming out of machine learning is that directors can have a real-time view into what is happening with the company, especially in terms of financials and trends. Dashboards will become more robust if the company invests in the data processing engine and real-time application environment.


ZIER: Look into your crystal ball and tell us what the impact of GenAI will be on the way we do business five years from now.

TAYLOR: This is one of the biggest revolutions I’ve seen in my career, similar to the internet. I think GenAI, with the proper guardrails in place, will improve productivity, help automate complex tasks and make it much easier for people to interact with computers, because you can use real-world language to ask the computer to do things. But GenAI won’t replace thinking, judgment and creativity. It can free up more time and resources for people to focus more on the important work they do and less on mundane tasks. It will make us quicker and more efficient.

GILLAI: I agree that GenAI will drive a massive improvement in productivity because it will take away a lot of manual, repetitive work, in the same way Excel did when it came out. The machine will do a lot of the data work.

GenAI will level up people in the workforce. A lot of people who are very smart are not necessarily great writers, don’t create good presentations and don’t know how to best organize their thoughts. GenAI can help them do that today. Everyone can have an executive AI assistant at their disposal.

I also think our education system must change. We will need to teach people how to think critically and how to gain knowledge and wisdom. They won’t be able to gain knowledge at work in the manner a junior person traditionally has, because AI will be doing that work. It will be similar to the way we teach people math when they never actually have to do the math.

LAUTERBACH: GenAI can provide broader access to the masses in the form of education and creativity tools. It has the opportunity to excite children early on to enter the world of AI to study neuroscience, computational science, computing, engineering and linguistics. This will have a positive impact on the future workforce as we will have more diversity, which will reduce the risk of biases and optimize AI for everyone.


The Directors Roundtable was hosted by Dawn Zier, the former CEO of Nutrisystem and a current board member at Hain Celestial Group, Prestige Consumer Healthcare and Acorns.