Executives at IQM Software used the most cutting-edge technology to solve an age-old business quandary: how to promote the company without a marketing department or big budget for hiring consultants to create a campaign.
They were especially focused on the dilemma earlier this year when they wanted to publicize their attendance at a trade show and pique interest in their products among eventgoers. The team turned to ChatGPT to solve the problem, and within an hour the artificial intelligence tool generated 12 blurbs that IQM posted on social media. The posts triggered conversations with potential clients, and the project didn't cost anything because the company used the free version of the technology.
"It was insane. I was super, super impressed," says Thibaut Decre, head of strategy at the Rochford, England-based company.
ChatGPT draws on vast data sets and can have humanlike conversations with users that enable it to perform a wide variety of tasks, such as writing computer code, summarizing documents and composing essays. Released last November by San Francisco-based OpenAI, it's "the fastest-growing app ever," taking only two months to reach 100 million users, according to Credit Suisse. The Zurich-based bank said it took TikTok nine months and Spotify 55 months to hit the same milestone.
The growth comes despite the technology's potentially serious and dangerous outcomes. It's only as good as the data it has been given, and the information can be incorrect, biased, racist, proprietary or copyrighted. Does IQM know whether the information ChatGPT produced was plagiarized?
"The short answer is I don't," Decre says, adding that he doesn't think the materials were misappropriated because the copy was very specific to IQM. He also notes that there haven't been any complaints.
Newer versions of the technology that require payment can provide data citations, experts say. Still, the problem with ChatGPT and other generative AI products is that many employers and employees have fallen in love with the technology's capabilities without considering the potential consequences of its use.
Raising Red Flags
Even tech champions are voicing concern. In March, more than 1,000 industry stalwarts, including Elon Musk, CEO of SpaceX, Tesla and Twitter, and Steve Wozniak, co-founder of Apple, called on AI companies to pause development of new versions of the technology for six months. Musk co-founded OpenAI in 2015 and left three years later after clashing with management. He's now building a new company to compete with ChatGPT.
"AI systems with human-competitive intelligence can pose profound risks to society and humanity," said the letter published by The Future of Life Institute, a Cambridge, Mass.-based nonprofit working to steer technology away from extreme risks. AI labs have been "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict or reliably control," the letter said.
"I find this kind of scary," says Peter Cassat, a Washington, D.C.-based partner in law firm Culhane Meadows. "The caution usually comes from the industries that are threatened by the disruption. This [warning] feels different because the caution is actually coming from the innovators themselves."
No AI companies have publicly agreed to the pause, according to a spokesperson for the institute. However, authorities in Italy, citing privacy concerns, banned the technology. Italy is the only Western country to adopt that stance, though a handful of other nations, including Russia, China and Syria, forbid its use.
In April, the U.S. Commerce Department asked the public for comments on potential accountability measures and policies to ensure that AI systems are legal, effective, ethical, safe and otherwise trustworthy. The agency said it would issue a report after examining the responses. And four other federal agencies recently pledged to collaborate closely to prevent discrimination resulting from the use of artificial intelligence and automated decision tools in the workplace.
ChatGPT has been embroiled in some high-profile troubles. A mayor in Australia, Brian Hood, has threatened to sue OpenAI for defamation unless it removes erroneous information in ChatGPT that says he served time in prison for a bribery scandal involving a Reserve Bank of Australia subsidiary, according to published reports. He was the whistleblower in the case, the reports say.
Meanwhile, on three separate occasions, Samsung employees in Korea put confidential company information into ChatGPT while using it to help them do their jobs, according to reports published in April. The employees' actions came soon after Samsung had reversed a ban on using the technology; earlier this month the company reinstated the prohibition, reports said.
The potential for loading proprietary information into the technology, where it could be found by others, is one reason some companies have banned ChatGPT from their workplaces. It could be a wise decision. Earlier this year, a Cyberhaven study that tracked 1.5 million workers over the course of a week found that 3.1 percent of them had pasted confidential information into ChatGPT. That means hundreds of pieces of proprietary data were deposited in the technology, according to the Palo Alto, Calif.-based data security firm.
Employees are using the technology despite objections by their employers. Nearly 20 percent of the 700 employees who say they have used ChatGPT at work have done so against their employers' wishes, and 23 percent of them don't want their employers to know they use the tool, according to a study by San Francisco-based TalentLMS, a learning management system provider.
Yet, amid all the controversy, the tech industry is forging ahead with developing generative AI products. In March, OpenAI launched its most recent version of ChatGPT, and Mountain View, Calif.-based Google introduced Bard, its generative AI product. Microsoft, which owns 49 percent of OpenAI, has added ChatGPT to its search engine, Bing, and its suite of business software. Several other companies are offering similar tools.
Despite the potential pitfalls, ChatGPT's allure for employers and workers is understandable. The TalentLMS study found that 61 percent of employees who used the technology said it improved their time management, while 57 percent said it boosted their productivity. Of course, some of those employees may be less enamored of the technology when they realize it could put them out of a job. Nearly 20 percent of jobs could be impacted by ChatGPT, according to a study by OpenAI and the University of Pennsylvania. Among those most likely to be affected are writers, translators and tax auditors.
Jurgis Plikaitis, chief executive officer of EpicVIN, says ChatGPT will eventually replace some customer service representatives at the Miami-based provider of vehicle histories. Plikaitis says the chatbot can already answer basic questions, and he's working on training it to reply to more complex inquiries.
"It's not going to replace everyone all at once," Plikaitis says. "I didn't think I would see something like this in my lifetime. Its capabilities are growing very fast, and it's becoming smarter and smarter right in front of my eyes."
ChatGPT has already helped the company with its marketing efforts. Plikaitis says he fed the tool the copy on the company's website and asked it to create messaging that would rank the company higher in Google searches. He says visits to the website increased 20 percent.
An outside marketing consultant helped with the project, though Plikaitis says that eventually ChatGPT will eliminate the need for that service. "It's a brave new world," he says.