Research provides insights into how employees are using AI and their concerns about the technology

By: Michele Hujber

If you’re leading a knowledge work[1] organization and considering introducing generative artificial intelligence into your workflow, it would likely be helpful to know how its use may affect the day-to-day aspects of your team’s work and what risks it may pose.

A recent study by researchers at the University of Chicago and Argonne National Laboratory provides a resource for understanding organizational adoption of generative AI. They surveyed science and operations employees at the lab to learn about their perceptions of, and concerns about, this potentially transformative, possibly disruptive technology. With responses split between science and operations workers, the researchers then conducted follow-up interviews with one-third of the group.

Their findings are relevant to knowledge-based organizations beyond national labs: the divide between knowledge specialists and operations workers exists in many organizations, scientific or not. And, like banks and government institutions, national labs regularly handle sensitive data, which heightens both concerns and real risks around privacy and security.

When surveyed employees were interacting with large language models (LLMs[2]), they were using them most often to write code. Reported frequency of use, by task:

Writing code: 9.1% always, 13.6% often, 15.2% sometimes.
Writing emails: 3.0% always, 9.1% often, 22.7% sometimes.
Grant writing: 1.5% always, 1.5% often, 1.5% sometimes.
Science communication for the public: no one always, 13.6% often, 3.0% sometimes.

The writing for which respondents relied on LLMs was described in the paper as compositions that follow a standardized format, or structured writing. According to the authors, "participants described that they already had the content needed for the writing, but they used the LLMs to craft the appropriate tone and format.” An IT employee asked the LLM to "make this more formal, make this less formal, make it friendlier." An operations manager asked the LLM to “make this sound better or make this more professional or make this more succinct.” A scientist manager used the technology to make emails “more formal and less scientific.” Some reported using the technology to "tone down rage" they had expressed in their original drafts.
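The pattern the interviewees describe is simple: hand the model an existing draft along with a one-line tone instruction. Below is a minimal sketch of that pattern in Python, assuming the OpenAI client library and an API key in the environment; the model name, prompts, and draft text are illustrative and not drawn from the study.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

    draft = "Per my last email, the report is STILL missing. Send it today."

    # The one-line tone instructions quoted above map directly onto the user prompt.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not from the study
        messages=[
            {"role": "system",
             "content": "You rewrite drafts. Keep the content; change only tone and format."},
            {"role": "user",
             "content": f"Make this more professional and tone down the rage:\n\n{draft}"},
        ],
    )
    print(response.choices[0].message.content)

Any of the instructions quoted above ("make this more formal," "make this more succinct") would slot into the user prompt the same way.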

The researchers found that scientists also used LLMs to help write research paper introductions, which are also examples of structured writing. The authors emphasize that “these were not cases of the LLM generating the research content but reframing existing content to fit the typical structure of a research paper introduction.” 

A Nature survey of 5,000 academics found that acceptance of AI for drafting different sections of a paper runs along the same lines as what the University of Chicago-Argonne researchers found. Twenty-three percent of Nature respondents said using AI was appropriate for drafting abstracts, and 14% said it was appropriate for drafting introductions. Fewer said AI was appropriate for drafting the methods, results, or discussion/conclusion sections (11%, 8%, and 10%, respectively).

The University of Chicago-Argonne researchers identified two types of generative AI use in the lab: 1) systems that work on tasks with the user and respond conversationally; and 2) systems serving as workflow agents, in which an AI system performs complex tasks mostly on its own to support a user's work.

The writing examples that open this article illustrate the first type of use; the researchers also found examples of employees using generative AI as a workflow agent.

One operations employee, a safety expert, was using automated workflows to perform instrument checks. He told the researchers that he would not have been able to automate the work without AI, which wrote a Python script to control the unit’s software programs. “I would never have been able to [write the code] without significant time investment, and the fact that I could produce a working app in a couple of days was impressive to me,” this employee said. The authors noted that “[w]hile [the employee’s] use cases are specific to his group, the broader ideas he described apply to numerous [o]perations roles such as automating old software tools, writing an automation software program without coding skills, and simplifying database searches.”
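The paper does not reproduce the safety expert's script, but the use case described, driving a unit's existing software and recording the results, often reduces to a short wrapper around a command-line tool. The sketch below is a hypothetical illustration of that idea: the legacy_checker.exe program, its flags, and the instrument names are invented placeholders, not details from the study.

    import csv
    import subprocess
    from datetime import datetime, timezone

    # Hypothetical placeholders: the study does not name the actual tools.
    LEGACY_TOOL = "legacy_checker.exe"
    INSTRUMENTS = ["flow-meter-01", "flow-meter-02", "pressure-gauge-07"]

    def run_check(instrument_id: str) -> str:
        """Run the legacy tool's self-test for one instrument; return its last output line."""
        result = subprocess.run(
            [LEGACY_TOOL, "--instrument", instrument_id, "--self-test"],
            capture_output=True, text=True, timeout=120,
        )
        lines = result.stdout.strip().splitlines()
        return lines[-1] if lines else "NO OUTPUT"

    def main() -> None:
        # Append one timestamped row per instrument so every check is auditable.
        with open("instrument_checks.csv", "a", newline="") as f:
            writer = csv.writer(f)
            for instrument in INSTRUMENTS:
                writer.writerow([datetime.now(timezone.utc).isoformat(),
                                 instrument, run_check(instrument)])

    if __name__ == "__main__":
        main()

A script like this is the kind of glue code the interviewee credited the LLM with writing for him: short, specific to local tools, and valuable mainly because it replaces a repetitive manual procedure.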

The researchers unearthed many concerns about generative AI. At the top of the list is AI’s tendency to hallucinate false information. Privacy and security, as well as copyright and plagiarism, were also significant concerns, and some feared that AI could replace certain jobs. The authors expanded on some of these risks and illustrated how they surface uniquely in both a science and an organizational context.

Some of the key areas of concern about the use of generative AI found in the Argonne and other studies are summarized below.

Reliability. The researchers found that “the most significant barrier to adoption in a science organization is generative AI’s lack of reliability and tendency to hallucinate, as well as the fact that it does not [consistently] cite [reliable] source material.”

Overreliance. Interviewees from both the scientific and operational sides told the researchers that they were concerned about other users who do not understand that LLMs aren’t always correct and may treat incorrect information the same as valid information. Several interviewees mentioned the risk of misuse during the hiring process. One scientist voiced the fear that “very smart researchers” might try to use LLMs as search engines to verify information.

Privacy and security for unpublished, classified, and proprietary data. All of the interview participants and 42% of survey respondents had some privacy and security concerns about using LLMs in their work. For many, though, those concerns were eased when using Argo, Argonne’s private LLM. However, the authors note that some publicly available LLMs are more advanced than Argo or the other models available to employees and that “[w]hile participants who were using public models said they were being careful, from an organizational perspective, this could be a security threat.”

Academic Publishing in the Era of LLMs. Many scientists who were interviewed, along with 20% of survey respondents, expressed concerns about researchers using AI to write academic papers. (SSTI learned from a paper published in PLOS Biology that paper mills, defined in an article in The Conversation as unscrupulous businesses that produce fraudulent research papers, are “using AI-assisted workflows to introduce [hundreds of] low-quality manuscripts to the scientific literature.”) All survey respondents and interviewees at Argonne asserted that generating fake research was unacceptable, but most agreed that some paper editing with an LLM was more defensible.

Impact of Generative AI on Jobs. Argonne scientists mostly thought that generative AI would help them in their jobs, not replace their positions. Similarly, participants in technical or specialized roles in science and operations viewed generative AI as something that would speed up their work, not eliminate their jobs. Participants in both groups expressed concern that AI would take over less technical or specialized jobs. Hiring managers thought AI would change the skills they looked for in workers but not the total number of workers they hired.

[1] The authors define knowledge work as a classification of labor in which producing information-driven products and services is the key economic output. They note that professions considered knowledge work include data science, law, marketing, and finance.

[2] LLMs represent just one category of generative AI, one focused specifically on text generation, according to information from the Schwartz Reisman Institute for Technology and Society at the University of Toronto.