Valuing the Process vs. the Product in Research
On reasonable resistance to using GenAI in research
My best friend in high school was really into fixing up old muscle cars. One weekend at a car show, he and his dad came across a perfectly restored ’67 Chevelle Super Sport, and his dad asked him if he wanted to buy it. My friend declined, replying that doing so would defeat the point: the value of the car comes through his experience working on it, understanding it, and fixing it. It was not just about owning a badass car.
I have frequently thought about his reasoning over the decades since, and increasingly over the past few years. It seems to shed some light on how researchers think about different aspects of their work: some value the process of research, whereas others value the final product.
This post is, unfortunately, mostly about using Generative AI (specifically large language models, or LLMs) in research, so two disclaimers are in order. First, I am not arguing for or against using LLMs in research; rather, I am simply describing what I observe. Second, I don’t actually read a lot of different people’s commentaries on LLMs, so my comments may be entirely unoriginal.
Old Problems, New Possibilities
We were recently discussing LLMs and research at my department’s annual five-hour meeting [1]. We joked about how LLMs can be used to generate research questions, design studies, create synthetic people to participate in those studies [2], analyze the data, interpret the results, write up the final report, and serve as the peer reviewers. Some people seem to think this is all a great idea—get a publication with a fraction of the work!—and it is pure product-oriented thinking.
It is easy to blame this way of thinking on modern LLMs, but it has been around for a long time. Paper mills existed before LLMs. Salami-slicing and the minimal publishable unit existed before LLMs. The goal of getting articles published for the sake of having published articles predates LLMs. The literature in psychology (and many other disciplines) is filled with articles produced in this spirit. In their article on AI surrogates, or using LLMs to create participants to complete studies, Crockett and Messeri (2025) trace how the practice extends back to using MTurk to rapidly gather samples, and, I would add, to relying on college students before that. Fast? Yep. Cheap? Sure. Good? No. This is nothing new; we just have hyper-powered tools available to us now.
Your Product is My Process
What makes all of this tricky is that one person’s indispensable process is another’s product. Additionally, “process vs. product” does not describe two different types of people; rather, it is task-dependent. These are some of the many reasons why it is so difficult to have productive conversations about LLMs and the research process. People’s perspectives are strongly connected to how they conceptualize the utility of each aspect of the research process and how it facilitates deeper thinking.
Consider:
A lot of my past work involved applying structured coding systems to hundreds or thousands of stories people told about their personal pasts. This is onerous, tedious work, so people frequently comment that LLMs must be an obvious boon to my research workflow, saving me massive amounts of time. This is the same thing people said to me in the mid-2000s, when the Linguistic Inquiry and Word Count (LIWC) program was popular: “Why do you bother hand-coding those narratives when you could just LIWC ‘em?” The reason then, as now, is that I learn a great deal through the process of hand-coding narrative data [3, 4].
Believe it or not, actually reading people’s stories provides one with some deep familiarity with the contexts, contents, and humanity within the data, or more accurately, the people we are actually trying to understand. That is why we elicit the narratives in the first place. This is a major reason why many qualitative researchers are horrified by the idea of using LLMs in their work (see Jowsey et al., 2025; but see Ibrahim et al., 2026, for a counter perspective). In contrast, others see narrative data as just another method for extracting information about people to put into statistical models, with little interest in the stories themselves (e.g., Wright et al., 2026). My process is their product.
On the other hand, transcribing interviews? There is no value in that process for me, and I would be happy to make use of any technology that relieves the pain of engaging in that horrendous task. Other researchers, however, might view transcription as an opportunity to deeply connect with the context of the interview as it played out, not just how it is represented in text, and thus would rebuff my dismissal of its value. My product is their process.
Using LLMs to generate analytic code? Most people just want the damn code to work, and don’t care how they got there. A small minority of us seem to actually enjoy wrestling with the code to figure out how to make it work, motivated by that small, sad spark of joy that awaits us when it does. My three hours with no progress whatsoever is their 30 minutes and an afternoon nap.
Other research tasks involve none of this tension. Building a reference section containing the articles cited in a paper is a great example of where all I want is the final product—a properly formatted references section—and for which the process of construction by hand has zero value. This is why Zotero is my most steadfast source of joy. (If you are not using Zotero, stop what you are doing, download it for free here, and watch this intro from Dan Quintana.) I’m sure there are some misanthropes out there who actually believe that they get something out of assembling references by hand, but their numbers are certainly small.
And on and on. There are many examples of this tension between process and product. Some maintain that “writing is thinking” and value the process and struggle of writing, whereas others are more in the “writing is torture” camp and embrace a world in which they do not have to stare at the mockingly blank white screen before them. Some love reading academic articles, finding utility in the random asides or minor analyses they discover, whereas others can’t read an article without falling asleep and simply want to know what the article was generally about.
Although my “process vs. product” framing is obviously an oversimplification, I find it to be a generally useful framework for understanding the source of tension found in a variety of academic settings.
For example, in teaching, especially with undergraduate students, most instructors value the process of learning, believing that reading the articles and writing the papers helps students develop a deeper understanding of the material. I think most students probably believe this too, but they are also under immense personal and academic pressures, and at the end of the day they are most focused on passing the class and moving on. After all, they are usually graded on the final product, and not on the process that got them there. Thus, there is a mismatch in values and priorities that underlies much of the current context of higher education.
Of course, the problem is not always about values per se; sometimes it is about finding support for tasks that we see as difficult or not worth our time. Professors, who write emails for a living, are aghast that students would use an LLM to compose a simple email, but many students experience great anxiety over this seemingly simple task [5].
I am, on balance, an unabashed supporter of the research process, and this is because I appreciate how it both deepens and broadens my thinking in a way that having the end result served up on a silver platter simply would not. I also worry that others who similarly value the process will be tempted by the shortcuts readily available to them, and that the end result will be lower quality work. This is the main reason I largely don’t use LLMs in my work. It is not because I am against technological development or think that they are useless. I know what their utility is, but by and large, I see them as more of a hindrance than a help for the work that I do and love.
No generative AI models were used during the precious process of preparing this post. I like em-dashes—LLMs learned it from people like me. Thanks to Kate McLean for helpful feedback on an earlier draft.
[1] They call this a “retreat,” but I refuse to do so.
[2] Yes, people do this; see Crockett and Messeri (2025). And here is one particularly silly example.
[3] Also, they seem to capture something different from what we get when we hand-code.
[4] Double also, I have used this and similar approaches when my research question warranted it.
[5] I honestly don’t blame them, given that many professors have strong expectations about proper salutations, punctuation, subject lines, etc.


This is a very useful post, in my opinion! I enjoyed the process of reading it, and the product of that will be me referencing it frequently in conversations and having it inform my thinking.
One reason I really like it is it cites me. With that out of the way, I can say I also really like this pithy semi-alliterated heuristic. Probably because I already use it in my own work and mentoring of trainees. I mean the heuristic, not the catchy alliterated part. I’ll use that now too.
Just yesterday a student was lamenting to me that they spent hours understanding the intricacies of a single variable, recoding it, only to find it mattered for just a handful of cases out of thousands. They were frustrated at the wasted time to get the same outcome. I validated their frustration—we’ve all been there—and then proceeded to explain that the doing of this taught them a great deal that would serve them for years. In other words, the process was valuable even if it didn’t matter for the product.
It was a long-winded explanation. This distinction will help me improve the process of explaining this moving forward so I have a better mentoring product. Thanks!
I am glad this post exists - was refreshing to read such a thoughtful take on llm-use in academic research