I am, chronically, a laggard when it comes to jumping on trends. When I first heard Lady Gaga exclaim “Just Dance!” I thought it was just ok. I have still not watched Game of Thrones, mainly because I didn’t feel like engaging with the discourse about dragons. I am very aware of my stubbornness and the miscalculations that follow from it.
This is why I decided maybe I wasn’t going to be ‘left behind’ with AI.
The narrative hype around AI is real. It’s going to change everything. The old world is dead! The new world is struggling to be born! Who am I to argue with my esteemed, tenured peers?
So I fired up my Terminal, paid a fee, and started exploring Claude.
Again, I am not an excitable person. But this was neat! Code happened faster. I was able to make figures to my specifications more easily. It excited me because it had the potential to let me spend more time with my ideas, and the ideas of others, rather than prioritizing being a code monkey (which is fine, but not my bag!). Think of all of the things that I could read instead of spending hours trying to figure out ggplot! I could finally get some of my coauthors to work in R! The possibilities, at least the ones I could conjure up, are genuinely exciting.
But I also had to tell it what to do at every step of the process. I had to double and triple check its work. I had to have some basic knowledge of R to even make requests. I was still going to be the authority on my work, but now I had to find the ‘right way’ to ask Claude to do something or else it would tantrum and lie to me.
Claude was reminding me of something. Something where I would be responsible but under-credited when the results are presented.
Oh my god, was I Claude’s nanny?!
My male peers might be put off by my calling what it makes of us a ‘nanny.’ But the point of a nanny is the child, who gets the lion’s share of the credit. Misbehavior is blamed on the nanny, or the parents, never the little boy. And work with AI seems to be prioritizing what a special little boy[1] Claude is: look at him try! Sometimes he makes things up, but how impressive! We will weigh the correct things disproportionately to the things little Claude gets wrong! No, we have to let this little guy keep trying, he might change the world! No, no, that vase is not valuable. Someone else can spend the time cleaning it up. You just didn’t ask Claude the ‘right way!’ Look what Claude can do!
Claude is, by nature, an enthusiasm machine. You need to be enthusiastic about Claude as a venture for it to work. Not just for you to keep paying for it, but so that you rely on heuristics rather than thinking about the broader implications of using this product. If Claude made you anxious, you might look around and reconsider. You’re plopped into a sea of cozy, enthusiastic, pleasant answers, because anything less might lead you to think more than one step ahead.
AI will let you have material gains. It will let you produce more. Social sciences will change, humans will be less involved, yet somehow knowledge will still be produced – even when the creation of knowledge rests on humans knowing things, not just information being available.
One of the most frustrating aspects of the boosterism is seeing people give up their own agency, their training, to dwell in the enthusiasm machine. Our interaction with any LLM is optional, yet many of these conversations treat AI as inevitable. Human choices over time create the conditions in which we live, work, maybe even occasionally thrive. Social sciences are, at their core, about studying these choices. What are the actual policy bundles available, how will we make decisions about our lives, and how are we materially or psychologically limited when we make choices? How do the choices at T1 impact the choices at T2?
The enthusiasm pushed by these tools limits our ability to think about future implications, because when we are cloaked in good vibes, we keep on keeping on. We don’t consider the next round, the next interaction, or our own ability to shape the future.
And this is why I think reports of the demise of the social sciences are greatly exaggerated. The future is going to belong to the people with domain knowledge, skills, and determination to look ahead on the decision tree. If we crank out more slop, to get a machine to cite the slop, to get a machine to crank out more slop…that’s not science. It’s definitely not building knowledge or expertise. The use of all of these LLMs rests on the fact that people are still expected to be their guardrails – someone will determine quality. Someone will catch the mistakes. Someone will clean up after lil’ Claude.
We will all have a choice in the matter. It undermines our expertise to throw up our hands because a fancy imputation machine is able to streamline many aspects of our work. But that doesn’t mean we ought to use it for everything. It doesn’t mean we ought to devalue thinking, writing, or reading. If there’s any optimism to be had about AI, about LLMs, about little agents running around, it’s that at the end of the day the machine is controlled by humans. We get to decide what future we bring about, especially with regard to what we will value and what systems we create. Every time we devalue the work of other people, that is our choice. We should not give that agency over to machines.
[1] The gender klaxons are blaring in these discussions about Claude and other LLMs. The race klaxons are also blaring, but I am going to leave this larger discussion to someone else. Or to myself for another day.
