Five Questions Concerning the Literature Machine and Computer Programming
The following is a loose collection of thoughts and questions that deal broadly with how the advent of LLMs and their continuous improvement are poised to influence and shape our industry, or are already doing so. A few of these have at least partial answers, while for most I have not yet arrived at a satisfying conclusion. As such, I expect to update this document continuously over the coming months. I mostly limit myself to aspects of the ongoing discourse that I feel are currently underdiscussed.
-
Will software engineering simply be less fun?
Until now, computer programming entailed, besides the obviously necessary knowledge of computer science, minute knowledge of lots of adjacent technologies. Right now, that knowledge seems set to lose much of its importance over the coming years. As someone who does not strictly take pride in this knowledge and the ability to use these tools productively, but rather derives joy from it, I find this shift troubling.
Naur already argued in the 80s that the essence of programming lies in theory building, and that the exact methodology applied - the how of our work - does not really matter that much and is a rather adjunct part.
Peter Naur, Programming as Theory Building
It seems we are faced anew with the question of what exactly computer programming is, and have yet to find an answer, for us as individuals and as an industry as a whole.
-
Will the disappearance of manual labor, or at least a significant reduction in it, mean the loss of hard-won unembodied knowledge?
See Objective Knowledge and the Three Worlds Ontology for a short introduction to what Popper describes as unembodied objects. Creative and intellectual work involves inspiration, which does not conform to strict rules and cannot be predicted. Getting rid of the majority of the practices that constitute our work might mean losing something valuable that has so far existed only in the intangible.
-
What is the worth of moving slowly?
For a long time the conventional wisdom was that in order to improve as an engineer, one has to work on implementations oneself to really understand the theory behind them. I found this to be true for myself, but now development speed seems to be the prime factor occupying the minds of even the most senior members of our trade. I question whether real learning is truly possible if all that is left for us as humans is delegation, while the actual implementation is left to the machine. While we might churn out code faster than ever, I am not sure our brains can keep up with that pace.
On the other hand, moving fast and breaking things has been the ethos of Silicon Valley for as long as most of its members can remember, and LLMs accelerate this dynamic - which above all rewards the quick, not the conscientious - even further.
-
How can we make sure that our field advances in a way that allows for the continued existence of its labor force?
While this question could be read as being about the obvious economic implications of the further automation that this new technology seems destined to bring upon us, I want to focus on something different.
Adjacent to the previous point, LLMs have tremendous potential to facilitate learning like nothing before. They are correctly described as stochastic and almost certainly lacking a real understanding of the underlying concepts of the problems they work on. But as models evolve almost faster than we can keep track of, it is becoming evident that, especially on problems of limited depth, their rate of failure is surprisingly low. Consequently, private tutors for everyone - historically a luxury reserved for the children of society's most prosperous - are already a reality today. Used responsibly, they undoubtedly have the ability to level the playing field of education even more.
But learning requires attention and time. Sporadically reviewing the output of that probabilistic slot machine, or, going even further, removing oneself from the loop entirely, erodes the attention necessary to reflect on unsolved problems and reach conclusions on our own. Refreshing social media feeds has a similar effect: we no longer truly engage with content intellectually, and instead keep it at a superficial distance. I fear that we as humans lack the restraint not to fall victim to exactly this dynamic in the context of intellectual work.
The fact that this concern does not seem to be widely shared among the more experienced members of our industry is troubling, to say the least.
-
Will this contribute to the existing social alienation felt by many and lead to decreased collaboration between us?
Superficially broadening the knowledge of individuals might render collaboration between them unnecessary. Paraphrasing from private correspondence: we may be trading the messy, generative friction of human collaboration for the frictionless isolation of individual sufficiency. Here, again, it is questionable what the lost long-term value of this convenience might be.