Edit: to be clear, I did not write the blog, just posting it here
I wrote my first line of code in 1983. I was seven years old, typing BASIC into a machine that had less processing power than the chip in your washing machine. I understood that machine completely. Every byte of RAM had a purpose I could trace. Every pixel on screen was there because I’d put it there. The path from intention to result was direct, visible, and mine.
Forty-two years later, I’m sitting in front of hardware that would have seemed like science fiction to that kid, and I’m trying to figure out what “building things” even means anymore.
This isn’t a rant about AI. It’s not a “back in my day” piece. It’s something I’ve been circling for months, and I think a lot of experienced developers are circling it too, even if they haven’t said it out loud yet.
I agree with your piece, very nice. I think this is how expert navigators must have felt when technology completely transformed long-distance navigation. A skilled navigator would have been a master of the sextant and spherical trigonometry, with an intimate understanding of celestial bodies, and so on. I imagine they felt very in tune with the Earth, its curvature, and even its place in the cosmos as they engaged in the high-risk business of navigating long distances across the globe. Then technology came along, and they went from actually navigating to just watching machines do it. I assume a great many of them lamented these innovations and experienced the kind of hollow feeling you describe in your piece.
I share this feeling with respect to AI and its impact on programming. The only emotional upside for me, I guess, is that I pretty much never write SQL anymore. I absolutely loathe writing SQL, and I never found in it the kind of magic I found in other programming languages, so I am actually very happy to let the machines take over in that space!
The difference is that unlike the ancient languages, Fortran did not die out; it evolved, and continues to do so.
Randall’s essay resonated in so many ways with my own journey from 1984 through F77, BASIC, C, F90, Mathematica & Julia on MS-DOS, Windows & Linux as I’m sure it did with many others.
I do agree that you learn by doing hard stuff. There are plenty of hard problems in plenty of domains to be solved. If AI can solve them, then perhaps they weren’t that hard. Or worthy.
But it can certainly assist with some of the technical details.
The way I view it, AI lets you focus on the actually substantial parts of a problem rather than spending a ton of time working through the same boilerplate you have written a million times over. It still makes sense to roll certain critical-path algorithms by hand, especially if they are going to be called billions or trillions of times, but it's pointless to spend that level of effort on one-time dataframe manipulations or a CRUD app. Almost all of the things I use coding agents for now are things I used to automate away through libraries and abstractions because I was so tired of dealing with them.

Now I spend my time on problems where there isn't a fully conceptualized solution yet: big, complicated system architectures, or research-type code for applied machine learning problems where there isn't a well-known solution. I often find myself combining highly optimized handwritten code that forms the critical path of a system with AI-generated code built around it. Low-level systems code is also still very difficult to rely on AI for; if even the slightest thing goes wrong, you can end up with a deadlock or a security vulnerability. For that reason I didn't rely on LLMs when implementing gRPCClient.jl.
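To illustrate the kind of subtlety the comment above is pointing at (this is a generic sketch, not code from gRPCClient.jl), consider the classic lock-ordering hazard: if one thread takes lock A then B while another takes B then A, both can block forever. The fix, acquiring locks in one fixed global order, is exactly the sort of invariant that is easy to break and hard to spot in a diff:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(name):
    # Both threads acquire the locks in the same fixed order (a, then b),
    # so no circular wait can form. If this thread instead took lock_b
    # first while the other took lock_a first, the two could deadlock,
    # and nothing about either function in isolation would look wrong.
    with lock_a:
        with lock_b:
            results.append(name)

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # → ['t1', 't2']
```

The danger is that the broken variant usually passes tests too, since the deadlock only manifests under a particular interleaving, which is why reviewing generated concurrent code line by line still matters.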
I’d say that compared to 10 years ago I’m enjoying coding more, not less. The only things that have gotten easier are the parts that were trivial to begin with.
At the same time, I wonder if AI automating boilerplate will lead us to forget how to write good libraries and scalable/maintainable software that reuses code instead of duplicating it.
I don’t think the need for that will go away, even if AI does make us worse at it. The foundation you build systems on is still important, and in my experience, the more things you have AI take care of at once, the sloppier a job it does. There is also something satisfying about cranking out a handwritten, well-tuned, tested, and documented library that solves a problem that wasn’t immediately obvious. AI can still be used as a tool for this, for writing tests, documentation, etc.
The distinction here is a library that helps with boilerplate vs a library that solves a problem that people don’t even realize exists, and then they see it and are like “ohhhh I need that!”
Beyond this, I think the actual reason I enjoy development more today than 10 years ago comes down to the specializing effect these tools have. I’ve worked at small labs and companies most of my career and always had to wear a ton of hats: set up the infrastructure, write the data pipelines, train the models, write the production code, manage production deployment, handle the random little tasks and one-off applications there is a need for internally. AI has made it possible to spend more time on the problems that actually differentiate the company and provide a competitive advantage.
That resonates with the idea of AI producing technical debt at increasing rates and failing at most tasks:
I don’t remember the other study, about how easy it is to change AI-written code compared with code written without it, but when the metrics change, AI doesn’t seem as hyper-fabulous and disruptive as many want the majority to believe. This piece from the Center for Internet and Society at Stanford Law School offers another perspective:
In my limited usage of AI for coding, I ask for specific tasks, including testing the functionality, and I make small commits with that functionality, while avoiding AI in general. So far I feel that I have a good understanding of the code I’m writing, and I have localized/minimized the parts I don’t understand pretty well. But I worry about widespread AI usage in learning contexts, and that’s why I’m also worried about AI being enabled by default in Pluto; I hope to raise that in its source code repository and discuss it with @fonsp and the Pluto developer community, since defaults, by definition, make decisions implicit and on behalf of others.
Hopefully society will arrive at a more critical adoption of AI in general, and in coding and education in particular, realizing the proper transformations where they are pertinent, while not treating it as some kind of silver bullet for everything.
I have mixed experiences. On one side, I’m using Claude to improve code I know very well, and it helps a lot by providing suggestions, making changes I understand but would take me much more time, and finding bugs. In these cases I feel that it is a very handy tool, like a calculator in the era of the abacus. (That is a complete change from ChatGPT 4.1, which was useless.)
On the other side, I have used it to refactor some web pages. I really don’t care about the “code” behind them, and I have a poor understanding of CSS/JavaScript, etc. In these cases I’m only interested in the result, and it has been really great.
I suspect that my “advanced” first kind of tasks will at some point be handled like the second kind. I don’t see that as particularly bad. If I can move one layer up to coordinating the development, and still achieve what I want to achieve, scientifically in this case (software being an intermediate step), I’ll probably jump all the way in.