I often hear from people that AI will make coding skills useless and programmers jobless. I personally like programming, but I have fears about the future.
Should we learn programming in the age of AI when AI is becoming stronger?
How should we adapt or change our perceptions and goals about learning programming in the AI era?
Should we also keep alternative career options open?
Will experienced software developers start focusing on LLVM/assembly? Will AI be used for low-level instructions as well?
Always keep career options open. Because you never know. Think of AI as a force multiplier. That means you must have some force to apply, just as a project manager for a programming team has to understand the nature of the work to be able to guide it.
I am doing scientific programming, and for this, a good background in some field is required. I am good at physics, for example. In science, software is a tool, AI is a force amplifier, and the field in which you do research (e.g., physics) is the foundation.
This year, I spent some time figuring out what AI is already useful for and what it cannot do effectively (yet). This is an important skill nowadays: giving the AI only tasks it has a good chance of solving successfully. It works reasonably well for writing tests and for writing code to create plots, for example. But you still need to be able to proofread and understand the AI's code.
My take on this is that you should definitely learn a few programming languages to understand code and compilers in general. I would go so far as to say that fundamentals have never mattered more than they do today. Not because you will actually be doing much programming yourself in the future (I believe a lot of the actual coding will be done by assistants) but to know how to design, think about and criticize code.
Learn:
discrete math, algorithms, data structures and some calculus.
some basic electrical engineering
a few CPU and GPU architectures
operating systems and compilers (one or two is enough)
C, Julia/Python, and Bash
This I believe we all have to know in addition to our own field.
I am no fan of vibe coding something you couldn’t have done yourself given enough time.
AI can (almost) eliminate the need to type code manually, but it does not eliminate the need to know the tech stack you use. AI amplifies your abilities and extends the range of your expertise. It does not replace human specialists yet.
I strongly disagree! “Given enough time” is ambiguous. With AI assistance, developers can overcome the time-constraint barrier much more easily. Take myself as an example: even though I knew I could develop a numerical package like AcceleratedDCTs.jl given a generous time frame, such as a month or so, I would not have done it in the past. The possibility of failing to deliver the actual package in time was very high. Now, I can first communicate with AI, together draft a realistic implementation plan, and foresee whether I can finish the project. In fact, it took only a weekend to develop a well-shaped AcceleratedDCTs.jl with the help of AI.
It sounds like we’re in agreement. If I understand you correctly you’re saying that you indeed could have done it in which case I think it’s fine.
To further explain my reasoning: In my opinion, if you couldn’t have done it yourself (given all the time in the world if you’d like) you don’t understand what was done and that is a dangerous thing. Would you be comfortable developing control code for a nuclear power-plant this way?
Of course, most of us do not develop critical code (of that magnitude), but I still think the thought experiment, pushed to its extreme, has value and should be considered.
We as scientists, software developers, etc. have a responsibility towards the next generation. If we produce bad code today, someone will suffer the consequences in the future.
In no way am I against vibe coding or AI-assisted coding. I just think we should be vigilant about the way we use it.
My personal take on it: it won’t. AI is pretty good at writing single-use scripts, providing auto-complete or generating documentation drafts, but bad at everything else. AI cannot develop a maintainable package on its own – at least for now, and I do not believe it will anytime soon.
When I use agentic AI in scientific computing, I always end up patching the code it outputs. That’s because the AI takes the dumbest approach possible and ignores the idiomatic way to do things (it always writes nested loops, ignores memory allocations, etc.). This may (and will) change in the future, but the AI will still require supervision. And to do that properly, you will still need to learn how to code.
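To make the “dumbest approach possible” point concrete, here is a hypothetical Rust sketch (not from an actual AI session) of the kind of patch-up I mean: a needlessly allocation-heavy nested-loop version next to the idiomatic one-pass rewrite.

```rust
// Hypothetical illustration: the style AI often emits vs. the idiomatic fix.

// Naive version: copies every row into a fresh Vec just to sum it.
fn row_sums_naive(matrix: &[Vec<f64>]) -> Vec<f64> {
    let mut sums = Vec::new();
    for row in matrix {
        let mut copy = Vec::new(); // needless per-row allocation
        for &x in row {
            copy.push(x);
        }
        let mut s = 0.0;
        for &x in &copy {
            s += x;
        }
        sums.push(s);
    }
    sums
}

// Idiomatic version: one pass over each row, no intermediate allocations.
fn row_sums(matrix: &[Vec<f64>]) -> Vec<f64> {
    matrix.iter().map(|row| row.iter().sum()).collect()
}

fn main() {
    let m = vec![vec![1.0, 2.0], vec![3.0, 4.0]];
    assert_eq!(row_sums_naive(&m), row_sums(&m));
    println!("{:?}", row_sums(&m)); // [3.0, 7.0]
}
```

Both produce the same result; the difference only shows up in allocation counts and cache behavior, which is exactly the kind of thing you need to know how to code to notice.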
That’s just not true; it’s not only good for “single-use scripts”. AI seems to be a force multiplier for great programmers, as the making of the Rue language shows. Maybe not, or not as much, for other programmers; in lesser hands it can accumulate a lot of technical debt.
AI also translated the original Photoshop version 1.0 (it has been made open source), from classic Mac OS, non-portable, written in Pascal and Motorola 68000 assembly, to C#. (I can’t find the link right now; I would be skeptical: maybe the AI just hallucinated Photoshop without looking at the original code?)
See my answer here, and that thread I started too:
Rue is also an experiment in human-AI collaboration. The language is designed by Steve Klabnik and implemented primarily by Claude, an AI assistant.
I see now that he even has some commits in his own name, just fewer.
I don’t want to dismiss your experience. Maybe AI is still better at some kinds of programming other than scientific computing (or you’re doing it wrong, not spec-driven in the right way), and only good for some areas; batch-oriented compilers may be ideal for it. Kimi 2.5 likely changes things with vision capabilities added. It’s amazing what AI has been doing so far while basically blind; I would compare it to blind programmers (who exist and are amazing!). Kimi Code is intriguing, and so is Kilo Code (a fork of a fork); I may start out myself with Claude Code.
I highly recommend reading the full blog post he wrote, and the two subsequent blog posts on the language that the Claude AI wrote:
But 2025 also brought one more change in my life: I went from thinking that AI and LLMs were stupid and bad, to being at least useful. For the purposes of this post, I don’t really want to get into the details, because they’re not relevant, but I’m sure I’ll write more about that elsewhere.
..
What if Claude could write a compiler?
Second time’s the charm
However, last week, I had some spare time… and I decided to start over. I have a lot more skill with Claude than I did half a year ago.
..
Tonight, I’d just like to bask in the fact that I got a baby language from zero to “core basics of a language + spec with two different codegen backends” done in roughly a week. That’s wild to me!
ADR-0023 introduced multi-file compilation with a flat global namespace—all functions, structs, and enums are globally visible across files. This was explicitly a stepping stone:
..
Research Summary
We analyzed module systems from several languages:
| Language | Key Insight |
| --- | --- |
| Zig | Files are structs; lazy analysis skips unreferenced code; simple pub/private |
Complaining about borrow checking in Rust is like complaining that the chocolate ice cream you asked for contains ingredients derived from cocoa beans.
This looks like I was complaining about the borrow checker, but I wasn’t. I wasn’t even complaining about the module system, just quoting from a design doc. I believe the borrow checker is the main innovation in Rust (at least I didn’t know of it previously, if it was taken from another language), but note there are alternatives (also, if I recall, in Vale; Pony is an interesting language too):
Rue aims to provide memory safety without Rust’s borrow checker complexity. The key insight from Val/Hylo is that mutable value semantics can provide similar safety guarantees with a simpler mental model.
Rue uses inout parameters (like Swift) instead, explained in that doc.
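For readers who haven’t run into it, here is a minimal Rust sketch of the exclusivity rule the borrow checker enforces, which is the “complexity” the Rue design doc is trading away; the `inout` approach aims at similar no-aliased-mutation guarantees through value semantics instead (only the Rust side is shown here).

```rust
// The borrow checker forbids mutating a value while a shared
// reference into it is still alive.
fn push_twice(v: &mut Vec<i32>, x: i32) {
    v.push(x);
    v.push(x);
}

fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // shared borrow into `v`
    // v.push(4);      // error[E0502]: cannot borrow `v` as mutable
    //                 // because it is also borrowed as immutable
    println!("{first}");
    push_twice(&mut v, 4); // fine: the shared borrow has ended
    assert_eq!(v, [1, 2, 3, 4, 4]);
}
```

The rejected line is what prevents iterator invalidation and data races at compile time; the cost is exactly the mental model the Rue quote above is talking about.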
I think Rue might actually be a very good system programming language, better than Rust:
Rue currently has no undefined behavior. All operations in Rue have defined semantics: they either complete successfully, fail to compile, or cause a runtime panic.
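To get a feel for what “defined semantics” buys you, here is a small Rust example (Rust, not Rue; it just illustrates the same principle for integer overflow, which is undefined behavior for signed types in C): every outcome is an explicit, specified result.

```rust
fn main() {
    let a: u8 = 250;

    // checked_add makes overflow an explicit, defined outcome: None.
    assert_eq!(a.checked_add(10), None);
    assert_eq!(a.checked_add(5), Some(255));

    // wrapping_add defines overflow as modular arithmetic: 260 mod 256 = 4.
    assert_eq!(a.wrapping_add(10), 4);

    // Plain `a + 10` would panic in a debug build rather than
    // silently invoking undefined behavior as in C.
}
```

That is the same menu the Rue quote describes: succeed, fail to compile, or panic, with nothing left unspecified.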
It might seem off-topic here, in a thread I didn’t start in Offtopic, to talk about programming languages other than Julia, but I believe AI frees you from being tied to any one language you’re familiar with, so it seems relevant.
I tasked 16 agents with writing a Rust-based C compiler, from scratch, capable of compiling the Linux kernel. Over nearly 2,000 Claude Code sessions and $20,000 in API costs, the agent team produced a 100,000-line compiler that can build Linux 6.9 on x86, ARM, and RISC-V.
The generated code is not very efficient. Even with all optimizations enabled, it outputs less efficient code than GCC with all optimizations disabled.
So it’s more of a stress test of Claude than a viable product.