(I am not sure whether questions like these are allowed here)
Hi everyone.
What is our opinion on the future of scientific computing careers? Given that AI has become so good at writing code, finding bugs, and optimizing performance, and it is likely to get better, what should a STEM student or an early-career computing researcher focus on to stay relevant?
Is learning a programming language to the deepest level (Julia, C++, Python) still a sure way to a lucrative career? Or is that somewhat obsolete now that one can use AI output to learn things on the fly, via prompts like “What does this syntax mean? Why is it better to do it like this rather than…”
Should one focus more on software design principles or systems architecture?
Should one focus more on the science, e.g., being able to read and understand research papers on certain topics (and do research oneself)?
Good questions, and I don’t have an answer to them. Just my personal experience and comments.
At the moment, AI is a great tool for developing software, but in my opinion it’s just that: a tool.
And as with any other craft, new tools may greatly change parts of the process, may make some steps obsolete, and may require a new set of skills to use them effectively. But learning how to use the tool doesn’t replace learning the craft itself, if that analogy makes sense…
Learning the fundamentals of software and programming (regardless of the language) is still nontrivial, and in my experience you need a good enough understanding of those fundamentals to judge whether the approach an AI suggests is really going in the right direction – not just in programming, but with all other prompts as well. My guess (and indeed hope) is that programmers who have this kind of understanding will stay in demand for the foreseeable future.
For me, I think having a strong foundation in linear algebra, calculus, data structures & algorithms, and probability & stats still seems like a fairly future-proof skillset. These things show up throughout the fields of AI, data science, quantum computing, etc. What I’ve been focusing on (as a mid-career data scientist) is making sure I understand how these new models and tools work. I think it gets risky for those who are happy to treat AI as a black box and not have an understanding of how it works, or (even worse) not understand the fundamentals of the problems they are using it to solve.
I have dedicated my life and entire career to neural networks and machine learning and I agree with this wholeheartedly. At the same time I see more and more usage of LLM tools as a crutch. I hope I’m wrong but at this rate I believe we will see a real reduction in innovation and understanding of the computational and physical sciences.
We will have a product explosion for sure with applications being built faster than I can sneeze. But the understanding of how these applications and their underlying algorithms work I think will fade with every new iteration of the agentic coding paradigm.
I for one will never stop coding myself. Nor will I stop doing physics with pen and paper. This is not due to some belief that I will always be better at it than the LLMs. I will simply do it because I love it.
So learn all you can about how agentic coding actually works. Learn context engineering, tool calling and experiment like crazy. On top of this do what you love.
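For anyone unfamiliar with the term, “tool calling” just means the model emits a structured request instead of prose, your harness executes it, and the result is fed back into the conversation. A minimal Python sketch of that loop (all names here – `run_tests`, the request shape – are hypothetical; real frameworks differ in the details, but the pattern is the same):

```python
import json

def run_tests(path):
    """Hypothetical tool: pretend to run a test suite and report results."""
    return {"path": path, "passed": 12, "failed": 0}

# Registry mapping tool names to functions the dispatcher may execute.
TOOLS = {"run_tests": run_tests}

def dispatch(tool_call):
    """Execute one tool call of the form {'name': ..., 'arguments': {...}}."""
    fn = TOOLS[tool_call["name"]]
    result = fn(**tool_call["arguments"])
    # The result is serialized and handed back to the model as its next input.
    return json.dumps(result)

# A model turn in an agentic loop would produce something like this:
call = {"name": "run_tests", "arguments": {"path": "tests/"}}
print(dispatch(call))
```

Experimenting with a toy loop like this is a quick way to build intuition for what the agentic frameworks are actually doing under the hood.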
I don’t think people who are seriously assessed on their productivity will be able to get away with not using agentic coding tools, particularly for early proof of concept work. It is also false to say that these tools are incapable of writing advanced mathematical code.
What is still required (in my field) is a strong understanding of and gut instinct for statistical modeling, plus the ability to see the forest for the trees: to design a research project/program and guide the agent through it while being able to critique/test/validate the work.
Because at the end of the day no one cares how working and validated code was written.
Personally, I write very little code nowadays. I’ve taken up drawing as my “human” creative outlet. Code is utilitarian at the end of the day.
Yes, we will need people who can understand the mathematical nuts and bolts of all these tools. But frankly, the number of people for whom that is legitimately true is a vanishingly small percentage of the total number of applied users who are just trying to build something useful.