I’m not skeptical of the paper or its claims, but the way it’s being presented in the media really distorts it.
I work with a lot of people who are doing very similar things for very similar problems. In the use cases I’ve seen, you have a computationally expensive full-physics model that’s being run all the time, each time with only very minor perturbations to the inputs. If the model can’t run in real time for some application, it makes a lot of sense to run it a whole lot in advance and use some sort of machine learning/statistical method to build something like an approximate lookup table.
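To make that concrete, here’s a minimal sketch of that workflow in Python with scikit-learn. Everything here is illustrative, not the authors’ setup: expensive_model is just a hypothetical stand-in for whatever slow full-physics solver you actually have.

    # Rough sketch of the "precompute offline, approximate online" idea.
    # expensive_model is a hypothetical placeholder for a slow solver.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_model(x):
        # Pretend this takes minutes per call instead of microseconds.
        return np.sin(3 * x[0]) * np.exp(-x[1] ** 2)

    rng = np.random.default_rng(0)

    # Offline: sample the input space and run the expensive model many times.
    X_train = rng.uniform(-1, 1, size=(5000, 2))
    y_train = np.array([expensive_model(x) for x in X_train])

    # Fit a cheap surrogate -- effectively an approximate lookup table.
    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    surrogate.fit(X_train, y_train)

    # Online: query the surrogate instead of rerunning the solver.
    print(surrogate.predict(np.array([[0.2, -0.5]])))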
Somewhat unfortunately, Deep Learning is the method being chosen for this almost every time. I say unfortunately because with Deep Learning it is really hard to tell whether you are making predictions in a region your training data actually covers, compared to something like radial basis functions or, better yet, Gaussian Processes. It’s not the end of the world to use Deep Learning in such cases, especially if you are convinced your training samples give proper coverage of the application, though you are likely to need much more training data. You could argue that Deep Learning might beat distance-based methods if there are discontinuities in the target function. But in general, my experience is that Deep Learning is used without good justification.
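Here’s a toy illustration of the coverage point, again just a sketch with scikit-learn: a GP’s predictive standard deviation blows up once you query outside the region the training data covers, which a plain point-estimate network won’t warn you about.

    # Toy example: GP predictive sigma flags queries outside training coverage.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    # Training inputs only cover [0, 1].
    X_train = rng.uniform(0, 1, size=(200, 1))
    y_train = np.sin(4 * X_train[:, 0])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2))
    gp.fit(X_train, y_train)

    # One query inside the covered region, one far outside it.
    mean, std = gp.predict(np.array([[0.5], [3.0]]), return_std=True)
    for x, m, s in zip([0.5, 3.0], mean, std):
        flag = "ok" if s < 0.1 else "poor coverage, don't trust this"
        print(f"x={x}: prediction {m:+.3f}, sigma {s:.3f} -> {flag}")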
From reading the paper, the authors don’t seem to claim to be doing anything more than that. The media might as well have said that “Standard Normal lookup tables have done what hundreds of years of calculus could not”. Maybe sorta kinda true in a weird way, but mostly highly misleading.