How to measure energy consumption of computations?

Developing models and running simulations is a core activity of our research department. Faster languages (Julia) and better algorithms typically result not in less computation but in more ambitious targets. During a discussion about how we can minimize our environmental footprint, we realized that we have no idea how much energy our simulations require.

I was therefore wondering if there is an easy way to measure the energy consumption of long-running computations? It doesn’t need to be very accurate.

The closest I found was pyJoules “to measure the energy footprint of a host machine along the execution of a piece of Python code.”

Maybe powertop (a Linux tool) could be used, but I did not figure out how to measure the energy consumption of a single process.
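On Linux with a CPU that supports it, the kernel's powercap (RAPL) interface exposes a cumulative energy counter in sysfs; this is the same source that powertop and pyJoules draw on. A minimal sketch, assuming the counter is readable at the usual path (often needs root) and ignoring counter wrap-around on very long runs:

```python
import time

# Typical sysfs path for CPU package 0; path and availability are machine-dependent.
RAPL_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj(path=RAPL_PATH):
    """Read the cumulative energy counter in microjoules."""
    with open(path) as f:
        return int(f.read())

def measure(fn, *args, path=RAPL_PATH):
    """Run fn(*args) and return (result, joules, seconds).

    Note: this counts energy for the whole CPU package, not a single
    process, so subtracting an idle baseline is advisable.
    """
    e0, t0 = read_energy_uj(path), time.monotonic()
    result = fn(*args)
    joules = (read_energy_uj(path) - e0) / 1e6
    return result, joules, time.monotonic() - t0
```

Because the counter is package-wide, per-process attribution is only approximate; scaling the package energy by the process’s share of CPU time is a common rough heuristic.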

Thanks for any hints!


You could measure the electrical energy consumed by the machine you’re running the simulation on (using simple hardware); it’s going to be far more accurate than a software solution.


I once saw a paper that compared the energy efficiency of different languages; maybe you can dig it up.

Measuring energy consumption of your hardware is trivial when running your simulations locally, but impossible for cloud-based calculations (unless the cloud provider offers it).
Other aspects that should be considered:

  • Where does the energy come from? Renewables or coal?
  • Environmental impact of your hardware production

Thanks for your suggestions!

Measuring the electricity consumption directly seems like the best option for non-cloud computations. Maybe one should subtract the baseline consumption if the computer is running anyway.
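Subtracting the baseline is simple arithmetic once you have two meter readings. A toy sketch with made-up numbers (the 60 W idle and 250 W load figures are assumptions, not measurements):

```python
# All numbers are hypothetical; take idle_w and load_w from a plug-in power meter.
idle_w = 60.0    # draw while the machine idles (it would run anyway)
load_w = 250.0   # draw while the simulation runs
hours = 10.0     # simulation runtime

total_kwh = load_w * hours / 1000.0                    # everything the meter sees
attributable_kwh = (load_w - idle_w) * hours / 1000.0  # marginal cost of the computation

print(total_kwh, attributable_kwh)  # prints: 2.5 1.9
```

Whether the baseline should be subtracted depends on the question: the marginal figure is the right one if the machine would be on anyway, the total if it exists only for the simulations.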

Life-cycle costs for hardware should be considered indeed. I’m sure we can find studies about this (with all the related uncertainties).

Using renewable energy is not a free pass, given the still limited supply: if I use more renewables, my neighbor can use less.

Here is the paper I was referring to above being discussed:


For comparison of various alternatives, I would just keep it simple and use runtime as a proxy (on the same hardware). If you have reasonable core utilization, it will be fairly accurate.

If you really want energy, measure the consumption of that hardware running a typical load for a specific duration of time and convert.
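Putting those two steps together: for comparing alternatives on the same hardware, only the runtimes matter; one measured average power figure then converts them to absolute energy. A sketch with assumed numbers (the 200 W average draw would come from your own measurement):

```python
def energy_kwh(avg_power_w, runtime_s):
    """Convert an average power draw (W) and a runtime (s) into kWh."""
    return avg_power_w * runtime_s / 3600.0 / 1000.0

# Hypothetical measured average draw under a typical simulation load:
avg_power_w = 200.0

# Runtimes of two implementations of the same simulation:
runtime_a_s = 3600.0
runtime_b_s = 2700.0

# The ratio depends only on the runtimes; the power figure sets the absolute scale.
print(energy_kwh(avg_power_w, runtime_a_s))  # 0.2 kWh
print(energy_kwh(avg_power_w, runtime_b_s))  # 0.15 kWh
```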


What I’m really curious about is the environmental footprint of running zillions of CI tests for trivial changes. Those tests can be quite heavy, and as useless as running them when only documentation has changed.