Suppose I have two machines, LAPTOP and SERVER. LAPTOP is resource-limited but convenient; SERVER is powerful but not super-convenient. Both run Linux, and I have the standard tools (I can also ask for others to be installed on SERVER, which is not under my control).
My strategy for working on large problems is to develop a “toy” version on my laptop (e.g. using small simulated data, a subsample of the data, etc.), which I can benchmark and test until it is ready to run. Then I would like to continue working on the server. I may have to go back and forth a few times.
Some of the code is packaged and gets developed in tandem with the application. Some of the code isn’t packaged but lives in a git repo; the latter is mostly “scripts”, not modules.
I am curious what workflow people use for this. I can:

- mount the server directory, work with that, and rsync back and forth (see the sketch after this list),
- sync via private repos on GitHub or GitLab (committing work in progress as I go),
- use Dropbox (but I would prefer to avoid this, as I don’t like it much).
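
For concreteness, here is a minimal sketch of the first option, assuming the project lives in `~/projects/bigjob` on both machines and that `SERVER` is an alias in `~/.ssh/config` (the names and paths are illustrative, not my actual setup):

```sh
# Push the working tree from LAPTOP to SERVER, mirroring deletions and
# skipping bulky outputs (adjust the excludes to taste):
rsync -avz --delete --exclude 'results/' ~/projects/bigjob/ SERVER:projects/bigjob/

# After the run, pull the results back to LAPTOP:
rsync -avz SERVER:projects/bigjob/results/ ~/projects/bigjob/results/
```

The GitHub/GitLab variant is just the usual loop: commit and push on LAPTOP, pull on SERVER, and the reverse for anything edited on the server.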
FWIW I am using Emacs, so remote editing is a breeze.
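
By remote editing I mean TRAMP, e.g. opening a file that lives on SERVER directly from LAPTOP; the file name below is only an example:

```sh
# Visit a remote file over SSH via TRAMP; saves go straight back to SERVER:
emacs "/ssh:SERVER:~/projects/bigjob/run_analysis.py"
```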