After a quick Google search, I didn’t find any direct recommendation on versioning IJulia notebooks in Jupyter, but there are suggestions on Stack Exchange for IPython notebooks. One of them (the top-voted solution) is to use a git filter to clean up the notebook JSON before committing. Has anyone tried that with IJulia notebooks, or do you use a better strategy?
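To make the idea concrete, my understanding of that suggestion is a git “clean” filter that strips outputs and execution counts from the `.ipynb` JSON before it is staged. A rough sketch of such a filter in Julia, assuming the JSON.jl package is installed (the script and filter names here are just mine):

```julia
# strip_outputs.jl — used as a git "clean" filter: reads an .ipynb notebook
# from stdin, drops code-cell outputs and execution counts, and writes the
# cleaned JSON to stdout.
using JSON

nb = JSON.parse(read(stdin, String))

for cell in get(nb, "cells", Any[])
    if get(cell, "cell_type", "") == "code"
        cell["outputs"] = []            # discard rendered output
        cell["execution_count"] = nothing  # serialized as null
    end
end

JSON.print(stdout, nb, 1)  # an indent of 1 keeps the JSON close to Jupyter's own layout
```

It would be wired up with `*.ipynb filter=nbstrip` in `.gitattributes` and `git config filter.nbstrip.clean "julia strip_outputs.jl"`, so the committed copy stays output-free while the working copy keeps its results.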
I use separate repositories for notebooks so that when their history grows, I can just “restart” them. It’s the path of least resistance. Note that notebooks will bloat the git history and don’t play nicely with Julia packages, so I would advise keeping them far away from any registered package.
Then again, I would never use notebooks for any serious coding; I use them only for displaying results. By the time code gets to a notebook, it’s just a few calls into a module that I already know works. That’s probably the best way to handle notebooks.
I use nbconvert with the --execute flag to run my IJulia notebooks from top to bottom and then commit the executed notebook. That way, I can use them as unit tests (and verify that they still work). A side benefit is that the executed notebook always has the same execution counts and outputs (as long as the code is deterministic), which reduces version-control noise.
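As a rough sketch of how that can be folded into a test suite (the notebook path below is hypothetical, and `jupyter` is assumed to be on the PATH):

```julia
# test/notebooks_tests.jl — execute a notebook as part of the test suite.
using Test

# Hypothetical location; adjust to wherever your notebooks live.
notebook = joinpath(@__DIR__, "..", "notebooks", "results.ipynb")

@testset "notebook executes cleanly" begin
    # --execute re-runs every cell; --inplace writes the executed notebook
    # back to disk so the committed copy reflects a clean top-to-bottom run.
    cmd = `jupyter nbconvert --to notebook --execute --inplace $notebook`
    @test success(cmd)
end
```

If any cell throws, nbconvert exits with a nonzero status and the test fails.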
All of the above sounds good. But what if I don’t want to execute the notebook every time I version it? My notebook can take hours to run from beginning to end.