At $Work I’m wrestling with Windows installer frameworks - somehow, even the provided examples don’t want to work… Maybe I’ll have better luck tomorrow.
At $Home I’m continuing with what I call IOLogging.jl (not yet available) - a simple IOLogging package providing a general IOLogger (where the user is responsible for flushing and closing the IO) and a FileLogger (where differing LogLevels can be redirected to different files - flushing and closing of the files is taken care of). Just some more tests and documentation and it should be good to go.
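Since IOLogging.jl isn’t released yet, here is only a minimal sketch of the “user owns the IO” idea using nothing but the stdlib Logging module - not the package’s actual API:

```julia
using Logging

# The caller creates and owns the IO; the logger just writes to it.
io = IOBuffer()
logger = SimpleLogger(io, Logging.Info)

with_logger(logger) do
    @info "training started"
    @debug "below the Info threshold, so this is dropped"
end

# The caller is responsible for flushing/closing; here we just read back.
output = String(take!(io))
```

The FileLogger described above would presumably take this one step further and route different LogLevels to different files while managing the flushing itself.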
I’m submitting two papers today/tomorrow. But once that’s done, I’m updating ModelingToolkit.jl so that it can be faster and internally utilize Julia ASTs (this is the next-generation DiffEq DSL). Then I hope to help with sensitivity analysis in parameter estimation and associated benchmarks, probably look at the QNDF and QBDF benchmarks, and update PuMaS.jl to get ready for its release.
Porting a mid-size C++ project to Julia. There have already been a few cases where Julia allows for more readable code at negligible cost; hopefully I can find more!
For work I’m getting the code in place for running user-defined cellular automata/dispersal kernel simulations on GPUs in Cellular.jl and Dispersal.jl, and writing an implementation of a processor-intensive dispersal model for a paper.
My thesis is less straightforward, but it involves plant growth model parametrization…
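I don’t know Cellular.jl’s actual rule interface, but the core idea of a user-defined rule swept over a grid can be sketched like this (hypothetical dispersal rule, plain CPU arrays, not the package’s API):

```julia
# A dispersal-style rule: each cell keeps its population plus a fraction
# of each of its four neighbours' populations.
function step(grid::Matrix{Float64}; rate=0.1)
    out = copy(grid)
    nrows, ncols = size(grid)
    for j in 1:ncols, i in 1:nrows
        for (di, dj) in ((1, 0), (-1, 0), (0, 1), (0, -1))
            ni, nj = i + di, j + dj
            if 1 <= ni <= nrows && 1 <= nj <= ncols
                out[i, j] += rate * grid[ni, nj]
            end
        end
    end
    return out
end

grid = zeros(5, 5)
grid[3, 3] = 1.0      # seed one occupied cell
grid2 = step(grid)    # population spreads to the four neighbours
```

The GPU part is then about expressing this per-cell update as a kernel over the array instead of a serial loop.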
Working on coding a very complex Bayesian model, and in the meantime developing two new packages to cooperate with DynamicHMC.jl: one that makes coding custom domain transformations (especially log Jacobian determinants) less painful, the other that allows using a wider array of AD tools for derivative-based MCMC methods.
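A tiny illustration of the log-Jacobian bookkeeping such a package automates (illustrative helper names, not the package’s API): mapping an unconstrained θ to a positive parameter x = exp(θ) contributes log|dx/dθ| = θ to the log density.

```julia
# Transform an unconstrained value to the positive reals and return
# the value together with the log Jacobian determinant of the map.
function to_positive(θ::Real)
    x = exp(θ)
    logjac = θ      # log |d exp(θ)/dθ| = log(exp(θ)) = θ
    return x, logjac
end

# Log density of x ~ Exponential(1), adjusted to the unconstrained space
# so an HMC sampler can work on all of ℝ:
function logdensity_unconstrained(θ)
    x, logjac = to_positive(θ)
    return -x + logjac      # log pdf of Exponential(1) is -x on x > 0
end

x, lj = to_positive(0.0)
```

Doing this by hand for every constrained parameter in a complex model is exactly the pain point such a package removes.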
At work I’m making a dashboard to show how a model performs in production. As part of this I’ve been writing a Julia wrapper for the BigQuery command line tool so I can run queries and download the resulting tables. By transferring the table via a Cloud Storage bucket I’ve been able to get the results onto disk so much faster than the package I used to use in R, which instead paged painfully slowly through the results from the REST interface. I will probably have to build the front end of the dashboard in something like R’s Shiny, but I hope that one day I can do that in Julia too.
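This isn’t the author’s wrapper, but the general shape of wrapping a CLI like `bq` in Julia - turning keyword arguments into flags on a `Cmd` and reading its output - might look like this (`--format` and `--use_legacy_sql` are real `bq` flags as far as I know; the function name is made up):

```julia
# Build a `bq query` command from Julia keyword options
# (illustrative wrapper, not the author's package).
function bq_query_cmd(sql::AbstractString; format="csv", legacy_sql=false)
    return `bq query --format=$(format) --use_legacy_sql=$(string(legacy_sql)) $(sql)`
end

# Actually running it would be `read(bq_query_cmd("SELECT 1"), String)`,
# which captures stdout; here we only inspect the constructed command.
cmd = bq_query_cmd("SELECT 1")
```

Julia’s backtick interpolation quotes each argument as a single word, which avoids most of the shell-escaping pain of wrapping CLIs.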
At home I’m working on corrections for my thesis, which has been a part-time saga for a while now (major corrections + no deadline + full-time job that doesn’t require the PhD to actually be awarded = little motivation). I’m almost done with the bit where I need to actually alter code. The current hurdle is just updating things for the multiple releases Plots.jl has had since I made the plots the first time around!
I recently got really bored with trying to keep track of which parameter sets belong to which model. Should I store them in different files? What should I name the variables? Am I sure that ‘this’ particular parameter set was optimised towards ‘this’ particular model? What data was used during the optimisation?
So now I’m writing a new package to relieve me of these woes.
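The package isn’t named or released yet, but one minimal way to attack the bookkeeping problem described above (hypothetical record type, not the forthcoming package) is to store each parameter set together with the model it belongs to and a fingerprint of the data it was optimised on:

```julia
using SHA  # stdlib

# Bundle a parameter set with its provenance.
struct ParamRecord
    model::String                  # which model these parameters belong to
    params::Dict{Symbol,Float64}   # the parameter set itself
    data_hash::String              # fingerprint of the optimisation data
end

fingerprint(data) = bytes2hex(sha256(string(data)))

data = [1.0, 2.0, 3.0]
rec = ParamRecord("logistic_growth",
                  Dict(:r => 0.42, :K => 100.0),
                  fingerprint(data))

# Later: check that a parameter set really matches the data you think it does.
matches = rec.data_hash == fingerprint([1.0, 2.0, 3.0])
```

Serialising such records to one file per model (or one table for all of them) answers the “which file, which variable name” questions in one place.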
That’s very interesting. I have tried to wrap another command line tool, but I was not very successful.
I would be very interested to see how you manage all the options and arguments.
Finished porting our network inference package (FlashWeave.jl) to Julia 1.0, which went much more smoothly than expected! (big thanks to FemtoCleaner and the great tips from Upgradathon Fridays)
Otherwise, wrapping my head around GPUArrays and being amazed by what is already possible with them.
Going through the Applied Predictive Modeling book and trying to run the same sample code in Julia instead of R. I realized that the Box-Cox transformation isn’t available in any Julia package, so I created the package myself. Now I’m debating whether I should continue building other transformations, e.g. Yeo-Johnson.
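For reference, the Box-Cox transform itself is small; a sketch (not the author’s package) for a fixed λ is:

```julia
# Box-Cox power transform for positive x:
#   y = (x^λ - 1) / λ   for λ ≠ 0
#   y = log(x)          for λ = 0
function boxcox(x::Real, λ::Real)
    x > 0 || throw(DomainError(x, "Box-Cox requires positive input"))
    return λ == 0 ? log(x) : (x^λ - 1) / λ
end
```

The λ = 0 branch is the limit of the general formula, so the transform is continuous in λ; the Yeo-Johnson variant mentioned above extends the idea to zero and negative x, which is the main reason to add it next.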