Your current solution seems really fast already. Do you really think that it will be a bottleneck?
As for the question you linked to: that concerns programmatically generating variable identifiers, which I think is a bad idea that makes code less readable, not more.
How many variables does your code have, roughly? And is every variable a scalar, or is there more structure to your data? There may be a few different options depending on the particulars of your problem.
At present it varies from 2 to 6. Larger models with more parameters (~10 to 20) might be included in the future. Every variable is a scalar, usually a Float64.
Looking at @DNF’s benchmark, I wonder whether I’m spending energy optimizing in the wrong place in this instance. However, I’m still very interested for future reference, as it will definitely be useful to know for the type of work I’m doing.
The style you adopted, direct assignment from a single sequentially ordered datatype into distinct, appropriately named variables, is the win: it combines obviousness, clarity for others, and performance.
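For reference, a minimal sketch of that pattern in Julia (the parameter names and values here are hypothetical, just to illustrate the shape of the code):

```julia
# A hypothetical parameter vector for a small model (all scalar Float64s):
p = [0.5, 1.2, 3.4]

# Direct assignment into distinct, appropriately named variables.
# Destructuring follows the vector's sequential order:
α, β, γ = p

# The names can then be used directly and readably in the model body:
result = α * exp(-β) + γ
```

This keeps the mapping from position to meaning explicit in one place, so a reader never has to guess what `p[2]` stands for.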
The current internal setting for the number of arguments a function may process quickly, without falling back on other machinery, has been increased from roughly 14 to 30 or more. Your explicit assignments are unlikely to hit that sort of limit.
If possible, could you post links to any other Discourse posts, GitHub issues, or documentation that you think would help me understand what’s going on ‘under the hood’ to introduce this ~30-argument limit?