With pleasure I announce the release of Agents.jl 4.0: GitHub repo, Documentation, CHANGELOG.
This new release is a really big one: I'd say about 50% of the library has been either entirely rewritten or rethought to a large degree. Our focus was on the one hand large performance upgrades, but most importantly a major simplification of both the internal code base and the user API. Some highlights of this release:
- Continuous and Grid spaces support arbitrary dimensionality and have gained roughly an order of magnitude in performance.
- Easy-to-extend space types: you can write custom spaces by extending 5 functions and then have the entire Agents.jl API work for your new space type.
- Open Street Map space (thanks to @pszufe for support and an initial sketch of the space)
- Many API renames for higher clarity (deprecations are in place)
- Walking functionality for agents via `walk!`
And more. Here is a nice agent simulation (a zombie outbreak) on our new space based on OpenStreetMap:
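As a small taste of the new walking API, here is a minimal sketch. It assumes a 2D `GridSpace` model; the agent type `Walker` is purely illustrative, and passing `rand` as the direction argument is the way to take a random-walk step:

```julia
using Agents

# Illustrative agent type for a 2D grid (the fields `id` and `pos`
# are required by Agents.jl for agents living in a grid space).
mutable struct Walker <: AbstractAgent
    id::Int
    pos::NTuple{2, Int}
end

model = ABM(Walker, GridSpace((10, 10)))
agent = add_agent!((5, 5), model)

walk!(agent, (1, 0), model)  # deterministic step: one cell to the right
walk!(agent, rand, model)    # one random-walk step
```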
This post is also a good opportunity to announce the new lead developer of Agents.jl, Tim (@Libbum)
Tim has had a large influence in discussing design, solving bugs, and implementing new features for quite a while now, and is absolutely worthy to take the reins of this project, as I will be stepping down from it to focus more on the rest of the JuliaDynamics organization.
Lastly, while preparing this release we did an exhaustive comparison of Agents.jl with three major competing ABM frameworks, as part of a publication that we have put on arXiv (will update the link here). Here is the comparison table:
Our conclusion from this comparison is that Agents.jl outclasses its competitors by being simpler to use, having more features overall, and being consistently faster.
Looks great! I am glad that you made good use of my map animation code!
Looking forward to updating our WIP project to this simplified and more performant framework.
Thank you for your hard work on this project and also congratulations to @Libbum!
We’ve also just put up our paper on the arXiv if you want to check it out!
Interesting. Congrats Tim!
I’ve never done agent-based modelling, but I’ve read once or twice about people using the commercial environment AnyLogic. Would it fit in your comparison table?
Our choice of comparisons was essentially arbitrary, with caveats. There are at least 20 active, popular ABM frameworks in use today, and we needed a set for which we had access to documentation, feature lists, and standard ABM examples to benchmark against. The frameworks we chose covered some, but not all, of our requirements (as you can see, MASON does not have two of our benchmark models available).
Commercial software doesn’t lend itself to such analysis very easily: one must have a license for it and learn enough of its syntax to appropriately compare results. If sample code for the models is not available due to the commercial nature of the software, then, given the stochastic nature of ABM output, it’s very difficult to verify that you’re comparing like with like.
That being said, if there are AnyLogic experts out there, we would welcome additions to our table and would be happy to work with them on an appropriate set of tests.
While I agree with everything Tim said, I also want to voice my personal opinion: there isn’t much sense in comparing closed-source with open-source software. When doing a comparison you want to compare apples with apples. The problem with closed-source software is that you will never be able to do that, for the simple reason that you cannot prove whether a closed-source program is an apple or a pear.
Thanks for your replies. As I said, I never did Agent-based simulations, so I cannot help!
Regarding Datseris’ answer, I think it depends on how well the closed-source software is documented. These days I’m working on microgrid simulation codes very similar to the commercial simulator HOMER. Although it is closed source, it was spun off from a research lab (NREL), and the method it implements is documented in a 40-page book chapter by its original authors. So comparing features is easy.
In mathematical optimization, the 2017 journal paper which introduces JuMP.jl is filled with comparisons with the existing commercial environments GAMS and AMPL, because they are so widespread.
Sure, but GAMS et al. are optimisation frameworks that are tested against known problems, generally the subset with known local solutions. ABMs don’t have that luxury, since they are defined by a loose set of rules that attempt to describe complexity as a bottom-up process.
Not to say it’s impossible; it’s just that documentation rarely helps. For instance, the MASON manual is 375 pages long, which really wasn’t very useful when building the table above. To get decent benchmark comparisons I had to decompile Java binaries, since the source for their examples is not distributed in plain text, despite it being an ‘open source’ project.
Are the agents asynchronous or multithreaded?
We can’t provide that capability in the general case, since we need things like synchronous updates over all agents.
We do provide parallel replicates of models, so that you can run one model with different properties, or perhaps different random seeds (the seeding is currently being fixed).
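As a rough sketch of what replicates look like: the agent type, step function, and collected data below are placeholders for your own model, and the `replicates` keyword to `run!` is the relevant part (check the current docs before relying on the seeding behaviour, since that is exactly what is being fixed):

```julia
using Agents

mutable struct Sheep <: AbstractAgent  # placeholder agent type
    id::Int
    pos::NTuple{2, Int}
end

model = ABM(Sheep, GridSpace((5, 5)))
add_agent!(model)                      # one agent at a random position

sheep_step!(a, m) = walk!(a, rand, m)  # placeholder agent logic

# Run 10 independent replicates of a 100-step simulation,
# collecting each agent's position at every step.
adf, _ = run!(model, sheep_step!, dummystep, 100;
              adata = [:pos], replicates = 10)
```

Passing `parallel = true` as well distributes the replicates over worker processes, which requires setting them up first with the standard-library `Distributed` module.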
There’s no reason, though, why you cannot make agents asynchronous/multithreaded in your own model, provided the logic you require allows it. For example, the agent step function is ultimately just iterated over each agent, so you could provide a custom step function that uses async/await syntax. Or, if each agent needs to do some heavy work each step, you can use threads inside its logic.
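A threaded step function could be sketched like this. `threaded_step!` is a hypothetical helper, not part of Agents.jl, and it is only safe when the per-agent logic touches no shared model state:

```julia
using Agents

# Hypothetical helper: step every agent on its own thread.
# Only safe if agent_step! does not mutate shared model state
# (it must not add/remove agents or write model properties).
function threaded_step!(model, agent_step!)
    agents = collect(allagents(model))  # fix the set of agents up front
    Threads.@threads for agent in agents
        agent_step!(agent, model)
    end
end
```

For the threads to actually be available, start Julia with `julia -t 4` (or set `JULIA_NUM_THREADS`).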
If you have a use case, let us know. Something like this would be helpful to go into the Ecosystem Integration section of our documentation.
Oh, I mixed up Agents.jl and Actors.jl. Both are interesting, but they are different beasts.
Very happy to hear this – it will make an already great package better still!