We are preparing the 2023 Julia User & Developer Survey. You are invited to review the survey and suggest edits or changes. Please enter your suggestions as comments.
The results of this survey will be presented during JuliaCon.
The draft survey is available at this link:
Do NOT complete the survey at this time. The survey will be open for response after all comments have been reviewed.
Any chance the survey could be conducted using free software instead of Google? In particular, @tecosaur developed a solution for conducting the annual Emacs user survey, written in pure Julia. From my perspective it worked quite nicely.
Not sure of the criteria for inclusion in the favorite packages list, but there has been quite a bit of action around PythonCall.jl; it might be interesting to see a comparison between it and PyCall.jl.
I was thinking that it might be a good idea to split the question about favorite packages into two, one for standard libraries and one for others. This would make it less likely that some standard library packages are overlooked and give them a bit of a special status which is in line with reality.
In case anyone is actually interested in this, feel free to shoot me a message. The framework is completely generic. There’s just config/survey.jl for setting up the questions, and some donation info in app/resources/surveys/views/thanks.jl.html (configured in app/resources/surveys/SurveysController.jl).
Or maybe even three? Standard Library, Tooling, everything else. Admittedly, there is a bit of a heap-paradox problem about what counts as Tooling, but it might make the huge list of packages more readable if we pulled out packages like Transducers.jl, Requires.jl, and maybe even Weave.jl, which are for making programming easier, faster, and more readable, versus packages that are intended to extend the functionality of Julia.
If we did this, then we could add the litany of chaining packages to Tooling, which might help in @uniment’s crusade to nail down a core chaining syntax.
Nothing will convince you a language suffers the Lisp curse faster than seeing a dozen chaining-motivated packages, proposing convergence on a single approach for proper language support, and receiving feedback that it’s undecidable, so go make yet another package.
Because of the different cultural backgrounds, it would be useful to clarify this question and include exact Julia example code for the calculation, so that everyone interprets it in the same way. For example:
using Dates
# Age in years as a float: days elapsed since the birth date,
# divided by the mean Gregorian year length (365.2425 days).
age = Dates.value(Date(now()) - Date(1999, 1, 26)) / 365.2425
Presumably the question asks for the Western chronology age definition, but other age definitions are known.
In an anonymous survey, questions should be formulated such that it is not possible to identify individuals from their answers. So one should avoid questions where only a small number of individuals will give a certain answer, and especially avoid combining many such questions in the same form.
For example, on the ethnicity/gender questions it may be too specific to have an “other/SPECIFY” option; maybe just “other” would be better.
For the favorite packages question, it would be nice to have a full list of packages automatically fetched from JuliaHub or the Julia registry. Ideally, the question would have only one text field where I can start typing my favorite package names and it autocompletes if there’s a match in the Julia registry. I don’t know how hard this would be to implement, though (see the sketch below).
As a plan B, it would be nice to have an “other” section with a text field, although I guess this would make data post-processing harder.
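For what it’s worth, the data side of this looks easy; here is a minimal, hypothetical sketch (my assumption about how one might do it, not part of any existing survey tooling) that pulls every registered package name from the General registry using only the Downloads and TOML standard libraries, given network access:

using Downloads, TOML

# Download the General registry's index file and parse it.
registry_path = Downloads.download("https://raw.githubusercontent.com/JuliaRegistries/General/master/Registry.toml")
registry = TOML.parsefile(registry_path)

# registry["packages"] maps package UUIDs to entries with "name" and "path" fields.
package_names = sort!([entry["name"] for entry in values(registry["packages"])])

println(length(package_names), " registered packages; first few: ", join(first(package_names, 5), ", "))

The list is several thousand names, so the survey front end would still need a client-side autocomplete widget on top of this, but fetching the names themselves is trivial.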
Sadly, it is impossible to do this while still collecting useful demographic data.
For example, for the last few years I have been the only Julia user from Australia living in the UK.
I think Mose has been the only Julia user from Italy in the UK.
But clearly both national origin and current location are interesting and useful questions.
Similar for years of usage.
For a whole lot of countries the first user from that country and when they joined the community is known (or at least determinable from public information).
Practically, it is not possible to achieve a priori k-anonymity through question design while keeping the data useful.
Even after the fact, you end up eliminating a whole ton of data when you do so.
Because it is a sparse dataset.
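To illustrate with made-up toy data (not real survey responses): even a modest k-anonymity requirement over just two combined demographic columns forces you to drop every row whose combination is rare.

# Toy illustration only: hypothetical responses, not real survey data.
responses = [
    (country = "Australia", location = "UK",  years = 8),
    (country = "Italy",     location = "UK",  years = 6),
    (country = "USA",       location = "USA", years = 3),
    (country = "USA",       location = "USA", years = 3),
]

# Count how many respondents share each (country, location) combination.
counts = Dict{Tuple{String,String},Int}()
for r in responses
    key = (r.country, r.location)
    counts[key] = get(counts, key, 0) + 1
end

# k-anonymity with k = 2: drop every row whose combination is shared by fewer
# than 2 respondents. Here that already removes half of the (toy) dataset.
k = 2
kept = filter(r -> counts[(r.country, r.location)] >= k, responses)
println(length(kept), " of ", length(responses), " rows survive k = ", k)

With real responses the grouping is over many more columns (country, location, years of usage, gender, …), which makes unique combinations, and hence suppression, far more common.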
For this reason the survey never releases the raw line-data from the survey responses.
Only aggregate stats.
You do have to trust the survey handlers with this data.
If you don’t, I strongly encourage you not to fill the survey in.
Speaking as someone very much in that community: it is considered problematic not to provide a free-text field for gender.
People don’t have to use it, but they should be given the option.
I say this every year, but questions of the form
Which is the MOST* … , select all that apply
are semantically nonsensical.
You cannot have multiple MOSTs.