Non-friendly documentation

Thank you for mentioning the “Edit on GitHub” link, which I have missed/skipped a countless number of times while reading the docs. I have learned something.

The fact that I have missed this functionality countless times could perhaps be interpreted as a personal failing. Nevertheless, the fact that it has been pointed out countless times to countless different readers may indicate an ergonomic issue. I personally do not consider this (very interesting) functionality to be as accessible as a right-click comment on a Google Doc.

Anyway, I apologize for my ignorance, which seems to exasperate you.


Keep in mind that the time of most package maintainers is a scarce resource. There is of course always demand for getting support without reading the docs, or making any kind of effort whatsoever, but most open source projects provide this as a paid service.

While most packages are happy to provide support in issues, at least checking the docs (if they exist) is the minimum courtesy that most open source projects implicitly expect before opening one. It is of course fine if something was missed or unclear in the docs (which is an opportunity for improving them), it’s the intent that counts.


I completely agree, and the caveat to that statement is that people actually tried to resolve the issue themselves. Perhaps we need to be using issue templates that say some of this as politely as possible. I often don’t have time to say much more than “have you looked at …?”. I’m afraid it may come off as rude, but most of the time I really would like to know whether someone has, because if they have, that documentation isn’t working. And if someone hasn’t read the appropriate documentation, were they unable to find it? If so, there need to be better directions to the proper documentation.


I agree. I have fallen into your cycle myself.

I started looking into Julia a couple of weeks ago, after working quite a bit with Qt Creator and other systems. I found the Julia documentation sparse on good, progressive material.

And, in some places, items were not documented where I expected to find them. A specific example: I wanted to write a for loop from a to b by step c.

I know how to do that in C/C++: “for (i = start; i <= end; i = i + step)” (simplified for clarity, not good code). And I tried that; no go, obviously.

I looked in the Manual under Control Flow / Repeated Evaluation: Loops and found a number of examples for “for”, but none had the basic form “for i in start:step:end”. I finally found a sample in another discussion group.
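For anyone landing here with the same question: the stepped loop the C snippet above expresses is written with a range object in Julia, where the step sits inside the range itself.

```julia
# Julia's counterpart to C's `for (i = start; i <= stop; i += step)`:
# the range object `start:step:stop` carries the step.
for i in 1:2:9
    print(i, " ")          # prints: 1 3 5 7 9
end

# Ranges are ordinary values, so they can also be collected or reused:
collect(1:2:9)             # returns [1, 3, 5, 7, 9]
```

Because the range is a first-class object, the same `1:2:9` works in array indexing and comprehensions, not just in `for` headers.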

There is a lot in Julia that really impresses me; it does things the way I would do them if I were designing a language.

I commend the authors for everything they have accomplished. I would like to see Julia succeed, and I wonder how we could go about improving the documentation.


I, like most, have suffered from non-friendly documentation in some of the packages I work with (the official Julia documentation is great, by the way). In my opinion, the edit option on GitHub is very useful. However, sometimes an improvement implies adding new sections to the documentation, and that is not so easy, because it usually requires a new markdown file. Maybe making some sections of the documentation more open to users (an Examples section or similar) would allow them to improve it more easily with the edit option on GitHub.

I agree. I have fallen into your cycle myself.

I have too, of course, even though it’s also true that I’ve submitted a lot of documentation PRs for packages I don’t maintain. You certainly don’t have to (can’t) fix everything. To me this is mostly about distributing the expertise & workload in a way that accelerates the improvement of Julia as much as possible. And improving documentation—including harnessing the special gift that only newcomers can bring, which is their perspective as a newcomer—is a huge part of that.

And, in some places, items were not documented where I expected to find them.

When you have a lot to convey, putting the right information in the right spot is a hard (probably ill-defined) problem, though I don’t mean to imply that we can’t do better. For those who might be motivated to pitch in, let me post my new favorite link on writing documentation, shared with me by @Chris_Foster: it’s certainly a model I will follow the next time I sit down for a major documentation project.


I have gotten used to Qt’s documentation; it comes closer to combining example sections with reference sections, but it is still a struggle.

One documentation method that I used in the past was IBM’s Keyword In Context (KWIC). The index of a manual gave all references to a word as it was used in specific contexts.

The manuals were usually structured in a recursive style of Introduction / Body / Technical Details.

When I wanted to look something up, I looked in the index for the keyword and went to the references where that keyword was used in a particular context.

It made for very quick lookups.
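For the curious, the heart of a KWIC index is small enough to sketch. This is a toy illustration only; the function name, the word-window approach, and the parameters are mine, not IBM’s scheme.

```julia
# Toy KWIC index: map each word to the contexts (windows of
# neighboring words) in which it appears across the given lines.
function kwic_index(lines::Vector{String}; width::Int=3)
    index = Dict{String, Vector{String}}()
    for line in lines
        words = split(lowercase(line))
        for (i, word) in enumerate(words)
            lo = max(1, i - width)                 # clamp window to line start
            hi = min(length(words), i + width)     # clamp window to line end
            context = join(words[lo:hi], " ")
            push!(get!(index, String(word), String[]), context)
        end
    end
    return index
end

idx = kwic_index(["The for loop iterates over a range"])
idx["loop"]    # → ["the for loop iterates over a"]
```

A real KWIC builder would tokenize more carefully, drop stopwords, and sort the index, but the lookup-by-keyword-in-context idea is just this.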


This is a pretty big issue. Sometimes I worry the community may inadvertently reward packages that are under-documented, out of their utility to a seriously limited population of users (one in a billion people can understand them), and not value those with good docs very highly.

I found this to be a gap in learning/getting established in Julia. Sometimes I found myself writing my own version of a package rather than figuring out how to use something pre-existing, because I’ve wasted a lot of time learning someone else’s code only to find out that it was incomplete, or did not do what I needed it to do. That creates a weird kind of technical debt…

Some people (not commonly) make incomplete packages to stake a ‘claim’ to an area. Then someone else realizes, “hey, this is not useful and I don’t even understand the footing it’s on, so I’ll just make it my way,” and we end up with three packages containing the same algorithms, which may or may not be documented correctly.

I don’t mean to blow this out of proportion; docs for packages are markedly improved nowadays! But this still exists, and it’s not a “code smell”, it’s an “infrastructure/community smell”. I think we need to take a step back and ensure our community isn’t supportive of “solo heroes”, “flag planting”, “exclusivity of contributions”, and “in crowd vs. others” in order to grow past some of this. Otherwise we won’t round out as a technology and could fall into some of the (maybe inevitable) traps most communities like this have.

The community overall is good, but we could lose serious technical contributors if we aren’t careful to avoid these issues. A sober look at how we treat others’ contributions, the contributors themselves, and our own contributions could probably do us all some good.


I’m currently redesigning a lot of documentation after reading that Divio link. It’s very useful.

Perhaps we should have a documentation style guide like Invenia’s BlueStyle guide. It would at least be a place to start improving the quality of documentation across the community as a whole.


What if we had a documentation-coverage GitHub badge option?
How might that work? I can think of a few KPIs for this, but I’d be curious what others think.


I think it’s an interesting idea, but what would it mean exactly? Does it mean the percentage of code coverage due to documentation-based tests, or that the docs adhere to a specific standard?


So here is how I could potentially see this working…

# API User Score
1. Hook into Documenter.jl to find all documented functions: fn_documented
2. Get all functions that are exported: fn_exp
3. Find the cardinality of the intersection of 1 & 2
4. Score = |intersection| / |fn_exp|

# API Contributor Score
1. Hook into Documenter.jl to find all documented functions: fn_documented
2. Get all functions that are exported: fn_exp
3. Find the cardinality of the intersection of 1 & 2
4. Find the cardinality of the union of 1 & 2
5. Score = |intersection| / |union|

# Examples/tutorials score
1. Get all functions that are exported: fn_exp
2. Hook into Documenter.jl to find all code blocks
3. For each function in fn_exp, count it if some code block contains a call to it
4. Score = count / |fn_exp|

Surely we could get more complicated than this, but it seems feasible.
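A minimal version of the API User Score above doesn’t even need Documenter.jl: reflection over a module’s exported names, plus the docsystem’s metadata, gets most of the way there. A hedged sketch follows; it leans on `Base.Docs.meta` and `Base.Docs.Binding`, which are internal and could change between Julia versions, and the function names here are illustrative.

```julia
# Sketch of an "API user score": the fraction of a module's exported
# names that carry a docstring. Uses internal Docs metadata, so treat
# this as illustrative rather than production code.
has_docstring(mod::Module, name::Symbol) =
    haskey(Base.Docs.meta(mod), Base.Docs.Binding(mod, name))

function api_user_score(mod::Module)
    # `names` returns the exported names, including the module itself.
    exported = [n for n in names(mod) if n != nameof(mod)]
    isempty(exported) && return 1.0
    count(n -> has_docstring(mod, n), exported) / length(exported)
end
```

Run against a toy module with one documented and one undocumented export, this yields 0.5. The contributor score would additionally collect documented-but-unexported names to form the union in step 4 above.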


@anon92994695, an interesting thought! We have this discussion here again and again, at intervals. There seems to be a real need, and it should not always be put down with the same arguments (do a PR, get involved, etc.). I expressed the idea of KPIs some time ago and the resonance speaks for itself…


I think it might be better to start with documentation style guides and instructions first. It’s difficult to create a scoring system for something that might mean something different to each of us.


I think good documentation is an inherently hard thing to achieve, not because of a lack of trying or attention, but because of the fundamental mechanisms by which humans maintain focus and get good at something.

I work full time as a game designer, and I have learned in my career that a good writer is far rarer than a good programmer or a good artist. Writing is not necessarily a more cognitively demanding skill than programming, but its nature makes it much harder to learn and hone as you write.

To learn anything, one needs feedback, and feedback from others is rare and far between. One needs feedback from oneself: did I do a good job or not? When coding, you can set a goal all by yourself and measure your achievement quite objectively, and you can keep at it until the result satisfies your own criteria, and feel good about yourself.

But when one writes technical documentation or creative prose, one cannot do this. Did I produce many pages? Well, yes, but was it good documentation? One cannot know that by oneself.

And that is why writing documentation is not an engaging task, and why it is harder to summon energy for writing than for coding: with writing, one cannot feel the satisfaction of a passing test script.


Nice! Yes, I think KPIs or statistics-based figures of merit could help us all become more aware of the holes in the ecosystem. As an aside: it may be great to have the fastest package in the world for doing X, Y, Z, but if it’s constantly broken, and all of the non-benchmarked parts of the codebase change syntax every two weeks, a user might want to know that before selecting it as a tool for a project, especially if it’s their introduction to Julia. KPIs can offer glimpses of that.

Not to say we make a lot of false promises here, but I think if we honestly look at some of our deficits from a user’s perspective, we may change what we do with our contributions and how we maintain them.

We have been encouraged to cut 1.0 releases (probably so that people on the outside feel Julia’s ecosystem is more mature), but what if package maintainers were instead encouraged to get >80% on their documentation coverage (which should be pretty easy with docstrings and a single example)? Which would make the greater impact long term? I think the docs, but I may be wrong.


A really common complaint I’ve seen is under-documented wrapper packages: no explanation of what the package being wrapped does, how the syntax might differ, what functionality is missing, whether some functionality will never be achievable with the architecture, etc.

The reason I am not always in favor of the “submit a PR” approach for all cases of bad documentation is the following: not many people want to sign up to learn a codebase with no comments or docs, read someone else’s documentation in another language, decide whether the technology is what they want, try the package in the primary language, figure out how to use the package to use the new technology, and then fill in the docs for someone else’s effort, which may change on a whim or go unsupported.

It’s a lot like licking the frosting off a bunch of cupcakes in front of guests and then being surprised that no one wants dessert. It’s true you made everyone cupcakes, and that’s awesome! But it’s not exactly polite or courteous to others, or really to one’s future self. Some will be so hungry they will eat around the tongue marks, but most people aren’t accustomed to hosts behaving that way.

We may be solving the two-language problem, but a lot of new users would probably prefer to dart to the more “supported” package if you don’t have examples of your package in your language. That’s one rough example, but I think it’s an important area to hit. When people scope out languages, they check whether the technologies best suited to their problems are supported and usable.

Do KPIs solve everything? Not at all, but they can at least make some deficits in what’s available obvious, show contributors where to focus their efforts, and remind authors, “hey, there’s a big hole here”.


Nice discussion. Lots of good ideas.

One I haven’t seen mentioned yet: writing documentation helps the programmer develop better software.

Typical anecdote: a programmer writes software. Then the programmer starts writing a how-to guide and realizes that it is hard to explain why things are done a certain way. A bit of thinking, and the programmer realizes that the ideas implemented in the software are too complex and could be rethought, redesigned, reimplemented.

The value of documentation writing is the distance from the actual code that it creates by “not being code”. That may lead to useful insights, and a better package.


Not sure what the overlap is with Documenter.jl, but I just want to point out GoDoc; I’ve generally found Go documentation quite easy to find and read. I think the fact that it’s all hosted in one place and is easy to upload helps, as does the autogenerated API reference. Of course, the zillions of Google developer-hours don’t hurt :wink:


Being a relative newbie to Julia, I started looking into Documenter.jl with the idea of adding or annexing a KWIC index builder. I think I’m going to do some more research on KWIC, plus some trial and error on processing a .md file into a SQLite database for building/maintaining a KWIC index.

Any thoughts or ideas would be appreciated.