Well, no. The source is talking about using automated methods to assess gender in a situation where it would be just as easy (and more accurate) to simply ask people their gender, with appropriate options for non-binary identities (or better, a free-text field, so that you're not imposing any priors on people's gender expression). I'm 100% in agreement on that.
I'm back too, from the same thread! I don't mean to derail that broader discussion here or rehash what's over there, but are you referring to this piece? https://ironholds.org/names-gender/ If so, you and I must have read it quite differently. In my reading, the author's argument is not merely that it's unethical in an in-person or similar setting where you could simply ask for this information; it's (my paraphrase) that predicting gender from given names is unethical to begin with, because setting up the task that way implicitly bakes in very specific political ideas about gender, ideas that are themselves unavoidably racist and so on. That's not application-specific, to my mind.
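For concreteness, here's a hypothetical sketch of what the core of such a package often looks like (this is not the actual package from the thread; the names and table here are invented for illustration). The piece's point is visible in the structure itself: the set of possible outputs is fixed before any name is ever looked up.

```python
# Hypothetical sketch, not the package from the thread: a minimal
# lookup-based given-name -> gender classifier. The table and names
# are made up for illustration.

# Toy lookup table. Real packages ship lists like this, usually
# derived from census or birth-record data for particular countries
# and eras, which is where the cultural specificity comes in.
NAME_TABLE = {
    "mary": "female",
    "james": "male",
}

def guess_gender(given_name: str) -> str:
    """Guess a gender label from a given name.

    The output space is fixed in advance to "female", "male", or
    "unknown", so non-binary identities cannot be represented, and
    every name absent from the table collapses to "unknown".
    """
    return NAME_TABLE.get(given_name.strip().lower(), "unknown")

if __name__ == "__main__":
    for name in ["Mary", "James", "Wei", "Ayodele"]:
        print(f"{name} -> {guess_gender(name)}")
```

However sophisticated the lookup or model behind it gets, the function signature still forces every person into that predetermined label set, which is exactly the "baked-in" framing the author is objecting to.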
Looking at that thread again, it looks like that package was intended for making binary gender predictions for authors of academic work, so here's a recent (short) paper on that specific use case, if you (or anyone else) would like to take a look. https://www.frontiersin.org/articles/10.3389/fdata.2019.00029/full