I think you’re objecting to “without training” (don’t take it too literally), i.e. that it would be impossible (and I agree), but I think that there “training” is a code-word for (massive) “pre-training” (it would likely also apply to “finetuning”) of deep-learning models, and/or for “supervised training/learning” (as opposed to un- or semi-supervised), as opposed to “continual learning”:
Adaptive. Whereas today’s AI models are pre-trained on massive amounts of data and then static, Genius Agents learn continuously through experience.
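To make the distinction concrete, here is a minimal toy sketch of what I read “pre-trained and then static” vs. “learns continuously through experience” to mean (plain numpy, a one-parameter linear model; the names and numbers are mine, purely illustrative, not from the quoted material):

```python
import numpy as np

# Toy sketch of "pre-trained then static" vs. "continual learning";
# the linear model, numbers and function names are all illustrative.

rng = np.random.default_rng(0)

def pretrain(xs, ys, lr=0.1, epochs=50):
    """Offline pre-training on a fixed batch; the weight is frozen afterwards."""
    w = 0.0
    for _ in range(epochs):
        grad = np.mean(2 * (w * xs - ys) * xs)  # gradient of mean squared error
        w -= lr * grad
    return w

# Pre-training data comes from one regime (y = 2x)...
xs = rng.normal(size=100)
w_static = pretrain(xs, 2 * xs)

def continual_step(w, x, y, lr=0.05):
    """Online update from a single new experience (continual learning)."""
    return w - lr * 2 * (w * x - y) * x

# ...but after "deployment" the world drifts to y = 3x.
w_continual = w_static
for _ in range(300):
    x = rng.normal()
    w_continual = continual_step(w_continual, x, 3 * x)

print(f"pre-trained, static model: w = {w_static:.2f}  (stays near 2)")
print(f"continual learner:         w = {w_continual:.2f}  (tracks the drift toward 3)")
```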
Of course “learning” is/implies some kind of “training”, not just in software but also in the brain; “train[ing]” is a frequent word in the open access Nature Communications paper “Experimental validation of the free-energy principle with in vitro neural networks” (the link to it is in blue in the sentence preceding the one you quoted):
Pharmacological downregulation of gamma-aminobutyric acid (GABA)-ergic inputs (using a GABAA-receptor antagonist, bicuculline) or its upregulation (using a benzodiazepine receptor agonist, diazepam) altered the baseline excitability of neuronal networks. These substances were added to the culture medium before the training period and were therefore present over training.
“training data” also implies “training”, as in the other paper by the professor I quoted:
learning of the dynamics or physics is only possible if data are presented in the order in which they are generated. This means that there is some requisite supervision of structure learning; in the sense that the process generating training data has to respect their ordinal structure.
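My reading of that ordinal-structure point, as a toy sketch (again plain numpy, my own made-up example, not the paper’s actual setup): a simple dynamics x_{t+1} = a·x_t + noise is only recoverable if the observations arrive in the order they were generated, because the learner needs the consecutive pairs:

```python
import numpy as np

# Toy sketch (my own example, not from the paper): the dynamics
# x_{t+1} = a * x_t + noise can only be recovered when the observations
# keep their generation order, since learning needs consecutive pairs.

rng = np.random.default_rng(1)
a_true = 0.8
x = np.zeros(1000)
for t in range(999):
    x[t + 1] = a_true * x[t] + 0.1 * rng.normal()

def fit_dynamics(seq):
    """Least-squares estimate of a from consecutive pairs of the sequence."""
    prev, nxt = seq[:-1], seq[1:]
    return np.dot(prev, nxt) / np.dot(prev, prev)

print(f"ordered data:  a ≈ {fit_dynamics(x):.2f}")                   # ≈ 0.8
print(f"shuffled data: a ≈ {fit_dynamics(rng.permutation(x)):.2f}")  # ≈ 0, ordinal structure destroyed
```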
[Another interpretation of what you wrote: AGI (which they predict is still years away) brings on the end of (economic) scarcity.]