I was scanning this paper: https://www.sciencedirect.com/science/article/pii/S0004370221000862
- Reward is enough
3.3. Reward is enough for social intelligence
3.4. Reward is enough for language
3.5. Reward is enough for generalisation
3.7. Reward is enough for general intelligence
[Note: while the paper doesn’t mention “neural network” or “deep learning”, both appear in its references, so I assume that, as with AlphaGo, RL plus a neural network is implied.]
So while I’m not sure I believe Google DeepMind’s paper (it’s a forward-looking statement), I’m curious what Julia’s status is vs. e.g. Python for reinforcement learning. I know DeepMind’s AlphaGo has been reimplemented, and while the Julia ecosystem is playing catch-up with Python for neural networks, maybe for RL it’s faster?
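To make the RL side concrete: the core of tabular RL fits in a few lines of plain Julia, no packages needed. Everything below (the toy 5-state chain environment and the hyperparameters) is made up purely for illustration, not taken from any of the papers:

```julia
# Minimal tabular Q-learning on a made-up 5-state chain; pure Julia, no packages.
const N_STATES, N_ACTIONS = 5, 2          # actions: 1 = left, 2 = right

# Toy dynamics: going right from the last state yields reward 1 and ends the episode.
function env_step(s, a)
    s2 = clamp(s + (a == 2 ? 1 : -1), 1, N_STATES)
    r  = (s == N_STATES && a == 2) ? 1.0 : 0.0
    return s2, r, r > 0
end

function q_learning(; episodes = 500, α = 0.1, γ = 0.95, ε = 0.1)
    Q = zeros(N_STATES, N_ACTIONS)
    for _ in 1:episodes
        s, done = 1, false
        while !done
            a = rand() < ε ? rand(1:N_ACTIONS) : argmax(Q[s, :])   # ε-greedy
            s2, r, done = env_step(s, a)
            # Q-learning update: move Q(s,a) toward r + γ max_a′ Q(s′,a′)
            target = r + (done ? 0.0 : γ * maximum(Q[s2, :]))
            Q[s, a] += α * (target - Q[s, a])
            s = s2
        end
    end
    return Q
end

Q = q_learning()
@show [argmax(Q[s, :]) for s in 1:N_STATES]   # greedy action per state (2 = "right")
```

The point being that RL itself is not framework-heavy; what the ecosystems really compete on is the deep-RL tooling around loops like this (environments, replay buffers, distributed training).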
Also, for other non-traditional AI, such as deep kernel learning (and Gaussian processes), are we ahead?
Purely symbolic AI was claimed to be a dead end years ago, and it seems neural networks (sub-symbolic) alone are too, at least traditional ones; but a hybrid of the two is not. Python has “Logic Tensor Networks”, which look interesting, and:
the following Neural Process variants:
- Conditional Neural Processes (CNPs)
- Neural Processes (NPs)
- Attentive Neural Processes (ANPs)
The code for CNPs can be found in conditional_neural_process.ipynb, while the code for both NPs and ANPs is located in attentive_neural_process.ipynb.
[…] further details can be found in the CNP paper, the NP paper and the ANP paper.
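For orientation, the CNP itself is a small model. Here is a rough Flux.jl sketch of its encode-aggregate-decode forward pass; the layer sizes and names are my own choices, not taken from the repository, and training is omitted:

```julia
# Sketch of a Conditional Neural Process (CNP) forward pass in Flux.jl.
# Layer widths are arbitrary; only the encode-aggregate-decode structure matters.
using Flux, Statistics

encoder = Chain(Dense(2 => 64, relu), Dense(64 => 64, relu), Dense(64 => 128))
decoder = Chain(Dense(129 => 64, relu), Dense(64 => 64, relu), Dense(64 => 2))  # outputs (μ, log σ)

# xc, yc: context inputs/outputs (1 × n_context); xt: target inputs (1 × n_target)
function cnp(xc, yc, xt)
    r_i = encoder(vcat(xc, yc))              # encode each (x, y) context pair
    r   = mean(r_i, dims = 2)                # aggregate into one representation
    h   = decoder(vcat(xt, repeat(r, 1, size(xt, 2))))
    μ, logσ = h[1:1, :], h[2:2, :]
    return μ, exp.(logσ)                     # predictive mean and std per target point
end

# Toy usage: regress a noisy sine from 10 context points at 50 target locations.
xc = Float32.(reshape(range(-3, 3, length = 10), 1, :))
yc = sin.(xc) .+ 0.1f0 .* randn(Float32, 1, 10)
xt = Float32.(reshape(range(-3, 3, length = 50), 1, :))
μ, σ = cnp(xc, yc, xt)
# Training would maximise the Gaussian log-likelihood of the target y under (μ, σ).
```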
The Promises and Pitfalls of Deep Kernel Learning
Deep kernel learning and related techniques promise to combine the representational power of neural networks with the reliable uncertainty estimates of Gaussian processes. […]
we find that a fully Bayesian treatment of deep kernel learning can rectify this overfitting and obtain the desired performance improvements over standard neural networks and Gaussian processes.
[…]
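The core idea of deep kernel learning is easy to sketch: a neural network warps the inputs, and an ordinary GP kernel is applied to the warped features. A hand-rolled Julia illustration follows (names, sizes and hyperparameters are my own choices, not from the paper; the joint training of network weights and kernel hyperparameters via the marginal likelihood is omitted):

```julia
# Deep kernel learning sketch: a GP whose kernel acts on neural-network features.
using Flux, LinearAlgebra

ϕ = Chain(Dense(1 => 32, tanh), Dense(32 => 8))      # learned feature map x → ϕ(x)

# RBF kernel between columns of feature matrices A (d×n) and B (d×m)
function rbf(A, B; ℓ = 1.0, σ² = 1.0)
    sq = [sum(abs2, A[:, i] .- B[:, j]) for i in 1:size(A, 2), j in 1:size(B, 2)]
    return σ² .* exp.(-sq ./ (2ℓ^2))
end

# GP posterior mean/variance at test inputs Xs given training data (X, y),
# using the "deep kernel" k(x, x′) = rbf(ϕ(x), ϕ(x′)).
function dkl_posterior(X, y, Xs; noise = 1e-2)
    F, Fs = ϕ(X), ϕ(Xs)
    K  = rbf(F, F) + noise * I
    Ks = rbf(F, Fs)                          # n × m cross-covariance
    C  = cholesky(Symmetric(K))
    μ  = Ks' * (C \ y)                       # posterior mean at the test points
    V  = C.L \ Ks
    σ² = diag(rbf(Fs, Fs)) .- vec(sum(abs2, V; dims = 1))
    return μ, σ²
end

# Toy usage on a noisy sine; in full DKL the feature map and kernel
# hyperparameters would be trained jointly on the GP marginal likelihood.
X  = Float32.(reshape(range(-2, 2, length = 30), 1, :))
y  = vec(sin.(3 .* X)) .+ 0.05f0 .* randn(Float32, 30)
Xs = Float32.(reshape(range(-2, 2, length = 100), 1, :))
μ, σ² = dkl_posterior(X, y, Xs)
```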
Neurosymbolic AI: The 3rd Wave
Keywords: Neurosymbolic Computing; Machine Learning and Reasoning; Explainable AI; AI Fast and Slow; Deep Learning.
Despite the impressive results, deep learning has been criticised for brittleness (being susceptible to adversarial attacks), lack of explainability (not having a formally defined computational semantics or even intuitive explanation, leading to questions around the trustworthiness of AI systems), and lack of parsimony (requiring far too much data, computational power at training time, or unacceptable levels of energy consumption) [52].
[…]
The need for a better understanding of the underlying principles of AI has become generally accepted. A key question however is that of identifying the necessary and sufficient building blocks of AI, and how systems that evolve automatically based on machine learning can be developed and analysed in effective ways that make AI trustworthy.
[…]
The current limits of neural networks as essentially a propositional system are also evaluated. In a nutshell, current neural networks are capable of representing propositional logic, nonmonotonic logic programming, propositional modal logic and fragments of first-order logic, but not full first-order or higher-order logic. This limitation has prompted the recent work in the area of Logic Tensor Networks (LTN) [79, 53, 95] which, in order to use the language of full first-order logic with deep learning, translates logical statements into the loss function rather than into the network architecture.
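That “logic in the loss function” idea is compact enough to sketch. Below, predicates are small networks with outputs in [0, 1], the connectives are one common fuzzy-logic choice, and the universally quantified rule ∀x. P(x) ⇒ Q(x) becomes a differentiable satisfaction score whose complement is minimised. This only illustrates the mechanism; it is not the actual LTN library API:

```julia
# Sketch of the Logic Tensor Networks idea: a first-order rule becomes a
# differentiable loss. The fuzzy operators and predicate nets are illustrative
# choices, not the exact ones from the LTN papers.
using Flux, Statistics

# Predicates as neural nets returning a truth degree in [0, 1]
P = Chain(Dense(2 => 16, relu), Dense(16 => 1, sigmoid))
Q = Chain(Dense(2 => 16, relu), Dense(16 => 1, sigmoid))

# Fuzzy connectives (Reichenbach implication, ∀ as average truth over a batch)
implies(a, b) = 1 .- a .+ a .* b          # a ⇒ b
forall(t)     = mean(t)

# Satisfaction of the rule ∀x. P(x) ⇒ Q(x) on a batch of samples x (2 × n)
rule_sat(x) = forall(implies(P(x), Q(x)))

# The rule enters training as a soft constraint: minimise 1 - satisfaction,
# typically alongside an ordinary supervised loss.
x = randn(Float32, 2, 64)
loss = 1 - rule_sat(x)
```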
[…]
In a nutshell, we claim that neurosymbolic AI is well placed to address concerns of computational efficiency, modularity, KR + ML and even causal inference. More researchers than ever on both sides of the connectionist-symbolic AI divide are now open to studying and learning about each others’ tools and techniques. This was not the case until very recently.
[…]
Symbolism has been expected to provide additional knowledge in the form of constraints for learning [23, 32], which ameliorate neural networks’ well-known catastrophic forgetting or difficulty with extrapolation in unbounded domains or with out-of-distribution data. The integration of neural models with logic-based symbolism is expected therefore to provide an AI system capable of explainability, transfer learning and a bridge between lower-level information processing (for efficient perception and pattern recognition) and higher-level abstract knowledge (for reasoning, extrapolation and planning).
[…]
Henry Kautz’s taxonomy for neurosymbolic AI [42] was introduced at AAAI 2020. In Kautz’s taxonomy, a Type 1 neural-symbolic integration is standard deep learning, which some may argue is a stretch.
[…]
Type 2 are hybrid systems such as DeepMind’s AlphaGo and other systems where the core neural network is loosely-coupled with a symbolic problem solver such as Monte Carlo tree search.
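The coupling in such Type 2 systems is thin: the network supplies priors and value estimates, and the tree search consumes them in its selection rule. A sketch of the AlphaGo-style PUCT selection step (field and function names are illustrative, not from any particular implementation):

```julia
# Sketch of the Type 2 coupling: a policy/value network guides node selection
# in Monte Carlo tree search via the PUCT rule, as in AlphaGo-style systems.
struct Node
    prior::Float64        # P(s, a) from the policy network
    visits::Int           # N(s, a)
    value_sum::Float64    # accumulated value estimates from search / value net
end

q(n) = n.visits == 0 ? 0.0 : n.value_sum / n.visits

# PUCT score: exploit Q plus an exploration bonus scaled by the network prior
puct(n::Node, parent_visits; c = 1.5) =
    q(n) + c * n.prior * sqrt(parent_visits) / (1 + n.visits)

# Select the child with the highest PUCT score
select(children, parent_visits) = argmax(n -> puct(n, parent_visits), children)

children = [Node(0.6, 3, 1.2), Node(0.4, 0, 0.0)]
best = select(children, 3)
```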
[…]
Type 5 are those tightly-coupled but distributed neural-symbolic systems where a symbolic logic rule is mapped onto an embedding which acts as a soft-constraint (a regularizer) on the network’s loss function. Examples of these include Logic Tensor Networks [79] and Tensor Product Representations [39], referred to in [13] as tensorization methods. Finally, a Type 6 system should be capable, according to Kautz, of true symbolic reasoning inside a neural engine. This is what one could refer to as a fully-integrated system. Early work in neural-symbolic computing has achieved this (see [20] for a historical overview).
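As a tiny illustration of the tensorization idea behind Tensor Product Representations: symbolic structure is encoded by binding “filler” vectors to “role” vectors with outer products and summing, and unbinding recovers fillers approximately (exactly, if the roles are orthonormal). The dimensions below are arbitrary:

```julia
# Tensor Product Representation sketch: bind fillers to roles via outer products.
fillers = Dict(:john => randn(8), :mary => randn(8))
roles   = Dict(:agent => randn(6), :patient => randn(6))

# Encode "john loves mary" as agent ⊗ john + patient ⊗ mary
T = roles[:agent] * fillers[:john]' + roles[:patient] * fillers[:mary]'

# Unbind the agent role (approximate; exact if the role vectors are orthonormal)
john_hat = (T' * roles[:agent]) / sum(abs2, roles[:agent])
```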
The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence
The problem is, even with massive amounts of data, and new architectures, such as the Transformer (Vaswani et al., 2017), which underlies GPT-2 (Radford et al., 2019), the knowledge gathered by contemporary neural networks remains spotty and pointillistic, arguably useful and certainly impressive, but never reliable (Marcus, 2020).
We show how Real Logic can be implemented in deep Tensor Neural Networks with the use of Google’s TensorFlow primitives.