Thread
On the highway towards Human-Level AI, Large Language Models are an off-ramp.
To clarify:
LLMs that auto-regressively & reactively predict the next word are an off-ramp. They can neither plan nor reason.
But SSL-pretrained transformers are clearly a component of the solution, within a system that can reason, plan, & learn models of the underlying reality.
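To make "auto-regressively & reactively predict the next word" concrete, here is a minimal, hypothetical sketch: a toy bigram table stands in for the pretrained transformer, and the greedy decoding loop is the reactive part being criticized, since each word is chosen from local context alone with no lookahead or planning. All names here are illustrative, not from the thread.

```python
# Toy stand-in for a language model: a fixed bigram table mapping
# each word to a distribution over possible next words.
BIGRAM = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
    "dog": {"ran": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt: str, max_words: int = 5) -> str:
    """Auto-regressive decoding: repeatedly append the next word
    predicted from the sequence so far, one token at a time."""
    words = prompt.split()
    for _ in range(max_words):
        dist = BIGRAM.get(words[-1])
        if dist is None:  # no known continuation: stop
            break
        # Greedy, reactive choice: commit to the locally most
        # probable word; nothing evaluates where the sentence is going.
        words.append(max(dist, key=dist.get))
    return " ".join(words)

print(generate("the"))  # → "the cat sat down"
```

The point of the sketch is structural: whether the predictor is a bigram table or a large transformer, the loop itself never plans ahead, it only reacts to the prefix.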
My proposal for an architecture that can reason, plan, and learn models of reality:
Paper: openreview.net/forum?id=BZ5a1r-kVsf
Talk: www.youtube.com/live/VRzvpV9DZ8Y?feature=share
Why learning from text is insufficient for intelligence.
www.noemamag.com/ai-and-the-limits-of-language/
But this is not to say that LLMs in their current form are not useful. Or fun.
They are.