Cognitive and computational building blocks for more human-like language in machines

10 December 2020, Version 1
This content is an early or alternative research output and has not been peer-reviewed by Cambridge University Press at the time of posting.

Abstract

Humans learn language by building on more basic conceptual and computational resources, precursors of which are already visible in infancy. These include capacities for causal reasoning, symbolic rule formation, rapid abstraction, and common-sense representations of events in terms of objects, agents, and their interactions. I will discuss steps towards capturing these abilities in engineering terms, using tools from hierarchical Bayesian models, probabilistic programs, program induction, and neuro-symbolic architectures. I will show examples of how these tools have been applied in both cognitive science and AI contexts, and point to ways they might be useful in building more human-like language, learning, and reasoning in machines.

Keywords

Cognitive building blocks
Computational building blocks
human-like language in machines
hierarchical Bayesian models
probabilistic programs
program induction
neuro-symbolic architecture
cognitive science
AI
