Bayesian Generalization and Word Learning
The Bayesian generalization framework has been successful
in explaining how people generalize a property from a few
observed stimuli to novel stimuli, across several different
domains. To create a Bayesian generalization
model, modelers typically specify a hypothesis space and
prior probability distribution for each specific domain.
However, this approach raises two problems: the models do not scale beyond
the (typically small) domain they were designed for, and their
reliance on a hand-coded hypothesis space and prior reduces their
explanatory power. To
solve these two problems, we propose a method for deriving
hypothesis spaces and priors from large online databases. We
evaluate our method by constructing a hypothesis space and
prior for a Bayesian word learning model from WordNet, a
large online database that encodes the semantic relationships
between words as a network. After validating our approach
by replicating a previous word learning study, we apply the
same model to a new experiment featuring three additional
taxonomic domains (clothing, containers, and seats). In
both experiments, the same automatically constructed hypothesis
space explained the complex patterns of generalization behavior,
producing accurate predictions across a total of six domains.
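The core computation can be illustrated with a minimal sketch of Bayesian generalization under the size principle, where hypotheses with smaller extensions receive higher likelihood for consistent examples. The toy taxonomy, uniform prior, and category names below are illustrative assumptions, not the hypothesis space the model derives from WordNet.

```python
# Minimal sketch of Bayesian generalization with the size principle.
# The taxonomy and uniform prior are toy assumptions for illustration.

# Each hypothesis is a candidate extension for a novel word:
hypotheses = {
    "dalmatians": {"dalmatian"},
    "dogs":       {"dalmatian", "terrier", "poodle"},
    "animals":    {"dalmatian", "terrier", "poodle", "cat", "pig"},
}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}

def posterior(observations):
    """P(h | data) ∝ P(h) * (1/|h|)^n for hypotheses consistent with the data."""
    scores = {}
    for h, ext in hypotheses.items():
        if all(x in ext for x in observations):
            scores[h] = prior[h] * (1.0 / len(ext)) ** len(observations)
        else:
            scores[h] = 0.0
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

def p_generalize(item, observations):
    """Probability that a novel item falls under the word, averaged over hypotheses."""
    post = posterior(observations)
    return sum(p for h, p in post.items() if item in hypotheses[h])

# Repeated subordinate-level examples sharpen generalization toward "dalmatians":
print(round(p_generalize("terrier", ["dalmatian"] * 3), 3))  # → 0.043
```

The size-principle likelihood is what lets the model pick an appropriate level of abstraction: one dalmatian example leaves "dogs" plausible, but three dalmatians make the narrower hypothesis much more probable.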
We also present a system for learning nouns directly from images,
using probabilistic predictions generated by visual classifiers
as the input to Bayesian word learning, and compare this system
to human performance in an automated, large-scale experiment. The
system captures a significant proportion of the variance in human
responses. By combining the uncertain outputs of the visual classifiers
with the ability of Bayesian word learning to identify an appropriate
level of abstraction, the system outperforms alternatives that either
cannot handle visual stimuli or rely on a more conventional computer
vision approach.
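One way to connect classifier outputs to the generalization model is to treat each image as a probability distribution over leaf categories and marginalize over those soft labels inside the likelihood. The sketch below assumes a size-principle likelihood and a toy taxonomy; the names and the exact mixing scheme are illustrative assumptions, not the system's actual formulation.

```python
# Hedged sketch: marginalizing uncertain classifier outputs over leaf
# categories before Bayesian generalization. Toy taxonomy for illustration.

HYP = {
    "dalmatians": {"dalmatian"},
    "dogs":       {"dalmatian", "terrier", "poodle"},
    "animals":    {"dalmatian", "terrier", "poodle", "cat", "pig"},
}

def soft_likelihood(ext, soft_labels):
    """P(images | h): for each image, sum the classifier probability mass
    falling inside h's extension, scaled by the size principle (1/|h|)."""
    p = 1.0
    for dist in soft_labels:  # dist maps leaf category -> classifier probability
        p *= sum(q for leaf, q in dist.items() if leaf in ext) / len(ext)
    return p

def soft_posterior(soft_labels):
    """Posterior over hypotheses under a uniform prior (which cancels)."""
    scores = {h: soft_likelihood(ext, soft_labels) for h, ext in HYP.items()}
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

# A classifier that is 90% sure an image shows a dalmatian:
print(soft_posterior([{"dalmatian": 0.9, "terrier": 0.1}]))
```

Compared with a hard label, the classifier's residual uncertainty shifts some posterior mass from the narrow "dalmatians" hypothesis toward broader ones, which is how classifier confidence and taxonomic abstraction interact in this kind of system.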
J.T. Abbott, J.L. Austerweil, and T.L. Griffiths. Constructing a hypothesis space
from the Web for large-scale Bayesian word learning. Proceedings of the 34th Annual Conference of the Cognitive
Science Society, 2012.
Y. Jia, J.T. Abbott, J.L. Austerweil, T.L. Griffiths, and T. Darrell. Visually-grounded
Bayesian word learning. First International Workshop on Large Scale Visual Recognition
and Retrieval, 26th Annual Conference on Neural Information Processing Systems,
Lake Tahoe, Nevada, December 2012.
Y. Jia, J.T. Abbott, J.L. Austerweil, T.L. Griffiths, and T. Darrell. Visual concept learning: combining
machine vision and Bayesian generalization on concept hierarchies. Advances in Neural Information
Processing Systems 26, 2013.