Latent Variables


Standard statistical approaches to many natural language processing tasks are based on linear models. But linear models are only as good as their underlying feature representation: if a model designer omits an important predictive feature, the learning algorithm cannot discover the corresponding statistical dependency, potentially yielding low predictive accuracy. This makes models expensive to create and difficult to port to new, even similar, domains (i.e., languages, genres, topics). We work on latent variable models which automatically induce predictive features by modeling interactions between elementary features, thus minimizing the amount of unnecessary feature engineering and resulting in more portable models.
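To make the intuition concrete, here is a minimal, self-contained sketch (in NumPy, purely illustrative and not one of our published models): the label depends on an interaction (XOR) of two elementary binary features, so a linear model over the raw features fails, while a model with a handful of latent units induces the needed conjunctive features automatically. All names and hyperparameters below are hypothetical choices for the toy setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label is the XOR of two binary elementary features,
# a dependency no linear model over the raw features can represent.
X = rng.integers(0, 2, size=(2000, 2)).astype(float)
y = (X[:, 0] != X[:, 1]).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Baseline: logistic regression on the raw (elementary) features.
w, b = np.zeros(2), 0.0
for _ in range(500):
    g = (sigmoid(X @ w + b) - y) / len(y)   # log-loss gradient
    w -= 0.5 * (X.T @ g)
    b -= 0.5 * g.sum()
linear_acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()

# Latent-variable model: a few hidden units whose activations act as
# induced features (here, learned conjunctions of the elementary ones).
H = 4
W1, b1 = rng.normal(0.0, 1.0, (2, H)), np.zeros(H)
w2, b2 = rng.normal(0.0, 1.0, H), 0.0
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                # induced latent features
    g = (sigmoid(h @ w2 + b2) - y) / len(y)
    gh = np.outer(g, w2) * (1.0 - h ** 2)   # backprop through tanh
    w2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum()
    W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(axis=0)

h = np.tanh(X @ W1 + b1)
latent_acc = ((sigmoid(h @ w2 + b2) > 0.5) == y).mean()
print(f"linear model accuracy: {linear_acc:.2f}")  # poor: cannot model the interaction
print(f"latent model accuracy: {latent_acc:.2f}")  # high: interaction captured by latent units
```

The same principle underlies the richer models we study: rather than hand-engineering feature conjunctions, the latent units learn them from data.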

See our research on latent variable models for syntactic parsing and semantic analysis.

We have also considered techniques that aim to reduce the domain dependence of a model by inducing a common (domain-independent or, more realistically, less domain-dependent) feature representation.
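One standard way to induce such a shared representation is to train a denoising autoencoder on pooled unlabeled data from the source and target domains and use its hidden code as features. The sketch below is a generic illustration of that idea, with placeholder data, tied weights, and hypothetical sizes; it is not a description of our specific method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder for unlabeled bag-of-words vectors pooled from a source
# and a target domain (in practice, the two domains use partly different
# surface vocabulary for similar underlying content).
V, H, N = 50, 10, 4000
X = (rng.random((N, V)) < 0.15).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Denoising autoencoder: corrupt the input, reconstruct the original.
# The hidden code, trained on data from *both* domains, serves as a
# common, less domain-dependent feature representation.
W, b, c = rng.normal(0.0, 0.1, (V, H)), np.zeros(H), np.zeros(V)
lr = 0.1
for _ in range(200):
    Xc = X * (rng.random(X.shape) > 0.3)   # randomly drop ~30% of features
    h = sigmoid(Xc @ W + b)                # shared representation
    Xr = sigmoid(h @ W.T + c)              # tied-weight reconstruction
    g = (Xr - X) / N                       # cross-entropy gradient at the output
    gh = (g @ W) * h * (1.0 - h)           # backprop into the hidden code
    W -= lr * (Xc.T @ gh + (h.T @ g).T)    # encoder + decoder contributions
    b -= lr * gh.sum(axis=0)
    c -= lr * g.sum(axis=0)

def encode(x):
    """Map raw domain-specific features into the induced shared space."""
    return sigmoid(x @ W + b)

# A classifier trained on encode(source_data) can then be applied to
# encode(target_data) with less degradation than one trained on raw features.
```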

Selected relevant publications:

keywords: latent variable models, distributed representations, deep learning