Unsupervised learning of probabilistic structured models presents a fundamental tradeoff between the richness of the constraints and correlations a model can capture and the efficiency and tractability of inference. In this thesis, we propose a new learning framework called Posterior Regularization (PR) that incorporates side information into unsupervised estimation in the form of constraints on the model's posteriors. The underlying model remains unchanged; only the learning procedure changes. During learning, our method resembles the EM algorithm, but inside the E-step we solve a Maximum Entropy-like projection problem to enforce the constraints. We apply the PR framework to two large-scale tasks: statistical word alignment and unsupervised part-of-speech induction. In the former, we incorporate two constraints, bijectivity and symmetry; training with these constraints produces a significant boost in both precision and recall against manually annotated alignments for six language pairs. In the latter, we enforce sparsity on the word-to-tag distribution, which standard training tends to overestimate. Experiments on six languages show dramatic improvements over state-of-the-art results.
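To make the modified E-step concrete, the sketch below shows one common way such a projection can be carried out for a discrete latent variable: minimize KL(q || p(z|x)) subject to expectation constraints E_q[phi] <= b, solved through its dual by projected gradient ascent. This is only an illustrative sketch under assumed conventions, not the thesis implementation; the function name pr_project, the feature matrix, and the bound vector b are hypothetical.

```python
import numpy as np

def pr_project(posterior, features, b, n_iters=500, lr=0.1):
    """Project a discrete posterior p(z|x) onto the constraint set
    {q : E_q[phi(x, z)] <= b} by minimizing KL(q || p(z|x)).

    posterior : shape (K,)   -- p(z|x) over K latent assignments
    features  : shape (K, F) -- constraint features phi(x, z)
    b         : shape (F,)   -- constraint bounds
    """
    lam = np.zeros(features.shape[1])           # dual variables, lambda >= 0
    for _ in range(n_iters):
        # q(z) is proportional to p(z|x) * exp(-lambda . phi(x, z))
        q = posterior * np.exp(-features @ lam)
        q /= q.sum()
        grad = q @ features - b                 # dual gradient: E_q[phi] - b
        lam = np.maximum(0.0, lam + lr * grad)  # projected gradient ascent
    q = posterior * np.exp(-features @ lam)
    return q / q.sum()

# Toy example (hypothetical numbers): three latent assignments, one feature.
p = np.array([0.7, 0.2, 0.1])                   # unconstrained posterior
phi = np.array([[1.0], [0.0], [0.0]])           # feature fires on assignment 0
q = pr_project(p, phi, b=np.array([0.4]))       # cap E_q[phi] at 0.4
print(q)                                        # mass on assignment 0 is pushed down toward 0.4
```

In an EM-style loop, the projected distribution q would then replace the unconstrained posterior when computing expected counts in the M-step, which is how the constraints influence the learned parameters without changing the model itself.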
Posterior Regularization Framework: Learning Tractable Models with Intractable Constraints
June 22, 2010
1:00 pm