Projects

Multi-facet Embedding for Language Modeling

Why Multiple Embeddings are Better in LM's Output Softmax Layer

We theoretically show that a language model's (LM's) single hidden state cannot produce all probability distributions, regardless of the LM size or training data size, because a single hidden state embedding cannot be close to the embeddings of all the possible next words simultaneously when other interfering word embeddings lie between them. Our work not only deepens our understanding of the softmax bottleneck and mixture of softmax (MoS) but also inspires us to propose multi-facet softmax (MFS) to address the limitations of MoS (Paper).
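
The remedy can be sketched in a few lines of PyTorch (an illustrative rendering, not the paper's exact architecture; the class name, dimensions, and simple averaging merge are assumptions): project the hidden state into several facet embeddings and average their softmax distributions, so different facets can cover different clusters of plausible next words.

```python
import torch
import torch.nn as nn

class MultiFacetSoftmax(nn.Module):
    """Minimal sketch: one hidden state -> K facet embeddings -> merged
    softmax. The averaging merge and dimensions are illustrative
    assumptions, not the paper's exact formulation."""
    def __init__(self, hidden_dim, vocab_size, num_facets=3):
        super().__init__()
        self.num_facets = num_facets
        self.facet_proj = nn.Linear(hidden_dim, hidden_dim * num_facets)
        self.out_embed = nn.Linear(hidden_dim, vocab_size, bias=False)

    def forward(self, h):                       # h: (batch, hidden_dim)
        facets = self.facet_proj(h)             # (batch, K * hidden_dim)
        facets = facets.view(h.size(0), self.num_facets, -1)
        logits = self.out_embed(facets)         # (batch, K, vocab_size)
        # Averaging K softmax distributions lets each facet cover a
        # different cluster of plausible next words.
        return torch.softmax(logits, dim=-1).mean(dim=1)
```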

Predicting the Future Topics for Interactive Language Generation

We design a framework that displays multiple candidate upcoming topics, of which a user can select a subset to guide the generation. Our framework consists of two components: (1) a method that produces a set of candidate topics by predicting the centers of word clusters in the possible continuations, and (2) a text generation model whose output adheres to the chosen topics. The training of both components is self-supervised, using only unlabeled text. Our experiments demonstrate that our topic options are better than those of standard clustering approaches, and our framework often generates fluent sentences related to the chosen topics, as judged by automated metrics and crowdsourced workers (Paper, Code, Talk, Slides, Poster).
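
A minimal sketch of component (1), assuming a trained `center_predictor` that maps a context vector to K cluster centers (the function names and the nearest-word labeling scheme are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def topic_options(context_vec, center_predictor, word_vecs, vocab, top_k=5):
    """Sketch: predict K cluster centers of words that may appear in the
    continuation, then label each center with its nearest vocabulary words
    so a user can read and pick topics."""
    centers = center_predictor(context_vec)          # (K, dim)
    word_norm = word_vecs / np.linalg.norm(word_vecs, axis=1, keepdims=True)
    options = []
    for c in centers:
        sims = word_norm @ (c / (np.linalg.norm(c) + 1e-8))
        nearest = np.argsort(-sims)[:top_k]
        options.append([vocab[i] for i in nearest])  # words naming the topic
    return options
```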

Multi-facet Embeddings for Distantly Supervised Relation Extraction

We propose multi-facet universal schema, which uses a neural model to represent each sentence pattern as multiple facet embeddings and encourages one of these facet embeddings to be close to a facet embedding of another sentence pattern if the two patterns co-occur with the same entity pair. In our experiments, we demonstrate that multi-facet embeddings significantly outperform their single-facet counterpart, compositional universal schema (Verga et al., 2016), on distantly supervised relation extraction tasks. Moreover, we can also use the multiple embeddings to detect the entailment relation between two sentence patterns when no manual labels are available (Paper, Code, Talk, Slides, Poster).
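
The core training signal can be sketched as follows (a hedged PyTorch rendering; the paper's actual objective and distance function may differ): only the closest facet pair is trained, leaving the other facets free to capture different meanings of the pattern.

```python
import torch

def cooccurrence_loss(facets_a, facets_b):
    """Sketch: facets_a, facets_b are (K, dim) facet embeddings of two
    sentence patterns observed with the same entity pair. Minimizing only
    the smallest pairwise distance pulls one facet of each pattern toward
    the other while the remaining facets stay free to model other senses."""
    dists = torch.cdist(facets_a, facets_b)   # (K, K) pairwise distances
    return dists.min()
```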

Predicting Cluster Centers for Sentence Representation

We propose a novel embedding method for a text sequence (e.g., a sentence) in which each sequence is represented by a distinct set of multi-mode codebook embeddings that capture different semantic facets of its meaning. The codebook embeddings can be viewed as cluster centers that summarize the distribution of possibly co-occurring words in a pre-trained word embedding space. Our experiments show that the per-sentence codebook embeddings significantly improve performance on unsupervised sentence similarity and extractive summarization benchmarks (Paper, Slides, Poster).
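
A hedged sketch of how such sets of embeddings could be compared for sentence similarity (the Chamfer-style symmetric best-match below is an assumption; the paper's actual scoring function may differ):

```python
import numpy as np

def codebook_similarity(codes_a, codes_b):
    """Sketch: each sentence is a set of K codebook embeddings (cluster
    centers of possibly co-occurring words). Score two sentences by a
    symmetric best-match over the two sets of centers."""
    a = codes_a / np.linalg.norm(codes_a, axis=1, keepdims=True)
    b = codes_b / np.linalg.norm(codes_b, axis=1, keepdims=True)
    sims = a @ b.T                  # (K_a, K_b) cosine similarities
    return 0.5 * (sims.max(axis=1).mean() + sims.max(axis=0).mean())
```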



Active Learning and Crowdsourcing

Overcoming Practical Issues of Deep Active Learning

Existing deep active learning algorithms achieve impressive sampling efficiency on natural language processing tasks. However, they exhibit several weaknesses in practice, including (a) the inability to use uncertainty sampling with black-box models, (b) a lack of robustness to labeling noise, and (c) a lack of transparency. In response, we propose a transparent batch active sampling framework that estimates the error decay curves of multiple feature-defined subsets of the data. Extensive experiments on four named entity recognition (NER) tasks show that our methods greatly alleviate these limitations without sacrificing much sampling efficiency (Paper, Slides, Talk).
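
The heart of the framework can be sketched as follows (a minimal illustration; the power-law form, the SciPy fitting, and the `expected_gain` name are assumptions rather than the paper's exact estimator): fit an error decay curve per feature-defined subset, then label more samples in the subsets whose error is predicted to drop fastest.

```python
import numpy as np
from scipy.optimize import curve_fit

def expected_gain(n_labeled, errors, batch_size=100):
    """Sketch: fit a power-law decay e(n) = a * n**-b + c to one subset's
    labeling history (n_labeled: label counts, errors: error rates), then
    estimate the error reduction from labeling `batch_size` more samples
    in that subset. Subsets with the largest predicted gain are sampled
    next; the fitted curve also makes the choice transparent to a human."""
    decay = lambda n, a, b, c: a * np.power(n, -b) + c
    (a, b, c), _ = curve_fit(decay, n_labeled, errors,
                             p0=(1.0, 0.5, 0.0), maxfev=10000)
    n_now = n_labeled[-1]
    return decay(n_now, a, b, c) - decay(n_now + batch_size, a, b, c)
```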

Active Sampling for Estimating Quality of Experience (QoE) Model

We use Bayesian learning to model the non-linear relationships between quality of experience (QoE) and multiple factors.

Our experiments show that active sampling can reduce the number of crowdsourced samples needed to build the model, but the active learning methods can also affect users' perception of video quality (Paper).

Student Modeling and Prerequisite Verification in Knowledge Tree

We extract logs of exercise answers from Junyi Academy, an e-learning website similar to Khan Academy.

We use crowdsourcing and machine learning to discover prerequisite relationships between exercises. Based on these relationships, we design an adaptive testing mechanism to improve the learning experience on Junyi Academy (Paper, Presentation, Demo, Dataset).

Emphasize Uncertain Examples in Supervised Learning

Inspired by active learning, we propose two alternatives to re-weight training samples based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD). Experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques (Paper, Poster).
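
One of the re-weighting schemes can be illustrated like this (a hedged sketch; the margin-based weight below is one lightweight uncertainty estimate, not necessarily the paper's exact formula):

```python
import torch
import torch.nn.functional as F

def uncertainty_weighted_loss(logits, targets):
    """Sketch: re-weight each sample's loss by a cheap uncertainty
    estimate computed from the forward pass itself, here the margin
    between the top-two predicted probabilities (a small margin means
    the model is unsure, so that sample gets more weight)."""
    losses = F.cross_entropy(logits, targets, reduction='none')
    probs = torch.softmax(logits, dim=-1)
    top2 = probs.topk(2, dim=-1).values          # (batch, 2)
    margin = top2[:, 0] - top2[:, 1]
    weights = (1.0 - margin).detach()            # emphasize uncertain samples
    return (weights * losses).mean()
```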



Natural Language Processing

Distributional Inclusion Vector Embedding for Unsupervised Hypernym Detection

We propose a novel word embedding method, distributional inclusion vector embedding (DIVE), that preserves the distributional inclusion property of the sparse bag-of-words (SBOW) features. The embedding can be used to predict the generality of words, detect hypernym relations, and discover topics from raw text simultaneously. Extensive experiments show that the embedding effectively compresses the SBOW and achieves new state-of-the-art performance on unsupervised hypernym detection tasks (Paper, Code, Demo, Poster). We also show that DIVE can make word sense induction more efficient (Paper, Slides).
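
The inclusion idea can be conveyed with a short scoring sketch (hedged: DIVE's actual unsupervised scoring functions are more elaborate; the coverage ratio below only illustrates the core property):

```python
import numpy as np

def inclusion_score(hypo_vec, hyper_vec):
    """Sketch: DIVE keeps embeddings non-negative so that a hypernym's
    vector should dominate its hyponyms' vectors dimension-wise (the
    distributional inclusion property). Score a candidate (hyponym,
    hypernym) pair by how much of the hyponym's mass the hypernym covers."""
    covered = np.minimum(hypo_vec, hyper_vec).sum()
    return covered / (hypo_vec.sum() + 1e-8)     # 1.0 means full inclusion
```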

UMass TAC-KBP 2016 System for Relation Extraction

TAC-KBP is one of the most challenging text-based knowledge base population tasks. We integrate research done at UMass IESL over the past year, including an embedding linker, multilingual Universal Schema, and LSTM sentence embeddings. We perform extensive error analysis and develop novel techniques (such as using a search engine to reduce noise in the training data) to tackle the remaining problems (Paper).


Computer Vision (Unsupervised Clustering and Matching)

Decomposition of Multiple Foreground Co-segmentation

We proposed an efficient algorithm that decomposes the unsupervised Multiple Foreground Co-segmentation problem into three sub-problems: segmentation, matching, and figure-ground classification.

Our method improves accuracy over the state-of-the-art method by 13% on a standard benchmark (Paper, Code).

Hierarchical Image Segmentation without Training

We proposed a general framework that applies classifiers of varying complexity to discriminate segments in an image.

Our unsupervised hierarchical segmentation results achieve performance similar to or better than that of current state-of-the-art methods based on supervised learning on several standard benchmarks (Paper, Poster, Code).

Superpixel-Based Large Displacement Optical Flow

We formulated our objective function at the superpixel level rather than the pixel level used by traditional optical flow methods.

Our method achieves better large-displacement matching than LDOF in lower-quality videos (Paper, Poster, Code).
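
A toy rendering of the superpixel-level formulation (an assumption-laden sketch; the actual data and smoothness terms in the paper differ): the energy is summed over superpixels instead of pixels, which shrinks the problem and makes large displacements affordable.

```python
import numpy as np

def superpixel_flow_energy(flows, match_cost, neighbor_pairs, lam=0.1):
    """Sketch: flows[i] is the 2-D displacement assigned to superpixel i,
    match_cost[i] is its appearance matching cost under that displacement,
    and the smoothness term penalizes flow differences between adjacent
    superpixels. Optimizing over superpixels instead of pixels keeps the
    search tractable for large displacements."""
    data_term = match_cost.sum()
    smooth_term = sum(np.linalg.norm(flows[i] - flows[j])
                      for i, j in neighbor_pairs)
    return data_term + lam * smooth_term
```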