Program

(Slides are now available; see also the abstracts below.)

8.30-9.00: Coffee

9.00-9.30: Introduction to the Workshop, Ivan Titov (University of Amsterdam) [slides]
9.30-10.30: Inducing Semantics with Minimal or No Supervision, Dan Roth (University of Illinois at Urbana-Champaign) [slides]

10.30-11.00: Coffee break

11.00-12.00: Complex Embeddings for Simple Link Prediction, Guillaume Bouchard (University College London) [slides]

12.00-12.30: Inside-Outside Neural Chart Parsing, Phong Le and Jelle Zuidema (University of Amsterdam) [slides]

12.30-14.00: Lunch break

14.00-15.00: Language to Logical Form with Neural Attention, Mirella Lapata (University of Edinburgh) [slides]
15.00-16.00: Semantic Abstraction at Scale: Opportunities and Challenges for NLP with the FrameNet and AMR Representations, Nathan Schneider (Georgetown University) [slides]

16.00-16.30: Joint Unsupervised Induction and Factorization of Relations from Text, Diego Marcheggiani (University of Amsterdam) [slides]

16.30-18.30: Poster session (with snacks and drinks)

Abstracts

Inducing Semantics with Minimal or No Supervision, Dan Roth (University of Illinois at Urbana-Champaign)


The fundamental issue underlying natural language understanding is that of semantics: we need to understand text at an appropriate level of abstraction, beyond the word level, in order to support access, knowledge extraction, and communication. Machine learning and inference methods have become ubiquitous in our attempts to induce semantic representations of natural language and to support decisions that depend on them. However, learning models for these tasks is difficult, partly because generating supervision signals for them is costly and does not scale.

I will describe some of our research on learning semantics with indirect and incidental supervision: from “response-based” learning of models, which supports inducing semantic representations simply by observing the models’ behavior, to models that identify and classify semantic predicates by exploiting incidental supervision signals that exist in the data, independently of the task at hand. I will exemplify this approach with results on multilingual dataless categorization of text and events, Wikification, and semantic parsing.


Complex Embeddings for Simple Link Prediction, Guillaume Bouchard (University College London)


In statistical relational learning, the link prediction problem is key to automatically understanding the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization; here, however, we make use of complex-valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as the Neural Tensor Network and Holographic Embeddings, our approach based on *complex* embeddings is arguably *simpler*, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach scales to large datasets, as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.
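To make the Hermitian dot product mentioned above concrete, the following minimal sketch (illustrative names and a toy dimensionality, not the authors' code) scores a single (subject, relation, object) triple with complex-valued embeddings:

import numpy as np

def complex_score(e_s, w_r, e_o):
    # Hermitian dot product: multiply subject and relation embeddings with the
    # conjugated object embedding, sum, and keep the real part as the score.
    return float(np.real(np.sum(e_s * w_r * np.conj(e_o))))

# Toy usage with random 10-dimensional complex embeddings.
rng = np.random.default_rng(0)
d = 10
e_s = rng.normal(size=d) + 1j * rng.normal(size=d)   # subject entity
w_r = rng.normal(size=d) + 1j * rng.normal(size=d)   # relation
e_o = rng.normal(size=d) + 1j * rng.normal(size=d)   # object entity
print(complex_score(e_s, w_r, e_o))  # swapping e_s and e_o generally changes the score,
                                     # which is how antisymmetric relations can be modeled

Because the conjugation makes the product order-sensitive, the same embedding space can represent both symmetric and antisymmetric relations, which is the property the abstract highlights.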


Inside-Outside Neural Chart Parsing, Phong Le and Jelle Zuidema (University of Amsterdam)


The recursive neural network (RNN) is a popular and successful model for a variety of semantic and syntactic parsing tasks, but it processes parse trees in a bottom-up manner and can only process a single tree. This makes it unable to compute representations for the contexts of phrases in the tree and, in the typical setup, dependent on third-party parsers and vulnerable to uncertainty about the correct parse. In earlier work we therefore proposed the Inside-Outside Recursive Neural Network, which adds context representations and a top-down flow of information (leading to state-of-the-art results in dependency parsing), and the Forest Convolutional Network, which processes an entire forest of parse trees rather than a single one (leading to state-of-the-art results in sentiment analysis). In this talk, we propose a generalization and combination of those two models, yielding the Inside-Outside Neural Chart Parser. The key idea is to keep two vectors at each cell of a chart: an inside vector representing the phrase that the cell covers, and an outside vector representing the context around that phrase. Using the same mechanism as the inside-outside algorithm, our framework can handle all possible parses efficiently.
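The "two vectors per chart cell" idea can be illustrated with a deliberately simplified sketch (hypothetical composition function and averaging over split points, not the authors' implementation); only the bottom-up inside pass is shown, and an outside pass would then fill in a context vector for each cell top-down:

import numpy as np

def compose(left, right, W):
    # A tiny recursive-NN step: combine two child vectors into a parent vector.
    return np.tanh(W @ np.concatenate([left, right]))

def inside_pass(word_vecs, W):
    n = len(word_vecs)
    inside = {}                                # (i, j) -> inside vector for span [i, j)
    for i, v in enumerate(word_vecs):
        inside[(i, i + 1)] = v                 # leaf cells hold word vectors
    for width in range(2, n + 1):
        for i in range(0, n - width + 1):
            j = i + width
            # Average compositions over all split points of the span, standing in
            # for the chart's aggregation over possible subtrees covering [i, j).
            parts = [compose(inside[(i, k)], inside[(k, j)], W) for k in range(i + 1, j)]
            inside[(i, j)] = np.mean(parts, axis=0)
    return inside

d = 4
rng = np.random.default_rng(1)
words = [rng.normal(size=d) for _ in range(3)]  # toy 3-word "sentence"
W = rng.normal(size=(d, 2 * d))
chart = inside_pass(words, W)
# A subsequent outside pass would traverse the chart top-down so that every cell
# ends up with both an inside (content) and an outside (context) vector.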


Language to Logical Form with Neural Attention, Mirella Lapata (University of Edinburgh)


Semantic parsing aims at mapping natural language to machine-interpretable meaning representations. Traditional approaches rely on high-quality lexicons, manually built templates, and linguistic features which are either domain- or representation-specific. In this work we present a general method based on an attention-enhanced encoder-decoder model. We encode input utterances into vector representations and generate their logical forms by conditioning the output sequences or trees on the encoding vectors. Experimental results on four datasets show that our approach performs competitively without using hand-engineered features and is easy to adapt across domains and meaning representations.
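The attention step in such an encoder-decoder parser can be sketched as follows (generic dot-product attention with made-up dimensions; the talk's exact scoring function and architecture may differ):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(encoder_states, decoder_state):
    # encoder_states: (T, d) vectors for the T input-utterance tokens;
    # decoder_state: (d,) current state while generating the logical form.
    scores = encoder_states @ decoder_state    # one relevance score per input token
    weights = softmax(scores)                  # attention distribution over the input
    return weights @ encoder_states            # context vector, shape (d,)

# Toy usage: the context vector is combined with the decoder state to predict
# the next logical-form symbol, whether the output is a sequence or a tree.
rng = np.random.default_rng(2)
H = rng.normal(size=(5, 8))   # 5 encoded input tokens, dimension 8
s = rng.normal(size=8)
context = attend(H, s)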


Semantic Abstraction at Scale: Opportunities and Challenges for NLP with the FrameNet and AMR Representations, Nathan Schneider (Georgetown University)

In a language such as English, superficially different sentences may express the same idea, while syntactically similar sentences can express completely different ideas. For example, “James snapped a photo of me with Sheila” and “Sheila and I had our picture taken by James” are much closer in meaning than the words and syntax would suggest. NLP is increasingly turning towards linguistically-based semantic representations to achieve a more robust level of abstraction in support of language technologies such as question answering, text summarization, and machine translation.

FrameNet and AMR offer two sets of resources and conventions for abstracting away from particular words and syntactic relations. The two approaches seek to represent some of the same core semantic phenomena—primarily, relations between predicates and their arguments in a sentence. Yet the particulars of these representations reflect different motivations, enable different kinds of meaning inferences, and present different challenges for data annotation, machine learning, and structured prediction. I will give an overview of these representations, the linguistic resources in which they are instantiated, and techniques by which they are semantically parsed. Finally, I will comment on the limitations of each as a general-purpose representation, and on the prospects for bringing them together to learn richer and more robust semantic parsers.
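As a concrete illustration of the predicate-argument abstraction AMR provides, here is the standard introductory AMR example in Penman notation, wrapped in a small Python snippet; it is an illustration only and is not drawn from the talk:

# "The boy wants to go": want-01 and go-01 are predicates; the boy fills the
# ARG0 role of both, so the re-entrant variable b captures the shared argument.
amr = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
"""
print(amr)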


Joint Unsupervised Induction and Factorization of Relations from Text, Diego Marcheggiani (University of Amsterdam)


The task of Relation Extraction (RE) consists of detecting and classifying the semantic relations present in text. RE has been shown to benefit a wide range of NLP tasks, such as information retrieval, question answering and textual entailment. Supervised methods for RE have been successful when small restricted sets of relations are considered. However, human annotation is expensive and time-consuming, and consequently these approaches do not scale well to the open-domain setting (e.g., the entire Web). These limitations led to the emergence of unsupervised approaches for RE. Existing unsupervised methods for RE extract surface or syntactic patterns between two entities and either directly use these patterns as substitutes for semantic relations or cluster the patterns to form relations. These methods rely on simpler features than their supervised counterparts and also make strong modeling assumptions (e.g., assuming that arguments are conditionally independent of each other given the relation). These shortcomings are likely to harm their performance.

In this talk, I will present a novel feature-rich model for unsupervised relation extraction. The model is composed of two components: an encoding component, namely a feature-rich relation extractor that predicts a semantic relation between two entities in a sentence, and a reconstruction component, namely a factorization model that reconstructs the arguments (i.e., the entities) relying on the predicted relation. The general idea is to force the semantic representation to be useful for the most basic form of semantic inference, i.e., inferring an argument based on the relation and the other argument. I will show that this approach obtains state-of-the-art results on the relation discovery task and outperforms generative and agglomerative clustering baselines.
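The encoder/reconstruction split can be sketched as follows (hypothetical shapes, a linear encoder and a bilinear reconstruction score, chosen only to illustrate the idea rather than reproduce the talk's model):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def encode(features, W_enc):
    # Encoding component: a feature-rich (here, linear) classifier mapping
    # sentence/entity-pair features to a distribution over R latent relations.
    return softmax(W_enc @ features)

def reconstruct(rel_probs, first_arg_emb, entity_embs, rel_matrices):
    # Reconstruction component: score every candidate entity as the second
    # argument, given the predicted relation and the first argument.
    M = np.tensordot(rel_probs, rel_matrices, axes=1)   # expected relation matrix, (d, d)
    scores = entity_embs @ (M @ first_arg_emb)          # one score per candidate entity
    return softmax(scores)

rng = np.random.default_rng(3)
F, R, d, E = 20, 5, 8, 50                  # features, relations, embedding dim, entities
phi = rng.normal(size=F)                   # features of one (sentence, entity pair)
W_enc = rng.normal(size=(R, F))
rel_matrices = rng.normal(size=(R, d, d))
entity_embs = rng.normal(size=(E, d))
rel_probs = encode(phi, W_enc)
arg2_probs = reconstruct(rel_probs, entity_embs[0], entity_embs, rel_matrices)
# Training would maximise the probability of the true second argument, so the
# induced relation is forced to be predictive of its arguments.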