Sam Bowman

Talk Title: 
Sentence Understanding with Neural Networks and Natural Language Inference
Event Type: 
Spring 2017
Friday, April 7, 2017, 3:30 pm - 5:00 pm
SBS S218
Artificial neural network models for language understanding problems represent an increasingly large and increasingly successful thread of research within natural language processing. When developing these models in typical settings, though, it can be difficult to identify the degree to which they capture the meanings of natural language sentences, and correspondingly difficult to identify research directions that are likely to yield progress on the underlying language understanding problem.
In this talk, I introduce natural language inference, the task of judging whether one sentence is true or false given that some other sentence is true, and argue that this task is distinctly effective as a means of developing and evaluating sentence understanding models in NLP. The talk has three sections: I'll first introduce the task and the Stanford NLI corpus (SNLI, EMNLP '15), then present the Stack-Augmented Parser-Interpreter Neural Network (SPINN, ACL '16), a model developed on that corpus, and finally introduce a new corpus-building effort and shared task competition called MultiNLI.
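To make the task concrete, here is a minimal sketch of what NLI data looks like. The sentences and labels below are illustrative inventions, not drawn from SNLI itself; the three-way label scheme (entailment / contradiction / neutral) follows the SNLI corpus's annotation convention.

```python
# Illustrative NLI examples in the style of the SNLI corpus.
# Each example pairs a premise with a hypothesis and a label saying
# whether the hypothesis must be true, must be false, or is undetermined
# given the premise. These specific sentences are made up for illustration.
examples = [
    {
        "premise": "A man is playing a guitar on stage.",
        "hypothesis": "A man is performing music.",
        "label": "entailment",     # hypothesis follows from the premise
    },
    {
        "premise": "A man is playing a guitar on stage.",
        "hypothesis": "The man is asleep.",
        "label": "contradiction",  # hypothesis cannot be true given the premise
    },
    {
        "premise": "A man is playing a guitar on stage.",
        "hypothesis": "The man is a professional musician.",
        "label": "neutral",        # premise neither confirms nor refutes this
    },
]

for ex in examples:
    print(f"{ex['label']:>13}: {ex['hypothesis']}")
```

A sentence understanding model for this task reads both sentences and predicts one of the three labels, which is what makes NLI a useful probe of whether a model captures sentence meaning.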
Sam Bowman recently started as an assistant professor at New York University. Sam is appointed in the Department of Linguistics and the Center for Data Science and is a co-director of the Machine Learning for Language group and the CILVR applied machine learning lab. He completed a PhD in Linguistics in 2016 at Stanford University as a member of the Stanford Natural Language Processing Group, and during that time was a frequent research intern at Google.

Sam's research focuses on building artificial neural network models for natural language processing problems that involve sentence understanding. Sam's 2016 work on deep generative models for text was covered by Quartz under the baffling headline "See the creepy, romantic poetry that came out of a Google AI system."