The main goal of my research is to dramatically increase our ability to mine actionable knowledge from unstructured text. I am especially interested in information extraction from the Web, understanding the connections between people and between organizations, expert finding, social network analysis, and mining the scientific literature and community. Toward this end, my group develops and employs various methods in statistical machine learning, natural language processing, information retrieval, and data mining, tending toward probabilistic approaches and graphical models.
My current research focuses on box embeddings and structured prediction. Broadly speaking, I am interested in information extraction, program synthesis, deep learning and optimization.
My research focuses on injecting human knowledge and constraints into neural models, primarily for natural language processing tasks. I am broadly interested in structured prediction, multi-task learning, logical reasoning, and better representation learning for the aforementioned topics.
My current research is on probabilistic programming languages. In particular, I study the relationship between programming languages and gradient-based inference.
My current interests include multi-facet embedding, knowledge representation, textual entailment, relation extraction, and active learning.
My research focuses on structured prediction involving deep learning and probabilistic graphical models. I am also interested in generative models and in tasks that concern both language and vision models.
My main research interests lie in information extraction, knowledge representation and reasoning, question answering and machine learning.
I am broadly interested in machine learning and natural language processing. A bit more specifically, I am interested in methods for learning representations of entities, structured prediction models, and clustering.
My current interests include neural network models that learn representations of commonsense knowledge defined as event frames and event chains.
My research interests are in Natural Language Processing. At the moment, I am working on understanding document-level discourse structure, and applications of this research to the peer review domain.
My current research interests are in automatic structure discovery, instance-based learning, and text generation.
I am interested in time-aware representation learning for both text and knowledge bases. Currently, I am working with box embeddings to model relational data.
I am interested in machine learning, natural language processing, learning with limited data, and information extraction.
I am broadly interested in machine learning for NLP, with a special focus on interpretable and instructable methods that can learn from raw data while being guided by structured external knowledge and constraints.
My current research involves structured prediction and representation learning for natural language. I also work on efficient deep learning, computer vision, and interdisciplinary applications of algorithms, such as AI for sustainability.
I am interested in theoretical machine learning and natural language processing, including semantic parsing and information extraction.