The main goal of my research is to dramatically increase our ability to mine actionable knowledge from unstructured text. I am especially interested in information extraction from the Web, understanding the connections between people and between organizations, expert finding, social network analysis, and mining the scientific literature & community. Toward this end my group develops and employs various methods in statistical machine learning, natural language processing, information retrieval and data mining---tending toward probabilistic approaches and graphical models.
Currently, I'm working on tractable learning and inference algorithms for probabilistic graphical models.
My research interests lie in the areas of structured prediction, machine learning, and natural language processing. I work to make computers understand and generate natural language better (semantic parsing and generation) and to help people who speak different languages communicate (machine translation).
Currently, I'm working on interactive data integration: combining multiple heterogeneous sources of data into a single, consistent knowledge base with a human in the loop.
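As a toy illustration of the human-in-the-loop idea (a hypothetical sketch, not this project's actual system), two source records can be merged field by field, deferring only conflicting values to a human adjudicator; all record contents and the adjudication rule below are made up:

```python
# Hypothetical sketch of interactive data integration: merge two
# heterogeneous records into one, asking a (simulated) human to
# resolve conflicts. Not the author's system.

def merge_records(a, b, ask_human):
    """Merge dicts a and b; call ask_human(key, v1, v2) on conflicts."""
    merged = {}
    for key in sorted(set(a) | set(b)):
        if key in a and key in b and a[key] != b[key]:
            merged[key] = ask_human(key, a[key], b[key])  # conflict: defer to human
        else:
            merged[key] = a.get(key, b.get(key))  # only one source has a value
    return merged

src1 = {"name": "J. Smith", "affiliation": "UMass"}
src2 = {"name": "John Smith", "email": "js@example.edu"}

# Simulate a human who always prefers the longer (more specific) value.
result = merge_records(src1, src2, ask_human=lambda k, v1, v2: max(v1, v2, key=len))
print(result)
```

Here only the conflicting `name` field is escalated; the non-conflicting fields are merged automatically, which is the usual division of labor between automation and human effort in such systems.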
I am broadly interested in machine learning, natural language processing and knowledge base construction. Areas of interest include common sense knowledge, representation learning, structured prediction, coreference, and optimization.
I am interested in developing new machine learning techniques to facilitate fast (and accurate) natural language processing of text. I am very grateful to be supported by an IBM Ph.D. Fellowship Award as of fall 2017.
My research is in Information Extraction, Natural Language Processing, and Machine Learning. In summer 2017 I was a Research Science Intern at the Chan Zuckerberg Initiative, working with Meta on constructing biomedical knowledge graphs.
The focus of my research is on scalable methods for entity resolution, the task of determining the underlying entities referred to by a set of ambiguous mentions. I am interested in two settings of this problem, with and without an existing knowledge base.
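To make the task concrete (a toy sketch, not the author's method): entity resolution can be illustrated by greedily clustering ambiguous mention strings by token overlap; the mentions and the 0.5 similarity threshold below are invented for illustration:

```python
# Toy entity resolution: greedily cluster mention strings whose
# token-level Jaccard similarity exceeds a threshold.
# Illustrative only; real systems use far richer models.

def _tokens(s):
    """Lowercase, punctuation-stripped token set for a mention string."""
    return set(s.lower().replace(".", "").replace(",", "").split())

def jaccard(a, b):
    """Jaccard similarity between the token sets of two mentions."""
    ta, tb = _tokens(a), _tokens(b)
    return len(ta & tb) / len(ta | tb)

def resolve(mentions, threshold=0.5):
    """Assign each mention to the first cluster whose representative
    (first member) is similar enough, else start a new cluster."""
    clusters = []
    for m in mentions:
        for c in clusters:
            if jaccard(m, c[0]) >= threshold:
                c.append(m)
                break
        else:
            clusters.append([m])
    return clusters

mentions = ["Barack Obama", "Barack H. Obama", "Michelle Obama", "M. Obama"]
clusters = resolve(mentions)
print(clusters)
```

Note that "M. Obama" lands in its own cluster even though it plausibly refers to Michelle Obama, which hints at why the problem is hard, and why doing it at scale, with or without a knowledge base to link against, is an active research question.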
I'm generally interested in artificial intelligence, machine learning, and reasoning under uncertainty, especially as it applies to human language technology. My research focuses on compact representation of uncertainty, as well as its transmission, measurement, and calibration.
Rajarshi's research interests lie in open-domain question answering, reasoning over large knowledge bases, and anything at the intersection of the two.
My current research is on knowledge representation and reasoning. Broadly, I am interested in deep learning and reinforcement learning.
My current interests include knowledge representation, textual entailment, relation extraction, graph embedding, and active learning.
My research focuses on structured prediction, drawing on deep learning and probabilistic graphical models. I am also interested in generative models and in tasks spanning both language and vision.
My research interests lie in Natural Language Processing, especially commonsense knowledge reasoning.