Research
Commonsense Reasoning: We all share basic knowledge and reasoning ability regarding causes and effects, physical commonsense (if you cut the branch you sit on, you will fall), and social commonsense (it's impolite to comment on someone's weight). Endowing machines with such commonsense is challenging: the knowledge is too vast to collect from humans and often too trivial to be mentioned in text. In my work, I developed a model that actively seeks additional knowledge relevant to a given situation [1].
Nonmonotonic Reasoning: Everyday causal reasoning requires drawing plausible but potentially defeasible conclusions from incomplete or hypothetical observations. Examples include abductive reasoning ("what might explain the current events?"), counterfactual reasoning ("what if?"), and defeasible reasoning ("what might weaken my conclusion?"). I'm working on developing systems capable of nonmonotonic reasoning for a wide range of situations describable in natural language [2, 3, 4].
Lexical and Compositional Semantics: Lexical variability in human language, i.e., the ability to express the same meaning in various ways, is an obstacle for natural language understanding applications. Word representations excel at capturing topical similarity (elevator/floor) as well as functional similarity (elevator/escalator), but they fail to distinguish the specific semantic relation that holds between a pair of words (see the sketch below). I developed methods for recognizing lexical semantic relations between words and phrases, including ontological relationships, e.g., cat is a type of animal and tail is a part of cat [5, 6]; interpreting noun compounds, e.g., olive oil is oil made of olives while baby oil is oil for babies [7, 8]; and identifying predicate paraphrases, e.g., that X die at Y may have the same meaning as X live until Y in certain contexts [9, 10].
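To make the limitation concrete, here is a minimal sketch, assuming the gensim library and its downloadable pre-trained GloVe vectors (illustrative choices, not the models or methods of the cited papers): cosine similarity assigns high scores to topically, functionally, and ontologically related pairs alike, so the score alone cannot tell which relation holds.

```python
# A minimal sketch of why embedding similarity alone cannot identify the
# semantic relation between two words. Assumes the gensim library and its
# downloadable pre-trained GloVe vectors (illustrative, not the models
# used in the cited work).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads on first use

pairs = [
    ("elevator", "floor"),      # topical similarity
    ("elevator", "escalator"),  # functional similarity
    ("cat", "animal"),          # hypernymy (is-a)
    ("cat", "tail"),            # meronymy (part-of)
]

# All four pairs receive a relatively high cosine similarity, but the
# score says nothing about *which* relation holds between the words.
for w1, w2 in pairs:
    print(f"{w1:>10} / {w2:<10} similarity = {vectors.similarity(w1, w2):.2f}")
```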