I am interested in logical and mathematical systems and tools for modelling and reasoning about artificial intelligence.
My current project combines statistical and logical methods to understand how language works, so that we may enable computers to use it in ways similar to humans.
The statistical methods come from a pragmatic way of thinking about words: they are based on co-occurrence information retrieved from large corpora of data (e.g. books, news, Wikipedia). The logical methods, e.g. lambda calculi, type-categorial grammars, and generative systems, work alongside a principle of compositionality and formalize the grammatical structure of phrases and sentences. Putting the two together enables us to reason about words, phrases, and sentences all at the same time. I hope to mechanize the corresponding computations so that the reasoning can be automated.

I have applied these methods to tasks such as entailment, disambiguation, and paraphrasing. This line of research sheds light on the intricate structure of language and also occupies a fine interdisciplinary niche: the models I work with are inspired by high-level models of quantum mechanics, whose resources (e.g. entanglement) have proved invaluable for obtaining better experimental results on language tasks.
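As a toy illustration of this combination, the sketch below represents nouns as co-occurrence vectors and an adjective as a linear map acting on the noun space, so that the meaning of a phrase is computed from the meanings of its parts. All numbers, words, and the tiny three-dimensional space are invented for the example; in practice the vectors and maps are estimated from corpus statistics.

```python
# Toy sketch of compositional distributional semantics:
# nouns are (made-up) co-occurrence vectors, an adjective is a
# matrix that acts on noun vectors, and phrase similarity is
# measured by cosine similarity of the composed vectors.

def mat_vec(m, v):
    """Apply a matrix (list of rows) to a vector."""
    return [sum(row[i] * v[i] for i in range(len(v))) for row in m]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

# Hypothetical 3-dimensional "co-occurrence" vectors for nouns.
car = [0.9, 0.1, 0.3]
truck = [0.8, 0.2, 0.4]
banana = [0.1, 0.9, 0.2]

# The adjective "red" as a linear map on the noun space.
red = [
    [1.0, 0.0, 0.1],
    [0.0, 0.2, 0.0],
    [0.1, 0.0, 1.0],
]

red_car = mat_vec(red, car)
red_truck = mat_vec(red, truck)
red_banana = mat_vec(red, banana)

# Phrases built from similar nouns remain similar after composition.
print(cosine(red_car, red_truck) > cosine(red_car, red_banana))  # True
```

The design choice here mirrors the grammatical types: nouns live in a vector space, and an adjective, which grammatically expects a noun, is a map from that space to itself.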
Previously, I worked on algebraic and categorical models of multi-agent information flow, resulting in a sound and complete algebraic semantics and a cut-free sequent calculus with adjoint modalities for a positive and an intuitionistic fragment of dynamic epistemic logic. I applied these logics to reason about epistemic scenarios such as learning by communication and navigation.
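A minimal sketch of the kind of epistemic reasoning involved, using the standard Kripke-model semantics: an agent knows a proposition at a world iff the proposition holds in every world the agent cannot distinguish from it, and learning by communication shrinks the indistinguishability relation. The worlds, agents, and proposition below are invented for illustration; this is not the algebraic semantics or sequent calculus from the work itself.

```python
# Two worlds; proposition p holds only in w1.
# Agent "a" cannot tell w1 from w2; agent "b" can.
indist = {
    "a": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")},
    "b": {("w1", "w1"), ("w2", "w2")},
}
val = {"p": {"w1"}}  # valuation: worlds where each proposition holds

def knows(agent, prop, world):
    """Agent knows prop at world iff prop holds in every world
    the agent considers possible from that world."""
    return all(v in val[prop]
               for (u, v) in indist[agent] if u == world)

print(knows("b", "p", "w1"))  # True: b can rule out w2
print(knows("a", "p", "w1"))  # False: a still considers w2 possible

# Learning by communication: after p is announced, agent a
# discards links to worlds where p fails.
indist["a"] = {(u, v) for (u, v) in indist["a"]
               if u in val["p"] and v in val["p"]}
print(knows("a", "p", "w1"))  # True after the announcement
```

The update step is the semantic core of dynamic epistemic logic: an announcement restricts the model, which is why knowledge changes even though the facts do not.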