Our Focus

We work on one of the most challenging problems in Artificial Intelligence: teaching machines to understand natural language. We conduct innovative research that drives improvements in our products and publish papers that advance the state of the art.

Our Research Areas

20 MAY, 2020

Examining the State-of-the-Art in News Timeline Summarization

In this paper, we compare different timeline summarization (TLS) strategies using appropriate evaluation frameworks, and propose a simple and effective combination of methods that improves over the state of the art on all tested benchmarks. For a more robust evaluation, we also present a new TLS dataset, which is larger and spans longer time periods than previous datasets.
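
As a concrete illustration (ours, not the paper's method), here is a minimal Python sketch of one common TLS strategy among those compared: group articles by publication date, then summarize each date's cluster. The toy data and the length-based summarize_day helper are hypothetical stand-ins for a real summarizer.

```python
from collections import defaultdict
from datetime import date

# Toy articles: (publication_date, sentence). A real TLS system would use
# full article text plus temporal expressions extracted from it.
articles = [
    (date(2020, 3, 1), "Officials confirm the first case in the region."),
    (date(2020, 3, 1), "The first regional case is confirmed by officials."),
    (date(2020, 3, 8), "Schools close as cases rise sharply."),
    (date(2020, 3, 8), "Authorities order school closures amid rising cases."),
]

def summarize_day(sentences, k=1):
    # Hypothetical stand-in for a real summarizer: take the k longest
    # sentences as a crude proxy for informativeness.
    return sorted(sentences, key=len, reverse=True)[:k]

def build_timeline(articles, k=1):
    # Date-wise TLS strategy: cluster articles by date, then summarize
    # each date's cluster independently to form timeline entries.
    by_date = defaultdict(list)
    for day, sentence in articles:
        by_date[day].append(sentence)
    return {day: summarize_day(sents, k) for day, sents in sorted(by_date.items())}

for day, summary in build_timeline(articles).items():
    print(day, "->", summary)
```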

20 MAY, 2020

A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal

Multi-document summarization (MDS) aims to compress the content of large document collections into short summaries and has important applications in story clustering for newsfeeds, presentation of search results, and timeline generation.
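
To make the task concrete, the following is a minimal sketch of a classic extractive MDS baseline (centroid-based sentence ranking), not the method of this paper; the helper names and toy documents are our own.

```python
import math
from collections import Counter

def tf_vector(text):
    # Bag-of-words term frequencies.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def centroid_summary(documents, k=2):
    # Classic MDS baseline: represent the whole collection as one
    # "centroid" bag of words and extract the k sentences closest to it.
    sentences = [s.strip() for doc in documents for s in doc.split(".") if s.strip()]
    centroid = tf_vector(" ".join(documents))
    ranked = sorted(sentences, key=lambda s: cosine(tf_vector(s), centroid), reverse=True)
    return ranked[:k]

docs = [
    "Severe storms hit the coast. Thousands lose power overnight.",
    "Thousands are without power after severe storms hit the coast.",
]
print(centroid_summary(docs, k=2))
```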

1 FEB, 2019

Latent Multi-task Architecture Learning

We present experiments on synthetic data and on data from OntoNotes 5.0, covering four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves average error reductions of up to 15% over common approaches to multi-task learning (MTL).
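
For intuition, here is a simplified, hypothetical PyTorch sketch in the spirit of cross-stitch/sluice-style sharing, where a small learned mixing matrix decides how much two task networks exchange, so the degree of sharing (the "architecture") is trained jointly with the model. This is our illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LatentSharingLayer(nn.Module):
    # Two task-specific linear layers whose outputs are mixed by a
    # learned 2x2 matrix alpha; training alpha lets the model learn
    # how much the tasks should share at this layer.
    def __init__(self, dim):
        super().__init__()
        self.layer_a = nn.Linear(dim, dim)
        self.layer_b = nn.Linear(dim, dim)
        # Initialize near the identity, i.e. close to "no sharing".
        self.alpha = nn.Parameter(torch.eye(2) + 0.01 * torch.randn(2, 2))

    def forward(self, h_a, h_b):
        out_a = torch.relu(self.layer_a(h_a))
        out_b = torch.relu(self.layer_b(h_b))
        mixed_a = self.alpha[0, 0] * out_a + self.alpha[0, 1] * out_b
        mixed_b = self.alpha[1, 0] * out_a + self.alpha[1, 1] * out_b
        return mixed_a, mixed_b

# Usage: two task-specific hidden states of the same width.
layer = LatentSharingLayer(dim=16)
out_a, out_b = layer(torch.randn(4, 16), torch.randn(4, 16))
print(out_a.shape, out_b.shape)  # torch.Size([4, 16]) twice
```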

Check out our products to learn more about where we apply our research.