Our Focus

We work on one of the most challenging problems in Artificial Intelligence: teaching machines to understand natural language. We conduct innovative research that drives improvements in our products and publish papers that advance the state of the art.

Our Research Areas

  • Natural Language Processing

    We evaluate our models and push the state of the art on traditional tasks such as part-of-speech tagging and dependency parsing, as well as more recent tasks such as stance detection.

  • Transfer Learning

    We conduct fundamental research into transfer learning for Natural Language Processing, focusing on multi-task learning and domain adaptation to address our customers' problems.

  • Representation Learning

    At AYLIEN, we are interested in learning meaningful representations of all levels of language, from characters to words to paragraphs and documents.
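
To make the idea of multi-level representations concrete, here is a minimal, illustrative sketch (not AYLIEN's actual models): word vectors are composed by averaging toy character embeddings, and document vectors by averaging word vectors. The random character embeddings stand in for embeddings that would be learned in practice; the composition and similarity machinery is the part being illustrated.

```python
import math
import random

random.seed(0)
DIM = 16

# Toy character embeddings: in a real system these are learned, not random.
char_vecs = {c: [random.gauss(0, 1) for _ in range(DIM)]
             for c in "abcdefghijklmnopqrstuvwxyz"}

def word_vec(word):
    """Compose a word representation by averaging its character vectors."""
    vecs = [char_vecs[c] for c in word.lower() if c in char_vecs]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]

def doc_vec(text):
    """Compose a document representation by averaging its word vectors."""
    vecs = [word_vec(w) for w in text.split()]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]

def cosine(a, b):
    """Cosine similarity between two representations."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)
```

The same averaging pattern applies at each level of the hierarchy, which is why character-level representations can flow up to words, paragraphs, and documents.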

20 MAY, 2020

Examining the State-of-the-Art in News Timeline Summarization

In this paper, we compare different timeline summarization (TLS) strategies using appropriate evaluation frameworks, and propose a simple and effective combination of methods that improves over the state of the art on all tested benchmarks. For a more robust evaluation, we also present a new TLS dataset, which is larger and spans longer time periods than previous datasets.
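
To give a feel for what a TLS system produces, here is a minimal date-wise extractive baseline, a sketch only, not the method proposed in the paper: articles are grouped by publication date, and for each date the sentence with the highest content-word frequency score is kept.

```python
from collections import Counter, defaultdict

def timeline_summary(articles, per_date=1):
    """Toy timeline summarizer.

    `articles` is a list of (date_string, text) pairs. Sentences are grouped
    by date, and the top-scoring sentences per date form the timeline."""
    # Global word frequencies act as a crude importance signal.
    freq = Counter(w.lower() for _, text in articles for w in text.split())
    by_date = defaultdict(list)
    for date, text in articles:
        for sent in text.split(". "):
            if sent:
                by_date[date].append(sent)
    timeline = []
    for date in sorted(by_date):
        scored = sorted(by_date[date],
                        key=lambda s: sum(freq[w.lower()] for w in s.split()),
                        reverse=True)
        timeline.append((date, scored[:per_date]))
    return timeline
```

Real TLS systems must additionally decide which dates matter at all, which is a large part of what the evaluation frameworks compared in the paper measure.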

20 MAY, 2020

A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal

Multi-document summarization (MDS) aims to compress the content in large document collections into short summaries and has important applications in story clustering for newsfeeds, presentation of search results, and timeline generation.
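
As a rough illustration of the compression MDS performs, here is a SumBasic-style frequency sketch (after Nenkova and Vanderwende's heuristic), not the method evaluated on this dataset: sentences from all documents are pooled, scored by average word probability, and selected greedily with a redundancy penalty.

```python
from collections import Counter

def sumbasic(documents, n_sentences=2):
    """Minimal SumBasic-style multi-document summarizer: score sentences by
    average word probability, pick the best, then down-weight its words so
    later picks cover different content (reduces redundancy)."""
    sentences = [s for doc in documents for s in doc.split(". ") if s]
    words = [w.lower() for s in sentences for w in s.split()]
    total = len(words)
    prob = {w: c / total for w, c in Counter(words).items()}
    summary = []
    while sentences and len(summary) < n_sentences:
        best = max(sentences,
                   key=lambda s: sum(prob[w.lower()] for w in s.split())
                                 / len(s.split()))
        summary.append(best)
        sentences.remove(best)
        for w in best.split():
            prob[w.lower()] **= 2  # squared-probability redundancy update
    return summary
```

The squared-probability update is what distinguishes this from plain frequency ranking: words already covered by the summary lose influence, so the next sentence tends to add new information.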

1 FEB, 2019

Latent Multi-task Architecture Learning

We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15% average error reductions over common approaches to MTL.


Check out our products

Learn more about where we apply our research