We are working on one of the most challenging problems in Artificial Intelligence: teaching machines to understand natural language. We conduct innovative research that drives improvements in our products and publish papers that advance the state of the art.

Areas of Focus
Natural Language Processing

Teaching machines to understand the complexity of human language is one of the central challenges of AI. To push the science forward on this challenge, we build models that perform well across a wide range of NLP tasks. We evaluate our models and push the state of the art on traditional tasks such as part-of-speech tagging and dependency parsing, as well as more recent tasks such as stance detection.

Transfer Learning

Every domain and business has different data. Because machine learning models perform worse on data they have never seen before, they need to adapt to novel data in order to achieve the best performance. At AYLIEN, we conduct fundamental research into transfer learning for Natural Language Processing, with a focus on multi-task learning and domain adaptation, in order to address the problems of our customers.

Representation Learning

An effective way to create robust models that generalize well is to learn representations that are useful for many tasks. By relying on such representations rather than starting from scratch, we can train models with significantly less data. At AYLIEN, we are interested in learning meaningful representations at all levels of language, from characters to words to paragraphs and documents.
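The idea of reusing learned representations rather than starting from scratch can be sketched as follows: initialize an embedding matrix from pretrained word vectors where available, falling back to random initialization for out-of-vocabulary words. This is a minimal illustrative sketch using NumPy; the toy pretrained-vector dictionary is a stand-in for vectors loaded from, e.g., word2vec or GloVe files.

```python
import numpy as np

def build_embedding_matrix(vocab, pretrained, dim, seed=0):
    """Initialize an embedding matrix from pretrained vectors.

    Words found in `pretrained` reuse their learned representation;
    out-of-vocabulary words fall back to small random vectors.
    """
    rng = np.random.default_rng(seed)
    matrix = rng.normal(scale=0.1, size=(len(vocab), dim))
    for i, word in enumerate(vocab):
        if word in pretrained:
            matrix[i] = pretrained[word]
    return matrix

# Toy pretrained vectors (hypothetical; real ones would come from
# pretrained embedding files such as word2vec or GloVe).
pretrained = {"language": np.array([0.2, -0.1, 0.5])}
vocab = ["language", "aylien"]
emb = build_embedding_matrix(vocab, pretrained, dim=3)
```

A downstream model would then use `emb` as its (frozen or fine-tunable) embedding layer, so that rare words in the task data still start from a meaningful position in the vector space.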

Recent Publications
Evaluating the Supervised and Zero-shot Performance of Multi-lingual Translation Models
24 Jun, 2019

We study several methods for full or partial sharing of the decoder parameters of multilingual NMT models. We evaluate both fully supervised and zero-shot translation performance in 110 unique translation directions using only the WMT 2019 shared task parallel datasets for training. We use additional test sets and re-purpose evaluation methods recently used for unsupervised MT in order to evaluate zero-shot translation performance for language pairs where no gold-standard parallel data is available. To our knowledge, this is the largest evaluation of multi-lingual translation yet conducted in terms of the total size of the training data we use, and in terms of the diversity of zero-shot translation pairs we evaluate. We conduct an in-depth evaluation of the translation performance of different models, highlighting the trade-offs between methods of sharing decoder parameters. We find that models which have task-specific decoder parameters outperform models where decoder parameters are fully shared across all tasks.

Latent Multi-task Architecture Learning
1 Feb, 2019
AAAI 2019, Honolulu, Hawaii

Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)--(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15% average error reductions over common approaches to MTL.
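For background on the "common approaches to MTL" the abstract compares against, the standard hard parameter sharing baseline keeps one shared hidden layer and a separate head per task. This is a toy NumPy forward-pass sketch, not the paper's latent-architecture method, which instead learns which layers and subspaces to share.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hard parameter sharing: one shared hidden layer, one head per task.
W_shared = rng.normal(size=(4, 8))   # input dim 4 -> hidden dim 8, shared
W_task_a = rng.normal(size=(8, 3))   # head for, e.g., a 3-class tagging task
W_task_b = rng.normal(size=(8, 2))   # head for, e.g., a binary classification task

def forward(x, task_head):
    hidden = np.tanh(x @ W_shared)   # representation shared across tasks
    return hidden @ task_head        # task-specific projection

x = rng.normal(size=(1, 4))
logits_a = forward(x, W_task_a)
logits_b = forward(x, W_task_b)
```

Because gradients from both task losses flow into `W_shared`, each task regularizes the other; the open questions the paper addresses are which parameters to share, how much, and how to weight the task losses.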

A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks
1 Feb, 2019

Much effort has been devoted to evaluating whether multi-task learning can be leveraged to learn rich representations that can be used in various Natural Language Processing (NLP) down-stream applications. However, there is still a lack of understanding of the settings in which multi-task learning has a significant effect. In this work, we introduce a hierarchical model trained in a multi-task learning setup on a set of carefully selected semantic tasks. The model is trained in a hierarchical fashion to introduce an inductive bias by supervising a set of low-level tasks at the bottom layers of the model and more complex tasks at the top layers of the model. This model achieves state-of-the-art results on a number of tasks, namely Named Entity Recognition, Entity Mention Detection and Relation Extraction, without hand-engineered features or external NLP tools like syntactic parsers. The hierarchical training supervision induces a set of shared semantic representations at lower layers of the model. We show that as we move from the bottom to the top layers of the model, the hidden states of the layers tend to represent more complex semantic information.

Off-the-Shelf Unsupervised NMT
6 Nov, 2018

We frame unsupervised machine translation (MT) in the context of multi-task learning (MTL), combining insights from both directions. We leverage off-the-shelf neural MT architectures to train unsupervised MT models with no parallel data and show that such models can achieve reasonably good performance, competitive with models purpose-built for unsupervised MT. Finally, we propose improvements that allow us to apply our models to English-Turkish, a truly low-resource language pair.

Universal Language Model Fine-tuning for Text Classification
14 May, 2018
Proceedings of ACL 2018, Melbourne, Australia

Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a language model. Our method significantly outperforms the state-of-the-art on six text classification tasks, reducing the error by 18-24% on the majority of datasets. Furthermore, with only 100 labeled examples, it matches the performance of training from scratch on 100x more data. We open-source our pretrained models and code.
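One of the fine-tuning techniques introduced with ULMFiT is the slanted triangular learning rate schedule: the learning rate rises linearly for a short fraction of training, then decays linearly. The sketch below follows the schedule described in the paper; the specific hyperparameter values in the example call are illustrative, not prescriptive.

```python
import math

def slanted_triangular_lr(t, T, lr_max, cut_frac=0.1, ratio=32):
    """Slanted triangular learning rate schedule (ULMFiT).

    Rises linearly for the first `cut_frac` fraction of the T updates,
    then decays linearly; `ratio` bounds how much smaller the lowest
    learning rate is relative to `lr_max`.
    """
    cut = math.floor(T * cut_frac)
    if t < cut:
        p = t / cut
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))
    return lr_max * (1 + p * (ratio - 1)) / ratio

# Illustrative schedule over 100 updates with a peak rate of 0.01.
schedule = [slanted_triangular_lr(t, T=100, lr_max=0.01) for t in range(100)]
```

The schedule peaks at `lr_max` exactly when the warm-up fraction ends (here, update 10), which lets the model quickly converge to a suitable region of parameter space before refining its parameters at lower rates.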

On the Limitations of Unsupervised Bilingual Dictionary Induction
9 May, 2018
Proceedings of ACL 2018, Melbourne, Australia

Unsupervised machine translation---i.e., not assuming any cross-lingual supervision signal, whether a dictionary, translations, or comparable corpora---seems impossible, but nevertheless, Lample et al. (2018) recently proposed a fully unsupervised machine translation (MT) model. The model relies heavily on an adversarial, unsupervised alignment of word embedding spaces for bilingual dictionary induction (Conneau et al., 2018), which we examine here. Our results identify the limitations of current unsupervised MT: unsupervised bilingual dictionary induction performs much worse on morphologically rich languages that are not dependent marking, and when monolingual corpora come from different domains or are embedded with different algorithms. We show that a simple trick, exploiting a weak supervision signal from identical words, enables more robust induction, and establish a near-perfect correlation between unsupervised bilingual dictionary induction performance and a previously unexplored graph similarity metric.
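The "weak supervision signal from identical words" can be sketched as a seed dictionary built from words spelled identically in both vocabularies (names, numbers, loanwords), which are assumed to translate to themselves. This is an illustrative toy sketch, not the authors' exact implementation; the `min_len` filter and the sample vocabularies are assumptions for the example.

```python
def identical_word_seed(vocab_src, vocab_tgt, min_len=2):
    """Build a weak seed dictionary from identically spelled words.

    Words that appear verbatim in both vocabularies are assumed to
    translate to themselves; very short strings are filtered out to
    reduce accidental matches.
    """
    shared = {w for w in vocab_src if len(w) >= min_len} & set(vocab_tgt)
    return sorted((w, w) for w in shared)

# Toy English and German vocabularies (hypothetical).
vocab_en = ["berlin", "the", "taxi", "house", "2018"]
vocab_de = ["berlin", "das", "taxi", "haus", "2018"]
seed = identical_word_seed(vocab_en, vocab_de)
```

Such a seed dictionary can then replace the purely adversarial initialization when aligning the two embedding spaces, which is what makes the induction more robust across domains and morphologically rich languages.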

Strong Baselines for Neural Semi-supervised Learning under Domain Shift
25 Apr, 2018

Novel neural models have been proposed in recent years for learning under domain shift. Most models, however, only evaluate on a single task, on proprietary datasets, or compare to weak baselines, which makes comparison of models difficult. In this paper, we re-evaluate classic general-purpose bootstrapping approaches in the context of neural networks under domain shifts vs. recent neural approaches, and propose a novel multi-task tri-training method that reduces the time and space complexity of classic tri-training. Extensive experiments on two benchmarks are negative: while our novel method establishes a new state-of-the-art for sentiment analysis, it does not consistently perform best. More importantly, we arrive at the somewhat surprising conclusion that classic tri-training, with some additions, outperforms the state of the art. We conclude that classic approaches constitute an important and strong baseline.
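The core of classic tri-training is its label selection rule: a model receives a pseudo-label for an unlabeled example when the other two models agree on it. The toy sketch below shows only that selection step, omitting the surrounding retraining loop and any confidence thresholds; the example predictions are made up for illustration.

```python
def tri_training_pseudo_labels(preds_a, preds_b, preds_c):
    """One round of classic tri-training label selection.

    For each unlabeled example, a model is given a pseudo-label
    whenever the other two models agree on their prediction.
    Returns, per model index, a list of (example_index, label) pairs.
    """
    new_labels = {0: [], 1: [], 2: []}
    for i, example_preds in enumerate(zip(preds_a, preds_b, preds_c)):
        for m in range(3):
            others = [example_preds[j] for j in range(3) if j != m]
            if others[0] == others[1]:
                new_labels[m].append((i, others[0]))
    return new_labels

# Two unlabeled examples, three models' predictions (hypothetical).
labels = tri_training_pseudo_labels(["pos", "neg"], ["pos", "pos"], ["pos", "neg"])
```

Each model is then retrained on its original labeled data plus its newly pseudo-labeled examples, and the process repeats until the models stop changing.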

360° Stance Detection
3 Apr, 2018
Proceedings of NAACL-HLT 2018: System Demonstrations

The proliferation of fake news and filter bubbles makes it increasingly difficult to form an unbiased, balanced opinion towards a topic. To ameliorate this, we propose 360° Stance Detection, a tool that aggregates news with multiple perspectives on a topic. It presents them on a spectrum ranging from support to opposition, enabling the user to base their opinion on multiple pieces of diverse evidence.

Fine-tuned Language Models for Text Classification
18 Jan, 2018

Transfer learning has revolutionized computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Fine-tuned Language Models (FitLaM), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a state-of-the-art language model. Our method significantly outperforms the state-of-the-art on five text classification tasks, reducing the error by 18-24% on the majority of datasets. We open-source our pretrained models and code to enable adoption by the community.

An Overview of Multi-Task Learning in Deep Neural Networks
Modeling documents with Generative Adversarial Networks
Revisiting the Centroid-Based Method: A Strong Baseline for Multi-Document Summarization
Recent Blog
3 JUL, 2017
A TensorFlow implementation of “A neural autoregressive topic model” (DocNADE)

In this post we give a brief overview of the DocNADE model, and provide a TensorFlow implementation...

17 MAY, 2017
A Call for Research Collaboration at AYLIEN 2017

At AYLIEN we are using recent advances in Artificial Intelligence to try to understand natural language. Part of what we do is building products such...

13 APR, 2017
Flappy Bird and Evolution Strategies: An Experiment

Having recently read Evolution Strategies as a Scalable Alternative to Reinforcement Learning, Mahdi wanted to run an experiment of his own using Evolution Strategies. Flappy Bird has always been among Mahdi’s favorites...

21 DEC, 2016
Highlights of NIPS 2016: Adversarial Learning, Meta-learning and more

Our researchers at AYLIEN keep abreast of and contribute to the latest developments in the field of Machine Learning. Recently, two of our research scientists, John Glover and Sebastian Ruder, attended NIPS 2016 in Barcelona, Spain...

Looking to Collaborate?

At AYLIEN we are open to collaborating with universities and researchers in related research areas. We frequently host research interns, PhD students and Postdoctoral fellows, and we also collaborate with researchers from other organizations. If your research interests align with ours, please feel free to get in contact with us.