
Research

We are working on one of the most challenging problems in Artificial Intelligence: teaching machines to understand natural language. We conduct innovative research that drives improvements in our products and publish papers that advance the state of the art.

Areas of Focus
Natural Language Processing

Teaching machines to understand the complexity of human language is one of the central challenges of AI. To push the science forward on this challenge, we build models that perform well for a wide range of NLP tasks. We evaluate our models and push the state of the art on traditional tasks such as part-of-speech tagging and dependency parsing, as well as more recent tasks such as stance detection.
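As a concrete illustration of the first two of these tasks, the following minimal Python sketch runs part-of-speech tagging and dependency parsing with the open-source spaCy library. It is purely illustrative and does not reflect AYLIEN's internal models; it assumes the en_core_web_sm model has been downloaded (python -m spacy download en_core_web_sm).

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Machines are learning to understand human language.")

for token in doc:
    # token.pos_ is the part-of-speech tag, token.dep_ the dependency
    # relation, and token.head the token it attaches to in the parse tree.
    print(f"{token.text:12} {token.pos_:6} {token.dep_:10} -> {token.head.text}")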

Transfer Learning

The data of every domain and business is different. Because Machine Learning models perform worse on data they have never seen before, they need to adapt to novel data in order to achieve the best performance. At AYLIEN, we conduct fundamental research into transfer learning for Natural Language Processing, with a focus on multi-task learning and domain adaptation, in order to address the problems our customers face.
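One common realization of multi-task learning is hard parameter sharing, where all tasks share an encoder and keep their own output heads, so the shared parameters receive gradients from every task. The sketch below, in PyTorch, shows the idea; the layer sizes and the two example tasks are illustrative assumptions, not a description of our production models.

import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, input_dim=300, hidden_dim=128):
        super().__init__()
        # Shared encoder: its parameters are updated by all tasks, which
        # acts as a regularizer and helps generalization to new domains.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
        )
        # Task-specific heads, e.g. 3-way sentiment and binary stance
        # (hypothetical tasks chosen for illustration).
        self.sentiment_head = nn.Linear(hidden_dim, 3)
        self.stance_head = nn.Linear(hidden_dim, 2)

    def forward(self, x, task):
        h = self.encoder(x)
        if task == "sentiment":
            return self.sentiment_head(h)
        return self.stance_head(h)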

Representation Learning

An effective way to create robust models that generalize well is to learn representations that are useful for many tasks. By relying on such representations rather than starting from scratch, we can train models with significantly less data. At AYLIEN, we are interested in learning meaningful representations at all levels of language, from characters to words to paragraphs and documents.
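As a small illustration of reusing learned representations rather than starting from scratch, the sketch below maps words to pretrained GloVe vectors via gensim's downloader and averages them into a document vector. The choice of GloVe and plain averaging are assumptions made for the example only.

import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # pretrained 50-d word vectors

def document_vector(text):
    # Average the vectors of all in-vocabulary words: a crude but
    # surprisingly strong document representation.
    words = [w for w in text.lower().split() if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0)

doc = document_vector("machines learning to understand language")
print(doc.shape)  # (50,)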

Recent Publications
360° Stance Detection
3 April, 2018
Proceedings of NAACL-HLT 2018: System Demonstrations

The proliferation of fake news and filter bubbles makes it increasingly difficult to form an unbiased, balanced opinion towards a topic. To ameliorate this, we propose 360° Stance Detection, a tool that aggregates news with multiple perspectives on a topic. It presents them on a spectrum ranging from support to opposition, enabling the user to base their opinion on multiple pieces of diverse evidence.
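The paper describes the actual system; purely as a hypothetical illustration of laying articles out on such a spectrum, one could bucket per-article stance scores as below. The score scale and labels are invented for the example.

# Hypothetical sketch: place articles on a support-opposition spectrum.
# The [-1, 1] scale and bucket labels are assumptions for illustration.
def spectrum_label(score):
    # score in [-1.0, 1.0]: -1 = strong opposition, +1 = strong support.
    if score <= -0.5:
        return "oppose"
    if score < 0.5:
        return "neutral/mixed"
    return "support"

articles = [("Outlet A", 0.8), ("Outlet B", -0.6), ("Outlet C", 0.1)]
for name, score in sorted(articles, key=lambda a: a[1]):
    print(f"{name}: {spectrum_label(score)} ({score:+.1f})")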

Fine-tuned Language Models for Text Classification
18 Jan, 2018

Transfer learning has revolutionized computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Fine-tuned Language Models (FitLaM), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a state-of-the-art language model. Our method significantly outperforms the state-of-the-art on five text classification tasks, reducing the error by 18-24% on the majority of datasets. We open-source our pretrained models and code to enable adoption by the community.
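At a high level, the transfer-learning recipe underlying this line of work is to take an encoder pretrained with a language-modeling objective and attach a task-specific classification head before fine-tuning on the target task. The PyTorch sketch below shows only that generic recipe; it omits the paper's specific fine-tuning techniques, and the module shapes are illustrative assumptions.

import torch
import torch.nn as nn

class LMClassifier(nn.Module):
    # `pretrained_encoder` is assumed to be a recurrent module (e.g. an
    # nn.LSTM with batch_first=True, trained as a language model) whose
    # weights we reuse instead of training from scratch.
    def __init__(self, pretrained_encoder, hidden_dim=400, num_classes=2):
        super().__init__()
        self.encoder = pretrained_encoder
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, embedded):
        # embedded: (batch, seq, emb) token embeddings.
        hidden_states, _ = self.encoder(embedded)      # (batch, seq, hidden)
        return self.classifier(hidden_states[:, -1])   # classify from final state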

An Overview of Multi-Task Learning in Deep Neural Networks
Modeling documents with Generative Adversarial Networks
Revisiting the Centroid-Based Method: A Strong Baseline for Multi-Document Summarization
Recent Blog Posts
3 JUL, 2017
A TensorFlow implementation of “A neural autoregressive topic model” (DocNADE)

In this post we give a brief overview of the DocNADE model, and provide a TensorFlow implementation...

17 MAY, 2017
A Call for Research Collaboration at AYLIEN 2017

At AYLIEN we are using recent advances in Artificial Intelligence to try to understand natural language. Part of what we do is building products such...

13 APR, 2017
Flappy Bird and Evolution Strategies: An Experiment

Having recently read “Evolution Strategies as a Scalable Alternative to Reinforcement Learning”, Mahdi wanted to run an experiment of his own using Evolution Strategies. Flappy Bird has always been among Mahdi’s favorites...
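For context, the core update in that paper perturbs the policy parameters with Gaussian noise, evaluates each perturbation, and moves the parameters toward the better-scoring ones. A minimal numpy sketch of that update follows; the fitness function here is a placeholder assumption standing in for the score of a Flappy Bird episode.

import numpy as np

def fitness(theta):
    # Placeholder: in the blog post's setting this would be the score of
    # one game episode played with policy parameters `theta`.
    return -np.sum(theta ** 2)

theta = np.random.randn(10)
alpha, sigma, npop = 0.02, 0.1, 50  # learning rate, noise scale, population

for step in range(200):
    noise = np.random.randn(npop, theta.size)      # one perturbation per sample
    rewards = np.array([fitness(theta + sigma * eps) for eps in noise])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalize
    theta += alpha / (npop * sigma) * noise.T @ rewards  # ES gradient estimate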

21 DEC, 2016
Highlights of NIPS 2016: Adversarial Learning, Meta-learning and more

Our researchers at AYLIEN keep abreast of and contribute to the latest developments in the field of Machine Learning. Recently, two of our research scientists, John Glover and Sebastian Ruder, attended NIPS 2016 in Barcelona, Spain...

Looking to Collaborate?

At AYLIEN we are open to collaborating with universities and researchers in related research areas. We frequently host research interns, PhD students and Postdoctoral fellows, and we also collaborate with researchers from other organizations. If your research interests align with ours, please feel free to get in touch.