Research | AYLIEN


We are working on one of the most challenging problems in Artificial Intelligence: teaching machines to understand natural language. We conduct innovative research that drives improvements in our products and publish papers that advance the state-of-the-art.

Areas of Focus
Natural Language Processing

Teaching machines to understand the complexity of human language is one of the central challenges of AI. To push the science forward on this challenge, we build models that perform well across a wide range of NLP tasks. We evaluate our models and push the state of the art on traditional tasks such as part-of-speech tagging and dependency parsing, as well as more recent tasks such as stance detection.
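To make one of these traditional tasks concrete, here is a minimal sketch of a part-of-speech tagging baseline: assign each word its most frequent tag from labelled data. The toy corpus and the NOUN fallback for unknown words are illustrative assumptions, not AYLIEN's models, which this baseline only serves to contrast with.

```python
from collections import Counter, defaultdict

# Toy labelled corpus of (word, tag) pairs. In practice this would come
# from a treebank such as Universal Dependencies (an assumption here).
TRAIN = [
    ("the", "DET"), ("dog", "NOUN"), ("barks", "VERB"),
    ("the", "DET"), ("cat", "NOUN"), ("sleeps", "VERB"),
    ("a", "DET"), ("dog", "NOUN"), ("sleeps", "VERB"),
]

def train_baseline(pairs):
    """Map each word to its most frequent tag in the training data."""
    counts = defaultdict(Counter)
    for word, tag in pairs:
        counts[word][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag(model, sentence, default="NOUN"):
    """Tag each token, falling back to a default tag for unknown words."""
    return [(w, model.get(w, default)) for w in sentence]

model = train_baseline(TRAIN)
print(tag(model, ["the", "dog", "sleeps"]))
# [('the', 'DET'), ('dog', 'NOUN'), ('sleeps', 'VERB')]
```

Such a baseline scores surprisingly well on frequent words but fails on ambiguous and unseen ones, which is exactly where learned sequence models earn their keep.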

Transfer Learning

The data of every domain and business is different. Because Machine Learning models perform worse on data they have never seen before, they need to adapt to novel data to achieve the best performance. At AYLIEN, we conduct fundamental research into transfer learning for Natural Language Processing, with a focus on multi-task learning and domain adaptation, in order to address the problems of our customers.
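One classic domain adaptation idea, useful to illustrate the setting (this is Daumé III's "frustratingly easy" feature augmentation, not a description of AYLIEN's systems): copy every feature into a shared space and a domain-specific space, so a single linear model can learn which features transfer across domains and which do not.

```python
def augment(features, domain):
    """Feature augmentation for domain adaptation (Daumé III, 2007).

    Each feature is duplicated into a 'shared' version, active in every
    domain, and a domain-specific version, active only in its own domain.
    """
    out = {}
    for name, value in features.items():
        out[f"shared:{name}"] = value   # weight learned from all domains
        out[f"{domain}:{name}"] = value # weight learned from this domain only
    return out

# A hypothetical bag-of-words feature vector from a "books" source domain:
feats = {"word=gripping": 1.0, "word=plot": 1.0}
print(augment(feats, "books"))
```

Training any standard classifier on the augmented vectors from both source and target data then implicitly trades off shared and domain-specific evidence.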

Representation Learning

An effective way to create robust models that generalize well is to learn representations that are useful for many tasks. By relying on such representations rather than starting from scratch, we can train models with significantly less data. At AYLIEN, we are interested in learning meaningful representations at all levels of language, from characters to words to paragraphs and documents.
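A minimal sketch of reusing word-level representations for a document-level one: average the word vectors of a document's tokens. The tiny 2-dimensional embeddings below are made up for illustration; real systems would reuse pretrained embeddings such as word2vec or GloVe (an assumption, not a statement about AYLIEN's stack).

```python
import math

# Tiny illustrative word vectors; real embeddings have hundreds of dimensions.
EMBEDDINGS = {
    "good":  [0.8, 0.1],
    "great": [0.9, 0.2],
    "bad":   [-0.7, 0.0],
}

def doc_vector(tokens, dim=2):
    """Average the vectors of known tokens into a document representation."""
    known = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    if not known:
        return [0.0] * dim
    return [sum(v[i] for v in known) / len(known) for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Documents built from similar words end up close in the shared space;
# here the two positive words point away from the negative one.
print(cosine(doc_vector(["good", "great"]), doc_vector(["bad"])))
```

The averaging is crude, but it shows the key benefit: any downstream model starts from a space where similarity is already meaningful instead of from scratch.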

Recent Publications
25 Aug 2017
Learning to select data for transfer learning with Bayesian Optimization (EMNLP 2017)

Domain similarity measures can be used to gauge adaptability and select suitable data for transfer learning, but existing approaches define ad hoc measures that are deemed suitable for respective tasks. We learn model-agnostic data selection measures for transfer learning. Our learned measures outperform existing domain similarity measures and even a state-of-the-art domain adaptation approach on sentiment analysis, part-of-speech tagging, and dependency parsing.
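The paper learns which similarity measures to combine via Bayesian Optimization; as a simpler illustration of one ingredient such a system can draw on, here is data selection with a single fixed measure, Jensen-Shannon divergence between term distributions. The candidate domains and texts are invented for the example.

```python
import math
from collections import Counter

def term_dist(texts):
    """Unigram term distribution over a list of whitespace-tokenized texts."""
    counts = Counter(w for t in texts for w in t.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (in bits) between two term distributions."""
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0) + q.get(w, 0)) for w in vocab}
    def kl(a):
        return sum(a.get(w, 0) * math.log2(a.get(w, 0) / m[w])
                   for w in vocab if a.get(w, 0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

target = term_dist(["the plot was gripping", "a moving story"])
candidates = {
    "reviews": ["the story was gripping", "a dull plot"],
    "tweets":  ["lol so tired", "coffee now pls"],
}
# Rank candidate source domains by closeness to the target distribution.
ranked = sorted(candidates,
                key=lambda d: js_divergence(term_dist(candidates[d]), target))
print(ranked[0])  # the review domain shares vocabulary with the target
```

A fixed measure like this is exactly the kind of ad hoc choice the paper moves beyond by learning task-appropriate combinations of measures.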

Recent Blog Posts
3 July, 2017
A TensorFlow implementation of “A neural autoregressive topic model” (DocNADE)

In this post we give a brief overview of the DocNADE model, and provide a TensorFlow implementation...

17 May, 2017
A Call for Research Collaboration at AYLIEN 2017

At AYLIEN we are using recent advances in Artificial Intelligence to try to understand natural language. Part of what we do is building products such...

13 April, 2017
Flappy Bird and Evolution Strategies: An Experiment

Having recently read Evolution Strategies as a Scalable Alternative to Reinforcement Learning, Mahdi wanted to run an experiment of his own using Evolution Strategies. Flappy Bird has always been among Mahdi’s favorites...

21 December, 2016
Highlights of NIPS 2016: Adversarial Learning, Meta-learning and more

Our researchers at AYLIEN keep abreast of and contribute to the latest developments in the field of Machine Learning. Recently, two of our research scientists, John Glover and Sebastian Ruder, attended NIPS 2016 in Barcelona, Spain...

Looking to Collaborate?

At AYLIEN we are open to collaborating with universities and researchers in related research areas. We frequently host research interns, PhD students and Postdoctoral fellows, and we also collaborate with researchers from other organizations. If your research interests align with ours, please feel free to get in contact with us.