Research
My interests range from natural language understanding to robotics, continuum mechanics, and solid-state physics. Underlying these threads are problems rooted in machine learning, statistics, and optimization. Below is a representative sample of research I've been involved with in the past.


Adversarial Bootstrapping for Dialogue Model Training.
Oluwatobi Olabiyi,
Erik Mueller,
Chris Larson,
Tarek Lahlou
AAAI Workshop on Reasoning and Learning for Human-Machine Dialogues (DEEP-DIAL).
Feb 8, 2020, New York, USA.
arxiv /
bibtex
This paper proposes bootstrapping a dialogue response generator with an adversarially trained discriminator to address exposure bias, improving response relevance and coherence. The method trains a neural generator in both autoregressive and traditional teacher-forcing modes, with the maximum-likelihood loss on the autoregressive outputs weighted by the score from a metric-based discriminator model.


Telephonetic: Making Neural Language Models Robust to ASR and Semantic Noise.
Chris Larson,
Tarek Lahlou,
Diana Mingles,
Zachary Kulis,
Erik Mueller
arXiv:1906.05678 [eess.AS], 2019
arxiv /
bibtex
(i) Language models can be made robust to ASR noise through phonetic and semantic perturbations of the training data. (ii) We achieve a state-of-the-art perplexity of 37.87 on the Penn Treebank corpus (among models trained only on that data source) using a character-based language model and a training procedure that eliminates correlation in sequential inputs at the minibatch level.


A Deformable Interface for Human Touch Recognition using Stretchable Carbon
Nanotube Dielectric Elastomer Sensors and Deep Neural Networks.
Chris Larson,
Joseph Spjut,
Ross Knepper,
Rob Sheppard
Soft Robotics, 2019
pdf /
arxiv /
project page /
bibtex
Neural networks can learn latent representations of deformation in elastic bodies, enabling deformable objects to serve as a communication medium.


Untethered Stretchable Displays for Tactile Interaction.
Bryan Peele,
Shuo Li,
Chris Larson,
Jason Cortell,
Ed Habtour,
Rob Sheppard
Soft Robotics, 2019
bibtex
We made a balloon version of the children's toy Simon that uses a vanishing touch interface.


Highly stretchable electroluminescent skin for optical signaling and tactile sensing.
Chris Larson,
B. Peele,
S. Li,
S. Robinson,
M. Totaro,
L. Beccai,
B. Mazzolai,
R. Sheppard
Science, 2016
pdf /
supplement /
interview /
bibtex
Ionic hydrogel electrodes are used to create a hyperelastic light display that can stretch to 5x its original length and expand its surface area by a factor of 6.5, eclipsing the previous state of the art by a factor of 4.


Quantitative measurement of Q3 species in silicate and borosilicate glasses using Raman spectroscopy.
B.G. Parkinson,
D. Holland,
M.E. Smith,
Chris Larson,
J. Doerr,
M. Affatigato,
S.A. Feller,
A.P. Howes,
C.R. Scales
Journal of Non-Crystalline Solids, 2008
bibtex


A ²⁹Si MAS NMR study of silicate glasses with high lithium content.
Chris Larson,
J. Doerr,
M. Affatigato,
S.A. Feller,
D. Holland,
M.E. Smith
Journal of Physics: Condensed Matter, 2006
bibtex


A TensorFlow adapter for Hugging Face models
Hugging Face's Transformers library provides an intuitive PyTorch API that gives developers easy access to state-of-the-art language models (BERT, GPT-2, and XLNet as of this writing) that were originally built in TensorFlow. It also provides adapters that convert protobuf-serialized model parameters into a pickle file loadable in PyTorch. On my team at Capital One, we make extensive use of transformer models, experimenting in both PyTorch and TensorFlow while deploying production models exclusively in TensorFlow. To push experimental features to production seamlessly, I wrote a tool that converts BERT models from PyTorch back to TensorFlow, giving us complete interoperability between the two frameworks. This lets us run benchtop experiments in either framework without having to modify our production ML stack. The tool has proven useful enough that we decided to contribute it back to the Hugging Face community, and the feature is now included in the Transformers library. You can find the converter here, and example driver code here.


The Ellipsoid method
The Ellipsoid method is an approach proposed by Shor for solving convex optimization problems. It was further developed by Yudin and Nemirovskii, and Leonid Khachiyan later used it to derive the first polynomial-time algorithm for linear programming. In theory, Khachiyan's ellipsoid algorithm has polynomial complexity bounds and is thus superior to the simplex algorithm for linear programming; in practice, however, the simplex algorithm is more efficient. Despite this, Khachiyan's ellipsoid method is of fundamental importance to the analysis of linear programs and, more generally, to combinatorial optimization theory. In this post I summarize an approach based on Khachiyan's algorithm. I also wrote a small C++ library that implements the algorithm.
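To make the idea concrete, here is a minimal NumPy sketch of the central-cut ellipsoid method for the linear feasibility problem Ax ≤ b. It is an illustration only, not the post's C++ library; the function name, parameters, and problem instance are my own, and the feasible region is assumed to lie inside a ball of known radius.

```python
import numpy as np

def ellipsoid_feasible(A, b, radius=10.0, max_iter=1000, tol=1e-9):
    """Central-cut ellipsoid method for finding x with A @ x <= b.

    The feasible region is assumed to lie inside a ball of the given
    radius centered at the origin; requires dimension n >= 2.
    Returns a feasible point, or None if the iteration budget runs out.
    """
    n = A.shape[1]
    c = np.zeros(n)                  # ellipsoid center
    P = (radius ** 2) * np.eye(n)    # ellipsoid shape matrix
    for _ in range(max_iter):
        violated = np.flatnonzero(A @ c > b + tol)
        if violated.size == 0:
            return c                 # center satisfies every constraint
        a = A[violated[0]]           # cut along the first violated row
        Pa = P @ a
        aPa = a @ Pa
        # Shrink to the minimum-volume ellipsoid containing the half-ellipsoid
        c = c - Pa / ((n + 1) * np.sqrt(aPa))
        P = (n ** 2 / (n ** 2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pa, Pa) / aPa)
    return None

# Find a point in the box [2, 3] x [2, 3], written as A x <= b.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([3.0, -2.0, 3.0, -2.0])
x = ellipsoid_feasible(A, b)
```

Each cut discards the half of the current ellipsoid whose points violate the chosen constraint, and the volume shrinks by a fixed factor per iteration, which is where the polynomial bound comes from.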


A Brief Intro to Deep Reinforcement Learning
Reinforcement learning (RL) methods are a class of algorithms that learn from an agent's experience in an episodic setting. Much like how we learn from experience, an RL algorithm learns by sequentially executing actions, observing the outcomes of those actions, and updating its behavior so as to achieve better outcomes in the future. This approach is distinct from traditional supervised learning, in which we attempt to model some unobserved probability distribution from a finite set of observations. In a certain sense, RL algorithms generate their own labels through experimentation. Because labels are generated only through simulation, though, two characteristics of RL methods become evident: (i) sample inefficiency, since a single observation may require a very large number of actions, and (ii) reward-allocation inefficiency: although a sequence of chess moves may have produced a win, the win says nothing about the relative quality of each individual move, so we cannot avoid reinforcing bad actions and penalizing good ones. The former can be viewed as a problem of sparse rewards, while the latter introduces noise into the signal we use to optimize our learning model. Despite these drawbacks, RL methods are unique in that they enable us to optimize an agent to perform complex tasks of our choosing, such as playing chess, walking, or playing video games, rather than just mapping inputs to outputs in a dataset. Within RL there are a few different quantities we can try to learn, and many modern methods, such as the actor-critic approach, combine them. In this blog post I briefly describe the basic mathematical concepts in RL.
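The act-observe-update loop described above can be sketched with tabular Q-learning on a toy chain MDP. This is a minimal illustration, not the post's material: the environment (a six-state chain with reward only at the right end), hyperparameters, and names are invented for this example.

```python
import random

def q_learning_chain(n_states=6, episodes=2000, alpha=0.2, gamma=0.9,
                     eps=0.1, max_steps=200, seed=0):
    """Tabular Q-learning on a chain MDP.

    States 0..n-1; actions 0 (left) and 1 (right). Reward +1 for
    reaching the rightmost state, 0 otherwise; episodes start at state 0.
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]

    def greedy(s):
        # Break ties randomly so the all-zero initial table explores uniformly
        best = max(Q[s])
        return rng.choice([a for a in (0, 1) if Q[s][a] == best])

    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            a = rng.randrange(2) if rng.random() < eps else greedy(s)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Temporal-difference update toward the bootstrapped target
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == n_states - 1:
                break
    return Q

Q = q_learning_chain()
```

After training, the learned values prefer the "right" action in every interior state, i.e. the agent has discovered the shortest path to the reward purely from its own experience.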


OrbTouch: using deformation as a medium for human-computer interaction
This mini-post explores the use of deformation as a medium for human-computer interaction. The question being asked is the following: can we use the shape of an object, and how it changes in time, to encode information? Here I present a deep learning approach to this problem and show that we can use a balloon to control a computer.


Robust regression methods
Time series data often exhibit heavy-tailed noise and heteroskedasticity, which can be found throughout economics, finance, and engineering. In this post I share a few well-known methods from the robust linear regression literature, using a toy dataset as a running example.
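One workhorse from this literature is Huber regression fitted by iteratively reweighted least squares (IRLS): residuals inside a threshold are treated quadratically, while larger ones are down-weighted as in the absolute loss. The sketch below, including the toy dataset, is a plausible instance of the kind of method the post covers, not necessarily the post's own example.

```python
import numpy as np

def huber_irls(X, y, delta=1.0, n_iter=50, eps=1e-8):
    """Huber-loss linear regression via iteratively reweighted least squares.

    Residuals with |r| <= delta get weight 1 (least-squares zone);
    larger residuals get weight delta/|r|, mimicking the absolute loss.
    """
    Xb = np.column_stack([np.ones(len(X)), X])    # add intercept column
    beta = np.linalg.lstsq(Xb, y, rcond=None)[0]  # OLS warm start
    for _ in range(n_iter):
        r = y - Xb @ beta
        w = np.where(np.abs(r) <= delta, 1.0, delta / (np.abs(r) + eps))
        sw = np.sqrt(w)
        # Weighted least squares: minimize sum_i w_i * r_i^2
        beta = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)[0]
    return beta

# Toy data: y = 1 + 2x with a handful of gross positive outliers.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(50)
y[::10] += 30.0                                   # five large outliers
beta = huber_irls(x, y)                           # [intercept, slope]
```

Ordinary least squares would let the five outliers drag the fit upward, while the down-weighting step keeps the Huber estimate close to the true line.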


Deep reinforcement learning for checkers: pretraining a policy
This post discusses an approach to approximating a checkers-playing policy using a neural network trained on a database of human play. It is the first post in a series covering the engine behind checkers.ai, which uses a combination of supervised learning, reinforcement learning, and game-tree search to play checkers. This set of approaches is based on AlphaGo (see this paper). Unlike Go, checkers has a known optimal policy, found using exhaustive tree search and end-game databases (see this paper). Although Chinook has already solved checkers, the game is still very complex, with over 5 x 10^20 states, making it an interesting application of deep reinforcement learning. To give you an idea, it took Jonathan Schaeffer and his team at the University of Alberta 18 years (1989-2007) to search the game tree end to end. Here I will discuss how we can use a database of expert human moves to pretrain a policy, which will be the building block of the engine. I'll show that a pretrained policy can beat intermediate and advanced human players, as well as some of the popular online checkers engines. In later posts we will take this policy and improve it through self-play.
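At its core, the pretraining step is supervised cross-entropy training of a policy p(move | state) on expert (state, move) pairs. Below is a minimal stand-in using a linear softmax model on synthetic data; the actual engine uses a neural network over board representations, and every name and dataset here is illustrative.

```python
import numpy as np

def pretrain_policy(states, moves, n_moves, lr=0.5, epochs=200, seed=0):
    """Fit a softmax policy p(move | state) by cross-entropy on expert pairs.

    states: (N, d) float features; moves: (N,) int labels in [0, n_moves).
    A linear softmax model stands in for a convolutional policy network.
    """
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((states.shape[1], n_moves))
    onehot = np.eye(n_moves)[moves]
    for _ in range(epochs):
        logits = states @ W
        logits -= logits.max(axis=1, keepdims=True)      # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        # Gradient of mean cross-entropy w.r.t. W is X^T (p - y) / N
        W -= lr * states.T @ (p - onehot) / len(states)
    return W

# Synthetic "expert" data: three move classes with separable state features.
rng = np.random.default_rng(1)
moves = rng.integers(0, 3, size=300)
states = 3.0 * np.eye(3)[moves] + 0.3 * rng.standard_normal((300, 3))
W = pretrain_policy(states, moves, n_moves=3)
accuracy = float(((states @ W).argmax(axis=1) == moves).mean())
```

The same loss and gradient apply unchanged when the linear map is replaced by a deep network and the labels come from a database of human games; the pretrained weights then initialize the policy that self-play later improves.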

