Welcome to State of the Smerity
What is State of the Smerity? In short, State of the Smerity is an experiment in writing.
My name is @Smerity and I'm an independent AI researcher. I'm passionate about machine learning, open data, and teaching computer science.
My most recent work has been in chasing a ball of linguistic yarn as it rolls around a thousand-dimensional space.
As a short summary of my recent work history before going solo:
- I was a senior research scientist at the startup MetaMind, which was eventually acquired to become Salesforce Research
- I was the lone engineer at the non-profit Common Crawl, where I crawled over 2.5 petabytes of data and 35 billion webpages
Between Salesforce Research and Common Crawl, both my models and my datasets have been a recurring feature of the latest trend in machine learning: unsupervised language models. You might have seen my work:
- The Single Headed Attention RNN, an attempt to show that strong results can still be achieved with minimal resources and entirely novel language model architectures (i.e. "alternate research history")
- The WikiText Long Term Dependency Language Modeling Dataset, used as a language modeling benchmark for hundreds of papers
- The AWD-LSTM, used as the basis of Fast.AI's ULMFiT and many other language models
- The Quasi-Recurrent Neural Network, an RNN architecture with substantial speed advantages over the LSTM, used in production for Google's Gboard handwriting recognition and Baidu's Deep Voice 2, as an optional LSTM replacement in ULMFiT, and elsewhere