I am a computer science PhD student at the Massachusetts Institute of Technology (MIT), studying artificial intelligence with a focus on natural language processing and machine learning. I am lucky to be advised by Jacob Andreas.
I work on improving sequence modeling for language processing and understanding. Languages exhibit compositionality (productivity and systematicity), whereas current neural language learners lack the inductive biases required to achieve it data-efficiently. My recent work aims to identify simple biases that enable neural sequence models to achieve the kinds of generalization that humans do.
I am also interested in language supervision and grounding, and have worked on two recent projects: (i) using language to guide image classifiers toward representations that enable learning new classes from only a few samples without forgetting the old ones, and (ii) using language models to guide policy learning in a virtual home environment.
- I had a summer internship at Google Research, where I worked with Kelvin Guu and Keith Hall on fact attribution using large language models.
- As a visiting student at MIT CSAIL, I worked with Prof. Alan Edelman on a linear algebraic formulation of backpropagation that brings existing matrix operations to bear on neural computation graphs.
- I worked with John W. Fisher on efficient distributed algorithms for non-Bayesian parametric learning.
- During my undergraduate studies, I was part of the KUIS AI Lab, where I worked with Prof. Deniz Yuret on natural language processing.
- I received my Bachelor’s degrees in Electrical & Electronics Engineering and in Physics from Koç University in 2019.
Last updated Mar 29 '22