I’m a postdoc in theoretical neuroscience at McGill/Mila, working with Blake Richards.
I work on biologically plausible learning in deep networks (NeurIPS 2020 work on 3-factor Hebbian learning rules for deep nets; NeurIPS 2021 on the plausibility of convolutional networks; a 2023 arXiv preprint on the link between synaptic weight geometry and weight distributions). I also work on representation learning using kernel methods, including self-supervised learning (NeurIPS 2021) and measures of conditional dependence (CIRCE: notable-top-5% at ICLR 2023; SplitKCI: preprint).
I did my PhD in theoretical neuroscience with Peter E. Latham at the Gatsby Computational Neuroscience Unit (UCL, London). During my PhD, I did a breadth rotation project on multi-armed bandits with Tor Lattimore (DeepMind). Before joining Gatsby in 2017, I received a BSc (honours) in applied mathematics and physics from the Moscow Institute of Physics and Technology (2013-2017). I also completed the first year of the Yandex School of Data Analysis (2016-2017, CS department).
In addition, I completed several research internships:
- Skoltech, Moscow, Yury Maximov’s research group (2016-2017);
- EPFL, SRP program, Wulfram Gerstner’s lab (summer 2016);
- LMU, Amgen Scholars program, Christian Leibold’s lab (summer 2015).
Email: name.surname@mila.quebec