About Me

I am currently pursuing my PhD under the supervision of Dr. Kosta Derpanis at York University. I am a postgraduate affiliate at the Vector Institute and the Lead Scientist in Residence at NextAI. My research focuses on designing interpretable deep learning algorithms for video understanding.

I completed a Bachelor of Applied Science (B.A.Sc.) in Applied Mathematics and Engineering with a specialization in Mechanical Engineering at Queen's University in Kingston, Ontario, Canada. My capstone project focused on eye-tracking for preventing driver inattention in automobiles.

After completing my Bachelor's degree, I worked for two years at Morrison Hershfield as a mechanical design engineer (in training). I worked on a team with mechanical, environmental, electrical, and control engineers on buildings, laboratories, and condos. Some projects I worked on were: 1 Bloor East Condominium, the Guelph Ontario Veterinarian College Animal Isolation Facility, and the University of Toronto Sandford Rooftop Physics Lab. I then obtained my M.Sc. at the Ryerson Vision Lab in August 2020 under the co-supervision of Dr. Neil Bruce and Dr. Kosta Derpanis. My M.Sc. thesis focused on action recognition and weakly supervised semantic segmentation.

I have worked as a Scientist in Residence (SiR) at NextAI for the past three years, where I work closely with multiple AI-based startups in Toronto. As an SiR, I provide technical support to the cohort. For example, I help the teams implement and understand state-of-the-art research papers that relate to their business or product, reimplement recent papers in a specific language, or assist with the teams' technical hiring. Here is an incomplete list of the companies I have worked with: Origami-XR, Future Fertility, VideoLogic, NoLeak Defence, Argentum.

My hobbies include health and fitness, competitive Super Smash Bros. Melee, birds, close up magic, and progressive house music.


  • I presented a spotlight presentation at the Explainable AI for Computer Vision Workshop at CVPR 2022. You can watch the recorded talk here.
  • Paper Accepted to CVPR 2022 - A Deeper Dive Into What Deep Spatiotemporal Networks Encode: Quantifying Static vs. Dynamic Information. Paper and Project Page.
  • Paper Accepted to ICCV 2021 - Global Pooling, More than Meets the Eye: Position Information is Encoded Channel-Wise in CNNs. Paper.
  • Paper Accepted to BMVC 2021 - Simpler Does It: Generating Semantic Labels with Objectness Guidance. Paper.
  • Paper Accepted to ICLR 2021 - Shape or Texture: Understanding Discriminative Features in CNNs. Paper.
  • Paper Accepted as an Oral to BMVC 2020 - Feature Binding with Category-Dependant MixUp for Semantic Segmentation and Adversarial Robustness. Paper.