About Me

I am a PhD student at York University in the Computational Vision and Imaging Lab, supervised by Dr. Kosta Derpanis and funded by an NSERC CGS-D scholarship. My research focuses on interpreting the internal representations of deep neural networks, with an emphasis on video understanding. I am currently a research intern at Ubisoft La Forge, working on generative modelling for character animation. I recently completed an internship on the machine learning team at Toyota Research Institute, working on interpretability for video transformers. I am also a faculty affiliate researcher at the Vector Institute and previously served as Lead Scientist in Residence at NextAI from 2020 to 2022.

Before my PhD studies, I completed a Bachelor of Applied Science (B.A.Sc.) in Applied Mathematics and Engineering, with a specialization in Mechanical Engineering, at Queen’s University in Kingston, Ontario, Canada. After graduating, I worked as a mechanical design engineer at Morrison Hershfield in Toronto, where I helped design buildings, laboratories, and condominiums alongside a team of mechanical, environmental, electrical, and controls engineers. I then completed my M.Sc. at the Ryerson Vision Lab in August 2020 under the co-supervision of Dr. Neil Bruce and Dr. Konstantinos Derpanis. My M.Sc. thesis focused on quantifying the utility of different modalities for action recognition.

My hobbies include health and fitness, competitive Super Smash Bros. Melee, birds, close up magic, and progressive house music.

Project Highlights

Project 1 Image
Highlight @ CVPR 2024
Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models
Unsupervised discovery of concepts and their interlayer connections.
Paper, project page, demo.
Project 2 Image
Highlight @ CVPR 2024
Understanding Video Transformers via Universal Concept Discovery
We discover universal spatiotemporal concepts in video transformers.
Paper, project page.
Project 3 Image
CVPR 2022 and Spotlight @ XAI4CV Workshop
A Deeper Dive Into What Deep Spatiotemporal Networks Encode
We develop a new metric for quantifying static and dynamic information in deep spatiotemporal models.
Paper, project page.
Project 4 Image
ICCV 2021
Global Pooling, More than Meets the Eye: Position Information is Encoded Channel-Wise in CNNs
We show how spatial position information is encoded along the channel dimensions after pooling layers.
Paper, code.
Project 5 Image
ICLR 2021
Shape or Texture: Understanding Discriminative Features in CNNs
We develop a new metric for shape and texture information encoded in CNNs.
Paper, project page.
Project 6 Image
Oral @ BMVC 2020
Feature Binding with Category-Dependent MixUp for Semantic Segmentation and Adversarial Robustness
Source separation augmentation improves semantic segmentation and robustness.
Paper.

News

  • Gave an invited talk at David Bau’s Lab at Northeastern University!
  • Gave an invited talk at Thomas Serre’s Lab at Brown University!
  • Paper accepted to TPAMI - Quantifying and Learning Static vs. Dynamic Information in Deep Spatiotemporal Networks. Paper.
  • TWO papers accepted as Highlights at CVPR 2024!
  • CVPR 2024 paper accepted as a Highlight, a result of my internship at Toyota Research Institute - Understanding Video Transformers via Universal Concept Discovery. Paper and project page! We will also be presenting this work as a poster at the Causal and Object-Centric Representations for Robotics Workshop.
  • CVPR 2024 paper accepted as a Highlight - Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models. Paper and project page. We will also be presenting this work as a poster at the CVPR Explainable AI for Computer Vision Workshop.
  • CAIC 2024 long paper accepted - Multi-modal News Understanding with Professionally Labelled Videos (ReutersViLNews). Paper.
  • Paper accepted to the International Journal of Computer Vision (IJCV) - Position, Padding and Predictions: A Deeper Look at Position Information in CNNs. Paper.
  • I have been awarded the NSERC CGS-D Scholarship with a total value of $105,000! (Accepted)
  • I have accepted an offer to do a research internship at Toyota Research Institute for the Summer of 2023 at the Palo Alto HQ office!
  • I gave a talk at Vector’s Endless Summer School program on Current Trends in Computer Vision and a CVPR 2022 Recap.
  • Paper accepted to the International Journal of Computer Vision (IJCV) - SegMix: Co-occurrence Driven Mixup for Semantic Segmentation and Adversarial Robustness. Paper.
  • I gave a spotlight presentation at the Explainable AI for Computer Vision Workshop at CVPR 2022. You can watch the recorded talk here.
  • Paper accepted to CVPR 2022 - A Deeper Dive Into What Deep Spatiotemporal Networks Encode: Quantifying Static vs. Dynamic Information. Paper and project page.
  • Paper Accepted to ICCV 2021 - Global Pooling, More than Meets the Eye: Position Information is Encoded Channel-Wise in CNNs. Paper.
  • Paper Accepted to BMVC 2021 - Simpler Does It: Generating Semantic Labels with Objectness Guidance. Paper.
  • Paper Accepted to ICLR 2021 - Shape or Texture: Understanding Discriminative Features in CNNs. Paper.
  • Paper accepted as an Oral to BMVC 2020 - Feature Binding with Category-Dependent MixUp for Semantic Segmentation and Adversarial Robustness. Paper.