About Me

I am a Member of Technical Staff at FAR AI, working on challenges in AI safety with a focus on mechanistic interpretability and persuasion by large language models. Previously, I completed my PhD (funded by an NSERC CGS-D scholarship) at York University, where I was a member of the CVIL Lab supervised by Dr. Kosta Derpanis. My doctoral research focused on the interpretability of multi-modal and video understanding systems.

Along the way, I completed industry internships at Ubisoft La Forge, where I worked on generative modeling for character animation, and at Toyota Research Institute, where I contributed to interpretability research for video transformers on the machine learning team. I am also a Faculty Affiliate Researcher at the Vector Institute and previously served as Lead Scientist in Residence at NextAI (2020–2022).

My academic path began with a Bachelor of Applied Science (B.A.Sc.) in Applied Mathematics and Engineering, with a specialization in Mechanical Engineering, from Queen’s University in Kingston, Ontario. After graduating, I gained practical engineering experience at Morrison Hershfield, working in multidisciplinary teams to design buildings, laboratories, and residential projects.

I then completed my Master’s degree in the Ryerson Vision Lab in August 2020, co-supervised by Dr. Neil Bruce and Dr. Kosta Derpanis. My thesis evaluated the effectiveness of different modalities for recognizing human actions.

Outside of research, I am a chronic hobbyist. I stay active through weightlifting and calisthenics, compete in Super Smash Bros. Melee, and enjoy cooking, skateboarding, rock climbing, birdwatching, close-up magic, and immersing myself in the progressive house and techno music scene.

Project Highlights

Preprint, 2025
Into the Rabbit Hull: From Task-Relevant Concepts in DINO to Minkowski Geometry
A deep dive into the semantics and geometry of concepts in large vision models.
Paper, project page.
Preprint, 2025
It’s the Thought that Counts: Evaluating the Attempts of Frontier LLMs to Persuade on Harmful Topics
We introduce AttemptPersuadeEval (APE) and show that frontier models will attempt to persuade users on harmful topics.
Paper, eval code.
ICML 2025
Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment
We train SAEs to discover universal and unique concepts across different models.
Paper, code.
ICML 2025
Archetypal SAE: Adaptive and Stable Dictionary Learning for Concept Extraction in Large Vision Models
We design SAEs that address the instability of learned concept dictionaries across different training runs.
Paper, code.
Spotlight @ CVPR 2024
Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models
Unsupervised discovery of concepts and their interlayer connections.
Paper, project page, demo.
Spotlight @ CVPR 2024
Understanding Video Transformers via Universal Concept Discovery
We discover universal spatiotemporal concepts in video transformers.
Paper, project page.
CVPR 2022 and Spotlight @ XAI4CV Workshop
A Deeper Dive Into What Deep Spatiotemporal Networks Encode
We develop a new metric for quantifying static and dynamic information in deep spatiotemporal models.
Paper, project page.
ICCV 2021
Global Pooling, More than Meets the Eye: Position Information is Encoded Channel-Wise in CNNs
We show how spatial position information is encoded along the channel dimensions after pooling layers.
Paper, code.
ICLR 2021
Shape or Texture: Understanding Discriminative Features in CNNs
We develop a new metric for shape and texture information encoded in CNNs.
Paper, project page.
Oral @ BMVC 2020
Feature Binding with Category-Dependent MixUp for Semantic Segmentation and Adversarial Robustness
Source separation augmentation improves semantic segmentation and robustness.
Paper.

News