Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models
Published in CVPR (Highlight), 2024
We discover and quantify concept connections in multi-layer deep models.
Download here
Published in CVPR (Highlight), 2024
We design a method for discovering universal spatiotemporal concepts in deep video transformers.
Download here
Published in International Journal of Computer Vision, 2024
We explore the existence of absolute position information in CNNs in relation to their padding type or other border heuristics.
Download here
Published in TPAMI, 2022
We quantify, and design algorithms to learn, static and dynamic information in deep spatiotemporal networks.
Download here
Published in CVPR, 2022
We quantify and explore the static and dynamic information contained in deep spatiotemporal networks.
Download here
Published in BMVC, 2021
We design a novel, efficient, and robust weakly-supervised method for generating semantic segmentation pseudo-labels from CAMs or bounding boxes.
Download here
Published in ICCV, 2021
In this paper, we challenge the common assumption that collapsing the spatial dimensions of a 3D (spatial-channel) tensor in a convolutional neural network (CNN) into a vector via global pooling removes all spatial information. Specifically, we demonstrate that positional information is encoded based on the ordering of the channel dimensions, while semantic information is largely not.
Download here
Published in International Journal of Computer Vision, 2021
We present a strategy for training convolutional neural networks for the task of dense image labelling by blending images based on (i) categorical clustering or (ii) the co-occurrence likelihood of categories.
Download here
Published in ICLR, 2021
We present two approaches for quantifying the shape and texture encoded in convolutional neural networks.
Download here
Published in BMVC (Oral), 2020
We propose a new type of data augmentation for dense labelling tasks: we train neural networks to separate and label mixed images based on their co-occurrence probabilities.
Download here