Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page not in the main menu.
Published:
Deep neural networks (DNNs) are known to be almost fully opaque compared to traditional algorithms, even to the top engineers and scientists who use them. Many reasons have been proposed for why understanding the decision-making process of these models is important, such as human curiosity, scientific discovery, bias detection, or algorithm safety measures and auditing. Furthermore, an interpretable model may be perceived as more fair, reliable, and trustworthy by the general public [1]. Several methods for interpreting DNNs in computer vision have been proposed, with varying degrees of success. For example, many recent SOTA saliency map visualization methods have been shown to perform on par with random baselines. Similar issues with other interpretability methods have been criticized on social media, with critics pointing out that relying on these methods in mission-critical situations will likely do more harm than good.
Published:
For this post I hope to accomplish a few different things. I will review a brilliant document put together by Erik G. Learned-Miller. I found this document while attempting to better understand the concept of ‘Mutual Information’, and it has been by far the most influential document in my understanding of entropy and mutual information. It is a short document, only 3 pages, intended as an introduction to entropy and mutual information for discrete random variables. Erik does something that I think many teachers miss when introducing students to new concepts: he uses real-world, easy-to-understand examples alongside the formulas. I hope I can add some intuition behind what entropy, joint entropy, and mutual information actually represent, as well as review some simple and more complex examples that I am currently working on in my research.
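As a small companion to those definitions, here is a minimal Python sketch that computes entropy, joint entropy, and mutual information for a discrete joint distribution. The toy joint probability table is made up purely for illustration and is not one of the examples from the document or my research.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H = -sum p log2 p of a discrete distribution (in bits)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # convention: 0 * log 0 = 0
    return -np.sum(p * np.log2(p))

def mutual_information(pxy):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint probability table pxy[x, y]."""
    pxy = np.asarray(pxy, dtype=float)
    px = pxy.sum(axis=1)  # marginal distribution of X
    py = pxy.sum(axis=0)  # marginal distribution of Y
    return entropy(px) + entropy(py) - entropy(pxy.ravel())

# Toy joint distribution over two binary variables (values are illustrative only).
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
print("H(X,Y) =", entropy(pxy.ravel()))      # joint entropy, about 1.72 bits
print("I(X;Y) =", mutual_information(pxy))   # about 0.28 bits of shared information
```

The identity I(X;Y) = H(X) + H(Y) - H(X,Y) is the one I find easiest to build intuition from: it measures how much less uncertainty the pair carries than the two variables would if they were independent.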
Published in BMVC (Oral), 2020
We propose a new type of data augmentation for dense labelling tasks. We train neural networks to separate and label mixed images based on their co-occurrence probabilities (a toy sketch of the mixing idea appears below).
Download here
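A rough, simplified sketch of the mixing idea, not the paper's exact pipeline: blend two training images and keep both dense label maps as targets, so a network can be trained to separate and label the mixture. The equal blending weight, tensor shapes, and random pairing below are assumptions for illustration; the paper selects pairs using co-occurrence statistics.

```python
import numpy as np

def mix_pair(img_a, img_b, seg_a, seg_b, alpha=0.5):
    """Blend two images and return the mixture plus both dense label maps.

    img_*: float arrays of shape (H, W, 3) in [0, 1]
    seg_*: integer label maps of shape (H, W)
    alpha: blending weight (0.5 = plain average, chosen here for illustration)
    """
    mixed = alpha * img_a + (1.0 - alpha) * img_b
    # Separation targets: the network is asked to recover each constituent's
    # dense labels from the single mixed input.
    targets = np.stack([seg_a, seg_b], axis=0)  # (2, H, W)
    return mixed, targets

# Toy example with random arrays standing in for real images and annotations.
rng = np.random.default_rng(0)
img_a, img_b = rng.random((64, 64, 3)), rng.random((64, 64, 3))
seg_a, seg_b = rng.integers(0, 21, (64, 64)), rng.integers(0, 21, (64, 64))
mixed, targets = mix_pair(img_a, img_b, seg_a, seg_b)
print(mixed.shape, targets.shape)  # (64, 64, 3) (2, 64, 64)
```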
Published in ICLR, 2021
We present two approaches for quantifying the shape and texture encoded in convolutional neural networks.
Download here
Published in International Journal of Computer Vision, 2021
We present a strategy for training convolutional neural networks for the task of dense image labelling by blending images based on (i) categorical clustering or (ii) the co-occurrence likelihood of categories.
Download here
Published in ICCV, 2021
In this paper, we challenge the common assumption that collapsing the spatial dimensions of a 3D (spatial-channel) tensor in a convolutional neural network (CNN) into a vector via global pooling removes all spatial information. Specifically, we demonstrate that positional information is encoded based on the ordering of the channel dimensions, while semantic information is largely not (a minimal pooling sketch appears below).
Download here
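To make the pooling operation concrete, here is a minimal sketch rather than the paper's experiment: global average pooling collapses the spatial axes of a (C, H, W) tensor into a length-C vector, and the paper's finding is probed by training read-outs on such pooled vectors; the linear-probe framing in the comment is my paraphrase, and the shapes are arbitrary.

```python
import numpy as np

def global_avg_pool(feat):
    """Collapse the spatial dimensions of a (C, H, W) feature tensor to (C,)."""
    return feat.mean(axis=(1, 2))

# Toy feature map standing in for a CNN activation tensor.
rng = np.random.default_rng(0)
feat = rng.standard_normal((256, 7, 7))  # (channels, height, width)
pooled = global_avg_pool(feat)
print(pooled.shape)  # (256,) -- spatial axes gone, channel ordering preserved

# The paper's demonstration trains a read-out (e.g. a linear classifier) on
# pooled vectors like this one to recover positional information, showing
# that the signal survives pooling via which channels carry it.
```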
Published in BMVC, 2021
We design a novel, efficient, and robust weakly-supervised method for generating semantic segmentation pseudo-labels from CAMs or bounding boxes (a bare-bones CAM-thresholding sketch appears below).
Download here
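For context only, and not the method from the paper: a bare-bones baseline that turns class activation maps (CAMs) into a dense pseudo-label by normalizing each class map and thresholding. The threshold value, ignore index, and array shapes are assumptions for illustration.

```python
import numpy as np

def cam_to_pseudo_label(cams, threshold=0.3, ignore_index=255):
    """Convert per-class CAMs into a dense pseudo-label map.

    cams: array of shape (num_classes, H, W) with non-negative activations
    threshold: fraction of each CAM's maximum below which pixels are unassigned
    Returns an (H, W) integer map; unassigned pixels get `ignore_index`.
    """
    cams = np.asarray(cams, dtype=float)
    # Normalize each class map to [0, 1] so a single threshold applies to all.
    maxima = cams.max(axis=(1, 2), keepdims=True)
    norm = cams / np.maximum(maxima, 1e-8)
    best_class = norm.argmax(axis=0)  # (H, W) most activated class per pixel
    best_score = norm.max(axis=0)     # (H, W) its normalized activation
    label = np.where(best_score >= threshold, best_class, ignore_index)
    return label.astype(np.int64)

# Toy CAMs for 3 classes on a 4x4 grid.
rng = np.random.default_rng(0)
print(cam_to_pseudo_label(rng.random((3, 4, 4))))
```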
Published in CVPR, 2022
We quantify and explore the static and dynamic information contained in deep spatiotemporal networks.
Download here
Published in TPAMI, 2022
We quantify, and design algorithms to learn, static and dynamic information in deep spatiotemporal networks.
Download here
Published in International Journal of Computer Vision, 2024
We explore the existence of absolute position information in CNNs in relation to their padding type or other border heuristics (a small zero-padding demo appears below).
Download here
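As a hedged illustration of how a border heuristic can leak absolute position, again not the paper's experiments: with zero padding, even a constant input produces filter responses that differ near the image border, so a network can in principle tell border from interior. The averaging filter and sizes below are arbitrary.

```python
import numpy as np

def conv2d_same_zero_pad(image, kernel):
    """Naive 'same' convolution with zero padding (no libraries, for clarity)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# A constant image carries no content, yet zero padding makes border
# responses differ from interior ones -- an absolute position cue.
image = np.ones((6, 6))
kernel = np.ones((3, 3)) / 9.0  # simple averaging filter
print(conv2d_same_zero_pad(image, kernel))
# Interior values are 1.0, edge values ~0.67, corner values ~0.44.
```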
Published in CVPR (Highlight), 2024
We design a method for discovering universal spatiotemporal concepts in deep video transformers.
Download here
Published in CVPR (Highlight), 2024
We discover and quantify concept connections in multi-layer deep models.
Download here
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.