Lennart Brocki (lenbrocki)

0 followers · 0 following


Lennart Brocki's repositories

concept-saliency-maps

Contains the Jupyter notebooks to reproduce the results of the paper "Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models" (https://arxiv.org/pdf/1910.13140.pdf).

Language: Jupyter Notebook · Stargazers: 7 · Issues: 2 · Issues: 3

ConRad

Code to reproduce the results of our ConRad paper.

Language: Jupyter Notebook · Stargazers: 5 · Issues: 2 · Issues: 0

NoBias-Rectified-Gradient

We introduce a modification of Rectified Gradient. This repository is forked from https://github.com/1202kbs/Rectified-Gradient.

Language: Jupyter Notebook · Stargazers: 4 · Issues: 3 · Issues: 0

CDAM

Official implementation of CDAM.

Language: Jupyter Notebook · Stargazers: 3 · Issues: 0 · Issues: 0

Feature-Perturbation-Augmentation

This repository contains the code to reproduce the results of our paper "Feature Perturbation Augmentation" (FPA).

Language: Jupyter Notebook · Stargazers: 1 · Issues: 2 · Issues: 0

Conditional_Diffusion_LIDC

A minimal script for a conditional diffusion model that generates LIDC images, based on "Classifier-Free Diffusion Guidance".

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0
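Classifier-free guidance, which this repository's description references, forms its sampling-time noise prediction by extrapolating the conditional prediction away from the unconditional one: eps_guided = (1 + w) * eps_cond - w * eps_uncond. A minimal stdlib sketch of that combination step (the function name and toy values are illustrative assumptions, not taken from this repository):

```python
def cfg_combine(eps_cond, eps_uncond, w):
    """Classifier-free guidance combination step (illustrative sketch):
    extrapolate the conditional noise prediction away from the
    unconditional one by guidance weight w.
        eps_guided = (1 + w) * eps_cond - w * eps_uncond
    """
    return [(1 + w) * c - w * u for c, u in zip(eps_cond, eps_uncond)]

# Toy 3-component noise predictions (hypothetical values).
guided = cfg_combine([0.5, -0.2, 0.1], [0.4, -0.1, 0.0], w=2.0)
```

With w = 0 this reduces to the plain conditional prediction; larger w trades sample diversity for stronger conditioning.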

DeepExplain

A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also includes support for Shapley Values sampling. (ICLR 2018)

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

dino

PyTorch code for training Vision Transformers with the self-supervised learning method DINO.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

gitignore

A collection of useful .gitignore templates

License: CC0-1.0 · Stargazers: 0 · Issues: 0 · Issues: 0

saliency

Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more).

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0
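Of the methods the description lists, SmoothGrad is the simplest to summarize: average the gradient of the output over several noisy copies of the input. A stdlib-only sketch of the idea on a scalar function, using finite-difference gradients (an illustration of the technique, not this library's API):

```python
import random

def grad(f, x, eps=1e-5):
    """Central finite-difference gradient of a scalar function at x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def smoothgrad(f, x, sigma=0.1, n=200, seed=0):
    """SmoothGrad: average the gradient of f over n copies of the
    input x perturbed with Gaussian noise of scale sigma."""
    rng = random.Random(seed)
    return sum(grad(f, x + rng.gauss(0.0, sigma)) for _ in range(n)) / n

# For f(x) = x**2 the true gradient at x = 1 is 2; because this
# gradient is linear in x, the smoothed estimate stays close to 2.
estimate = smoothgrad(lambda x: x * x, 1.0)
```

In the saliency-map setting, x is an image, f is the model's class score, and the averaged gradient is visualized per pixel.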

SliceViewer

Simple Jupyter widget for viewing slices of 3D images.

Language: Python · Stargazers: 0 · Issues: 0 · Issues: 0
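The core operation behind such a viewer is extracting a 2D slice from a 3D volume along a chosen axis. A framework-free sketch of that operation on nested lists (the function name and axis convention are illustrative assumptions, not the widget's actual API):

```python
def get_slice(volume, index, axis=0):
    """Return a 2D slice of a 3D nested-list volume along the given axis."""
    if axis == 0:  # e.g. axial: fix z, keep the (y, x) plane
        return [row[:] for row in volume[index]]
    if axis == 1:  # e.g. coronal: fix y across all z-planes
        return [plane[index][:] for plane in volume]
    if axis == 2:  # e.g. sagittal: fix x across all z-planes
        return [[row[index] for row in plane] for plane in volume]
    raise ValueError("axis must be 0, 1, or 2")

# 2x2x2 toy volume indexed as volume[z][y][x].
vol = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
axial = get_slice(vol, 0, axis=0)
```

In the actual widget a slider would drive `index` and a plotting library would render the returned 2D slice.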

Transformer-MM-Explainability

[ICCV 2021, Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network; includes examples for DETR and VQA.

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0