There are 28 repositories under the feature-learning topic.
PyTorch implementation of Center Loss
[CVPR 2017] Unsupervised deep learning using unlabelled videos on the web
Experiments on unsupervised point cloud reconstruction.
A simple TensorFlow-based library for deep and/or denoising autoencoders.
OhmNet: Representation learning in multi-layer graphs
Fast, high-quality forecasts on relational and multivariate time-series data powered by new feature learning algorithms and automated ML.
Temporal-spatial Feature Learning of DCE-MR Images via 3DCNN
Feature learning over RDF data and OWL ontologies
Deep Co-occurrence Feature Learning for Visual Object Recognition (CVPR 2017)
Online feature-extraction and classification algorithm that learns representations of input patterns.
Experiments on point cloud segmentation.
Easy-to-read implementation of self-supervised learning using a vision transformer and knowledge distillation with no labels (DINO) :smiley:
Self-Supervised Feature Learning by Learning to Spot Artifacts. In CVPR, 2018.
This is an implementation of the Center Loss article (2016).
ConvGRU-based autoencoder for unsupervised spatio-temporal anomaly detection in computer-network (PCAP) traffic.
Miami Machine Learning Meetup - Feature Learning with Matrix Factorization and Neural Networks
Stochastic processes insights from VAE. Code for the paper: Learning minimal representations of stochastic processes with variational autoencoders.
Image Classification via Transfer Learning: Using Pre-trained Densely Connected Convolutional Network (DenseNet) weights
Experiment with World Models (Ha et al.) using variational recurrent neural networks for more task-relevant feature learning
A zero-shot document classifier.
Associated codebase for the paper "Learning Mixtures of Separable Dictionaries for Tensor Data: Analysis and Algorithms"
Ensembles and hyperparameter optimization for clustering pipelines.
In this project, we applied various DNNs to the problem of non-intrusive load monitoring (NILM) and compared their results across appliances on the REDD dataset. We took a sliding-window approach in the hope that, with further tuning and testing, real-time disaggregation can be achieved. We compare the disaggregated energy-consumption results on MSE, MAE, relative error, and F1 score.
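The comparison metrics named in that description can be computed along the following lines. This is a minimal NumPy sketch with made-up true/predicted power traces and an assumed 10 W on/off threshold, not the project's actual evaluation code:

```python
import numpy as np

# Hypothetical ground-truth and disaggregated power traces (watts) for one
# appliance over a short window -- illustrative values only.
y_true = np.array([0.0, 0.0, 150.0, 148.0, 152.0, 0.0])
y_pred = np.array([5.0, 0.0, 140.0, 150.0, 160.0, 2.0])

# Pointwise regression metrics on the power signal.
mse = np.mean((y_true - y_pred) ** 2)
mae = np.mean(np.abs(y_true - y_pred))

# Relative error in total energy over the window.
rel_err = abs(y_pred.sum() - y_true.sum()) / y_true.sum()

# F1 score on the appliance's on/off state, thresholding power at 10 W
# (the threshold is an assumption for this sketch).
on_true = y_true > 10.0
on_pred = y_pred > 10.0
tp = np.sum(on_true & on_pred)
precision = tp / max(on_pred.sum(), 1)
recall = tp / max(on_true.sum(), 1)
f1 = 2 * precision * recall / (precision + recall)

print(f"MSE={mse:.2f}  MAE={mae:.2f}  RelErr={rel_err:.3f}  F1={f1:.2f}")
```

MSE/MAE penalize errors in the reconstructed power signal itself, while the F1 score only checks whether the appliance's on/off state was detected correctly, which is why NILM papers commonly report both.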
Code for reproducing the paper "Dissecting the Effects of SGD Noise in Distinct Regimes of Deep Learning"
A modified COLMAP to take as input multi-channel images. It can be used to evaluate the proposed multi-channel feature/descriptor.
Learning interpretable single-cell morphological profiles from 3D Cell Painting z-stacks
Implementation of the paper Training Triplet Networks with GAN
We aim to illustrate the difference between feature extraction and feature learning. With classical machine learning models, features (the input to the model) must be designed "explicitly" to produce the best output for the task at hand. With deep learning models, these features are derived "implicitly" by the model as training progresses.
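The contrast described above can be sketched in a few lines of NumPy. The toy task (smooth sines vs. white noise), the hand-crafted "roughness" feature, and the tiny one-hidden-layer network are all assumptions for illustration, not code from the repository:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: classify 1-D signals as smooth (class 0) or noisy (class 1).
def make_data(n=200, length=64):
    t = np.linspace(0, 2 * np.pi, length)
    smooth = np.sin(t) + 0.1 * rng.standard_normal((n // 2, length))
    noisy = rng.standard_normal((n // 2, length))
    X = np.vstack([smooth, noisy])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

X, y = make_data()

# --- Feature EXTRACTION: a feature we design explicitly ---
# Mean absolute first difference ("roughness"), thresholded at its mean.
roughness = np.abs(np.diff(X, axis=1)).mean(axis=1)
pred_explicit = (roughness > roughness.mean()).astype(int)
acc_explicit = (pred_explicit == y).mean()

# --- Feature LEARNING: a one-hidden-layer network derives its own
# features from the raw signal (the columns of W1 act as learned
# feature detectors); trained with plain gradient descent on log-loss.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = 0.01 * rng.standard_normal((X.shape[1], 8))
W2 = 0.01 * rng.standard_normal(8)
lr = 0.1
for _ in range(500):
    H = np.tanh(X @ W1)                      # learned features
    p = sigmoid(H @ W2)
    grad_out = (p - y) / len(y)              # d(log-loss)/d(logit)
    W2 -= lr * (H.T @ grad_out)
    W1 -= lr * (X.T @ (np.outer(grad_out, W2) * (1.0 - H ** 2)))

pred_learned = (sigmoid(np.tanh(X @ W1) @ W2) > 0.5).astype(int)
acc_learned = (pred_learned == y).mean()

print(f"explicit-feature accuracy: {acc_explicit:.2f}")
print(f"learned-feature accuracy:  {acc_learned:.2f}")
```

The explicit route works only because we already knew roughness separates the classes; the network is handed nothing but the raw signal and must discover its own discriminative features during training, which is the point the repository makes about deep models.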