Sergei Bykov's starred repositories
LSTM_predict_merger_history
Jason's toy model for predicting halo mass from merger trees in TNG100
SupContrast
PyTorch implementation of "Supervised Contrastive Learning" (and SimCLR incidentally)
diffusion-models-astrophysical-fields-mlps
Code base for "Can denoising diffusion probabilistic models generate realistic astrophysical fields?"
halo_painting
Predicts 3D halo distributions from dark matter simulations using a physically motivated Wasserstein mapping network
llama3-from-scratch
llama3 implementation one matrix multiplication at a time
data-compression-inference-in-cosmology-with-SSL
Cosmological Data Compression and Inference with Self-Supervised Machine Learning
Advanced-Lane-Detection
Project: Advanced Lane Finding || Udacity: Self-Driving Car Engineer Nanodegree
ero-lh-class
SRG/eROSITA Lockman Hole source classification, accompanies the paper "SRG/eROSITA Survey in the Lockman Hole: Classification of X-ray Sources", published in Astronomy Letters.
annotated_deep_learning_paper_implementations
🧑‍🏫 60 implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), GANs (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
ai_projects
AI-related projects; tracks learning progress
ztf-viewer
ZTF data releases light curve viewer
ml-in-cosmology
A comprehensive list of published machine learning applications to cosmology
private-gpt
Interact with your documents using the power of GPT, 100% privately, no data leaks
kaggle-munich
the notebook for Kaggle Munich SHAP 101 talk
RainbowLasso
RainbowLasso compiles matched aperture fluxes from ultraviolet to infrared for all-sky surveys.
planck_szcat
Planck U-Net and y-map SZ catalogs
Hyper-parameter_optimization_for_Random_Forest
In this repository we optimize the random forest (RF) hyper-parameters for the DR16 dataset cross-matched with the WISE catalogue. We trained on about 80% of the dataset, using scikit-learn's RandomizedSearchCV to find the parameter settings that best estimate the photometric redshifts. As scoring metrics we used "neg_mean_squared_error", "neg_median_absolute_deviation", and the two combined; "neg_median_absolute_deviation" yielded the best results for this project.
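A minimal sketch of the search described above, using scikit-learn's RandomizedSearchCV with the "neg_mean_squared_error" scorer. The synthetic data and the parameter grid are placeholders: the repository's actual DR16 x WISE features and search space are not reproduced here.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for the photometric-redshift features
# (the real data is the DR16 x WISE cross-match, not included here).
X, y = make_regression(n_samples=200, n_features=8, noise=0.1, random_state=0)

# Hypothetical search space; the repository's exact grid may differ.
param_distributions = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10, 20],
    "min_samples_leaf": [1, 2, 4],
}

search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions=param_distributions,
    n_iter=5,                        # small for illustration
    scoring="neg_mean_squared_error",
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)           # best hyper-parameter setting found
```

The same pattern works with any scorer string, e.g. swapping in a median-absolute scorer and comparing `search.best_score_` across runs.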