Andrey Kuehlkamp's repositories
emvlc-ipad
Ensemble of Multi-View Learning Classifiers for Cross-Domain Iris Presentation Attack Detection
xai4b_tlpim
Supplementary material for the paper "Interpretable Deep Learning-Based Forensic Iris Segmentation and Recognition", published at the Explainable AI for Biometrics Workshop at WACV 2022
Class-balanced-loss-pytorch
PyTorch implementation of the paper "Class-Balanced Loss Based on Effective Number of Samples"
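The class-balanced loss reweights each class by the inverse of its "effective number" of samples, E_n = (1 - beta^n) / (1 - beta), so rare classes get larger loss weights. A minimal sketch of the weighting step in plain Python (class counts and beta below are illustrative, not values from the repo):

```python
def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class weights from the effective number of samples:
    E_n = (1 - beta^n) / (1 - beta), weight = 1 / E_n,
    normalized so the weights sum to the number of classes."""
    effective_num = [(1.0 - beta ** n) / (1.0 - beta) for n in samples_per_class]
    weights = [1.0 / e for e in effective_num]
    total = sum(weights)
    num_classes = len(samples_per_class)
    return [w * num_classes / total for w in weights]

# Illustrative imbalanced class counts: the rarest class (50 samples)
# receives the largest weight.
weights = class_balanced_weights([5000, 500, 50])
```

In the PyTorch setting these weights would typically be passed to the loss (e.g. as the `weight` tensor of a cross-entropy criterion).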
detectron2
Detectron2 is FAIR's next-generation platform for object detection, segmentation and other visual recognition tasks.
docker-stacks
Ready-to-run Docker images containing Jupyter applications
dockerized-hadoop
Files to create Hadoop docker images
ethereum-etl
Python scripts for ETL (extract, transform and load) jobs for Ethereum blocks, transactions, ERC20 / ERC721 tokens, transfers, receipts, logs, contracts, internal transactions. Data is available in Google BigQuery https://goo.gl/oY5BCQ
ethereum-etl-postgres
ETL for moving Ethereum data to PostgreSQL database
ethereum-input-decoder
Decode transaction inputs based on the contract ABI
flow-py-sdk
Unofficial Flow blockchain Python SDK
ganache-cli
Fast Ethereum RPC client for testing and development
Gender_From_Iris_1
Gender-From-Iris or Gender-From-Mascara? - WACV 2017 Paper
goingmeta-1
Code and resources used in the Going Meta sessions
GraphSmote
PyTorch implementation of the paper "GraphSMOTE: Imbalanced Node Classification on Graphs with Graph Neural Networks", to appear at WSDM 2021
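GraphSMOTE adapts SMOTE-style oversampling to graphs: a synthetic minority node is generated by interpolating between a minority node's embedding and that of its nearest same-class neighbor. A hedged sketch of just the interpolation step in plain Python (function names and the toy embeddings are illustrative, not the repo's API; the paper's full method also learns an edge generator for the new nodes):

```python
import math
import random

def nearest_same_class(x, minority_embeddings):
    """Index of the closest *other* minority embedding (Euclidean)."""
    best, best_d = None, float("inf")
    for i, y in enumerate(minority_embeddings):
        if y is x:
            continue  # skip the anchor node itself
        d = math.dist(x, y)
        if d < best_d:
            best, best_d = i, d
    return best

def smote_interpolate(x, minority_embeddings, rng=random):
    """Synthetic embedding on the segment between x and its nearest
    same-class neighbor: x + lam * (nn - x), with lam ~ U(0, 1)."""
    nn = minority_embeddings[nearest_same_class(x, minority_embeddings)]
    lam = rng.random()
    return [xi + lam * (ni - xi) for xi, ni in zip(x, nn)]

# Toy 2-D embeddings for one minority class.
minority = [[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]]
synthetic = smote_interpolate(minority[0], minority)
```

Because the anchor `[0.0, 0.0]` is closest to `[1.0, 0.0]`, the synthetic node always lands on the segment between them.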
hello_dapr
Minimal containerized dapr application
historical-price-feed-data
Chainlink Data Feeds Historical Data
lighter
REST API for Apache Spark on K8S
polygon-etl
ETL (extract, transform and load) tools for ingesting Polygon blockchain data to Google BigQuery and Pub/Sub
pyspark-devcontainer
A simple VS Code devcontainer setup for local PySpark development
redisgraph-bulk-loader
A Python utility for building RedisGraph databases from CSV inputs
spark-standalone-cluster-on-docker
Learn Apache Spark in Scala, Python (PySpark) and R (SparkR) by building your own cluster with a JupyterLab interface on Docker. :zap: