Yunhan Jia's repositories
flink
Apache Flink
jiayunhan.github.io
Yunhan's personal page
flink-recommandSystem-demo
:helicopter::rocket: A real-time product recommendation system built on Apache Flink. Flink computes product popularity and caches it in Redis, and analyzes log data, storing profile tags and real-time records in HBase. When a user requests recommendations, the popularity ranking is re-ordered according to the user's profile; the collaborative-filtering and tag-based recommendation modules then attach related products to each item in the new ranking, and the resulting list is returned to the user.
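The request-time re-ranking step described above can be sketched as follows. This is an illustrative sketch only: the function and field names (`rerank`, `hot_list`, `user_tags`, `product_tags`, `related`) are assumptions, not taken from the flink-recommandSystem-demo code base.

```python
# Illustrative sketch of profile-based re-ranking of a popularity list.
# All names here are hypothetical, not from the actual repository.

def rerank(hot_list, user_tags, product_tags, related):
    """Re-order a (product_id, popularity) list by overlap with the
    user's profile tags, then attach related products to each entry."""
    def boosted(item):
        pid, score = item
        # Boost popularity by the number of tags shared with the user profile.
        overlap = len(user_tags & product_tags.get(pid, set()))
        return score * (1 + overlap)

    ranked = sorted(hot_list, key=boosted, reverse=True)
    # Attach related products (e.g. from CF and tag-based modules).
    return [
        {"product": pid, "score": boosted((pid, score)),
         "related": related.get(pid, [])}
        for pid, score in ranked
    ]
```

For example, a product with a lower raw popularity score can outrank a hotter one if it matches more of the user's profile tags.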
foolbox
Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, Keras, …
movie-swiper
React Native client for TMDb 🎬 https://www.themoviedb.org
symbolic_interval
A library for symbolic interval analysis
perceptron-benchmark
Robustness benchmark for DNN models.
ijcai_defense
Test
dispersion_reduction
Enhancing cross-task transferability with dispersion reduction
neural-style
Neural style in TensorFlow! :art:
unrestricted-adversarial-examples
Contest Proposal and infrastructure for the Unrestricted Adversarial Examples Challenge
keras-yolo3
A Keras implementation of YOLOv3 (TensorFlow backend)
sourcegraph
Code search and intelligence, self-hosted and scalable
keras-retinanet
Keras implementation of RetinaNet object detection.
tensorfuzz
A library for performing coverage-guided fuzzing of neural networks
darknet
Convolutional Neural Networks
docs
Documentation and Quick Start Guides for the S2E Symbolic Execution Platform
deep-anpr
Using neural networks to build an automatic number plate recognition system
libprotobuf-mutator
Fork of google/libprotobuf-mutator, integrated to provide structure-aware fuzzing support for Apollo
cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
infer
A static analyzer for Java, C, C++, and Objective-C
afl-fuzz
Non-official repository for lcamtuf's American Fuzzy Lop http://lcamtuf.coredump.cx/afl/
rusty-machine
Machine Learning library for Rust
robust-physical-attack
Physical adversarial attack for fooling the Faster R-CNN object detector
robust_physical_perturbations
Public release of code for Robust Physical-World Attacks on Deep Learning Visual Classification (Eykholt et al., CVPR 2018)
adversarial-examples
Adversarial Examples: Attacks and Defenses for Deep Learning