liujihai (yellow123Nike)


Geek Repo

Company: Shanghai Ocean University (上海海洋大学)

Location: Shanghai


liujihai's starred repositories

pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration

Language: Python · License: NOASSERTION · Stargazers: 81502 · Issues: 1740 · Issues: 44311

pytorch-cpp

C++ Implementation of PyTorch Tutorials for Everyone

Language: C++ · License: MIT · Stargazers: 1926 · Issues: 51 · Issues: 65

tf-encrypted

A Framework for Encrypted Machine Learning in TensorFlow

Language: Python · License: Apache-2.0 · Stargazers: 1199 · Issues: 53 · Issues: 437

riscv-v-spec

Working draft of the proposed RISC-V V vector extension

Language: Assembly · License: CC-BY-4.0 · Stargazers: 941 · Issues: 131 · Issues: 704

lolremez

📈 Polynomial Approximations using the Remez Algorithm

Language: C++ · License: WTFPL · Stargazers: 394 · Issues: 22 · Issues: 25
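lolremez computes minimax polynomial approximations with the Remez exchange algorithm. As a rough point of comparison, Chebyshev interpolation is a standard "near-minimax" alternative that fits in a few lines of NumPy. The sketch below is not the Remez algorithm itself, and the target function and degree are arbitrary choices for illustration:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Degree-5 Chebyshev interpolant of exp on [-1, 1]: a classical
# near-minimax approximation whose maximum error is close to the
# true Remez optimum for smooth functions.
deg = 5
coef = C.chebinterpolate(np.exp, deg)

xs = np.linspace(-1.0, 1.0, 10_001)
err = np.max(np.abs(C.chebval(xs, coef) - np.exp(xs)))
print(f"degree-{deg} max error on [-1, 1]: {err:.2e}")
```

For production-quality minimax coefficients (and rational approximations), the Remez exchange implemented by lolremez refines exactly this kind of starting point.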

Literatures-on-Homomorphic-Encryption

A reading list for homomorphic encryption

License: MIT · Stargazers: 77 · Issues: 5 · Issues: 0

FHE-MP-CNN

Implementation of a deep ResNet model under the CKKS scheme in the Microsoft SEAL library, using multiplexed parallel convolution

Language: C++ · License: MIT · Stargazers: 57 · Issues: 0 · Issues: 0

PP-CNN

Privacy-preserving convolutional neural network using homomorphic encryption for secure inference

Language: C++ · License: Apache-2.0 · Stargazers: 44 · Issues: 2 · Issues: 2

LibTorch-ResNet-CIFAR

ResNet Implementation, Training, and Inference Using LibTorch C++ API

Language: C++ · License: MIT · Stargazers: 34 · Issues: 3 · Issues: 1

CryptoDL

Privacy-preserving Deep Learning based on homomorphic encryption (HE)

TrainableActivation

Implementation for the article "Trainable Activations for Image Classification"

Language: Python · License: MIT · Stargazers: 20 · Issues: 1 · Issues: 3

openfhe-logreg-training-examples

OpenFHE-Based Examples of Logistic Regression Training using Nesterov Accelerated Gradient Descent

Language: C++ · License: BSD-2-Clause · Stargazers: 16 · Issues: 1 · Issues: 3
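The repository above trains logistic regression with Nesterov accelerated gradient (NAG) under homomorphic encryption. A minimal plaintext sketch of the same optimizer, assuming NumPy and a synthetic dataset — all names and hyperparameters here are illustrative, not taken from the OpenFHE examples:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg_nag(X, y, lr=0.1, momentum=0.9, epochs=200):
    """Logistic regression fit with Nesterov accelerated gradient:
    the gradient is evaluated at the look-ahead point w + momentum*v."""
    n, d = X.shape
    w, v = np.zeros(d), np.zeros(d)
    for _ in range(epochs):
        g = X.T @ (sigmoid(X @ (w + momentum * v)) - y) / n
        v = momentum * v - lr * g
        w = w + v
    return w

# Synthetic data drawn from a known logistic model (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = (rng.uniform(size=500) < sigmoid(X @ true_w)).astype(float)

w = train_logreg_nag(X, y)
acc = np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5))
```

The look-ahead gradient is what distinguishes NAG from plain momentum; under FHE the same update runs on ciphertexts, with sigmoid replaced by a polynomial approximation.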

PrivDL

Code for the paper "Privacy-Preserving Deep Learning: Leveraging Deformable Operators for Secure Task Learning"

Language: Python · Stargazers: 6 · Issues: 1 · Issues: 0

fabric-crosschain

Fabric crosschain research

Language: Shell · License: Apache-2.0 · Stargazers: 4 · Issues: 1 · Issues: 0

seccomp

Secure comparison functions written with the HEAAN homomorphic encryption library

Language: C · Stargazers: 3 · Issues: 2 · Issues: 0

function-approximation

Approximating a sine function with polynomials using TensorFlow, demonstrating weights, an SSE (sum of squared errors) loss, gradient descent, and backpropagation

Language: Jupyter Notebook · Stargazers: 2 · Issues: 2 · Issues: 0

Approximating-Continuous-Function-With-ReLU

Function approximation with ReLU networks

Language: Jupyter Notebook · License: MIT · Stargazers: 2 · Issues: 1 · Issues: 0
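Any continuous function on an interval can be approximated by a piecewise-linear interpolant, and such an interpolant is exactly representable as a one-hidden-layer ReLU network. A minimal NumPy sketch (the target function and knot placement are arbitrary choices, not taken from the repository):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_interpolant(f, a, b, n_knots):
    """Piecewise-linear interpolant of f on [a, b], written as a
    one-hidden-layer ReLU network: g(x) = f(a) + sum_i w_i * relu(x - t_i)."""
    t = np.linspace(a, b, n_knots)
    slopes = np.diff(f(t)) / np.diff(t)   # slope of each linear segment
    w = np.diff(slopes, prepend=0.0)      # w_0 = first slope, then slope changes
    knots = t[:-1]                        # one ReLU unit per segment start
    return lambda x: f(a) + relu(np.subtract.outer(x, knots)) @ w

g = relu_interpolant(np.sin, 0.0, 2 * np.pi, 64)
xs = np.linspace(0.0, 2 * np.pi, 5_000)
max_err = np.max(np.abs(g(xs) - np.sin(xs)))
```

Each hidden unit contributes one kink; the interpolation error shrinks quadratically in the knot spacing for twice-differentiable targets.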

Exploring_Neural_Network

A quartic function approximator built with TensorFlow and the Keras API to compare how ReLU, tanh, and sigmoid activation functions behave on shuffled and unshuffled datasets across two different network structures. A multilayer neural network was also designed to approximate the XOR function from digital logic.

Language: Jupyter Notebook · Stargazers: 1 · Issues: 2 · Issues: 0

minimaxApprox

This is a read-only mirror of the CRAN R package repository. minimaxApprox: implementation of the Remez algorithm for polynomial and rational function approximation. Homepage: https://github.com/aadler/MiniMaxApprox. Report bugs for this package: https://github.com/aadler/MiniMaxApprox/issues

Language: R · Stargazers: 1 · Issues: 3 · Issues: 0

PolynomialLR_withoutLibrary

Polynomial linear regression: first generate a dataset from a sine function and add Gaussian noise with mean zero and variance 0.1. Consider a polynomial of degree k in x with parameters β0, β1, …, βk. Derive the loss function for each degree k and optimize it using gradient descent.

Language: Jupyter Notebook · Stargazers: 1 · Issues: 1 · Issues: 0
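The procedure described above can be sketched in plain NumPy. Assumptions not in the original: inputs are drawn on [-1, 1] (to keep the monomial features well-conditioned for plain gradient descent), the degree is fixed at k = 5, and the learning rate is a constant:

```python
import numpy as np

rng = np.random.default_rng(42)

# Dataset: a sine function plus Gaussian noise (mean 0, variance 0.1).
n = 200
x = rng.uniform(-1.0, 1.0, n)
y = np.sin(np.pi * x) + rng.normal(0.0, np.sqrt(0.1), n)

def fit_poly_gd(x, y, k, lr=0.3, epochs=20_000):
    """Fit beta_0..beta_k of a degree-k polynomial by gradient descent
    on the mean-squared-error loss."""
    X = np.vander(x, k + 1, increasing=True)   # columns 1, x, x^2, ..., x^k
    beta = np.zeros(k + 1)
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ beta - y) / len(y)
        beta -= lr * grad
    return beta

k = 5
beta = fit_poly_gd(x, y, k)
mse = np.mean((np.vander(x, k + 1, increasing=True) @ beta - y) ** 2)
```

Looping `k` over several degrees and comparing the resulting losses reproduces the model-selection exercise the repository describes.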

ML_project

Project approximating a function with neural networks, varying the number of layers over L = [2, 4, 8, 16, 32, 64, 128] and the number of neurons over N = [2, 4, 8, 16, 32, 64, 128, 256]. All configurations were implemented for three hidden-layer activations: ReLU, tanh, and sigmoid.

Language: Jupyter Notebook · Stargazers: 1 · Issues: 1 · Issues: 0

Parameter-optimization-for-DeepZ-robustness-verification

DeepZ is a method for local robustness verification based on zonotopes, a type of convex relaxation, and abstract transformations. While affine transformations and convolutions can be represented exactly in this framework, a sound over-approximation has to be used for the ReLU function. The abstract ReLU transformer used in DeepZ is parameterized by one parameter per hidden unit. In this work we propose an optimization strategy for this parameterization that increases the maximum verifiable image perturbation for both fully-connected and convolutional network architectures.

Language: Python · Stargazers: 1 · Issues: 2 · Issues: 0
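The ReLU over-approximation described above can be sketched for a single layer. Below is a hedged NumPy sketch of a zonotope ReLU transformer using the standard minimal-area slope λ = u/(u−l) and one fresh noise symbol per crossing unit; DeepZ's learned per-unit parameter is omitted, so this is the unparameterized baseline, and all names are illustrative:

```python
import numpy as np

def zono_bounds(c, G):
    """Interval concretization of the zonotope x = c + G @ eps, eps in [-1, 1]^m."""
    r = np.abs(G).sum(axis=1)
    return c - r, c + r

def relu_transformer(c, G):
    """Sound zonotope ReLU transformer with the minimal-area slope
    lam = u / (u - l); adds one fresh noise symbol per crossing unit."""
    l, u = zono_bounds(c, G)
    n, _ = G.shape
    cross = (l < 0) & (u > 0)
    lam = np.where(u <= 0, 0.0, 1.0)           # exact when l >= 0 or u <= 0
    lam[cross] = u[cross] / (u[cross] - l[cross])
    mu = np.zeros(n)
    mu[cross] = -lam[cross] * l[cross] / 2.0   # half-height of the relaxation band
    c_out = lam * c + mu
    G_out = lam[:, None] * G
    fresh = np.diag(mu)[:, cross]              # new generators, one per crossing unit
    return c_out, np.hstack([G_out, fresh])

# Soundness check: every concrete ReLU output must lie in the output zonotope.
rng = np.random.default_rng(1)
c, G = rng.normal(size=4), rng.normal(size=(4, 3))
c2, G2 = relu_transformer(c, G)
l2, u2 = zono_bounds(c2, G2)
eps = rng.uniform(-1.0, 1.0, size=(3, 10_000))
ys = np.maximum(c[:, None] + G @ eps, 0.0)
sound = bool(np.all((ys >= l2[:, None] - 1e-9) & (ys <= u2[:, None] + 1e-9)))
```

The work summarized above replaces the fixed slope λ per crossing unit with an optimized per-unit parameter, which is exactly the degree of freedom this sketch hard-codes.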