pedromlsreis / keras-radam

RAdam implemented in Keras & TensorFlow

Home Page: https://pypi.org/project/keras-rectified-adam/

Keras RAdam

Unofficial implementation of RAdam (from the paper "On the Variance of the Adaptive Learning Rate and Beyond", https://arxiv.org/abs/1908.03265) in Keras and TensorFlow.

Install

pip install keras-rectified-adam

Usage

import keras
import numpy as np
from keras_radam import RAdam

# Build toy model with RAdam optimizer
model = keras.models.Sequential()
model.add(keras.layers.Dense(input_shape=(17,), units=3))
model.compile(RAdam(), loss='mse')

# Generate toy data
x = np.random.standard_normal((4096 * 30, 17))
w = np.random.standard_normal((17, 3))
y = np.dot(x, w)

# Fit
model.fit(x, y, epochs=5)
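
Nothing RAdam-specific is needed once training is done; the fitted model is evaluated with the standard Keras API. A minimal continuation of the toy example above (the validation arrays here are made up for illustration):

# Evaluate on fresh samples drawn from the same linear map
x_val = np.random.standard_normal((256, 17))
y_val = np.dot(x_val, w)
print(model.evaluate(x_val, y_val))  # mean squared error, should be close to zero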

TensorFlow without Keras

from keras_radam.training import RAdamOptimizer

RAdamOptimizer(learning_rate=1e-3)
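
The line above only constructs the optimizer. Below is a minimal sketch of wiring it into a TF 1.x-style graph training loop, assuming RAdamOptimizer follows the standard tf.compat.v1.train.Optimizer interface (i.e. exposes minimize); the placeholder graph, batch size, and step count are made up for illustration:

import numpy as np
import tensorflow as tf

from keras_radam.training import RAdamOptimizer

tf.compat.v1.disable_eager_execution()  # run in graph mode, as in TF 1.x

x = tf.compat.v1.placeholder(tf.float32, shape=(None, 17))
y = tf.compat.v1.placeholder(tf.float32, shape=(None, 3))
w = tf.compat.v1.get_variable('w', shape=(17, 3))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

train_op = RAdamOptimizer(learning_rate=1e-3).minimize(loss)

true_w = np.random.standard_normal((17, 3))
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for _ in range(500):
        batch_x = np.random.standard_normal((32, 17))
        sess.run(train_op, feed_dict={x: batch_x, y: batch_x.dot(true_w)})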

Use Warmup

from keras_radam import RAdam

RAdam(total_steps=10000, warmup_proportion=0.1, min_lr=1e-5)
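
With these illustrative settings, the first warmup_proportion * total_steps = 1,000 steps appear to be used for learning-rate warmup, with min_lr bounding the schedule; see the library for the exact behaviour. The warmup-enabled optimizer drops into model.compile like any other Keras optimizer, e.g. reusing the toy model from the Usage section:

# Same toy model as above, now trained with a warmup-enabled RAdam (sketch)
model.compile(RAdam(total_steps=10000, warmup_proportion=0.1, min_lr=1e-5), loss='mse')
model.fit(x, y, epochs=5)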

Q & A

About Correctness

The optimizer produces similar losses and weights to the official optimizer after 500 steps.

Use tf.keras or TensorFlow 2.0

Set the environment variable TF_KERAS=1 to use tensorflow.python.keras.
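
The variable has to be set before keras_radam is imported (the flag is presumably read at import time); one way to do that from inside a script:

import os
os.environ['TF_KERAS'] = '1'  # must be set before the import below

from keras_radam import RAdam  # now backed by tensorflow.python.keras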

Use the Theano Backend

Set the environment variable KERAS_BACKEND=theano to enable the Theano backend.
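
This is the standard Keras backend switch and, as with TF_KERAS above, it must happen before Keras is imported; a minimal sketch:

import os
os.environ['KERAS_BACKEND'] = 'theano'  # standard Keras backend selection

import keras
from keras_radam import RAdam  # now runs on the Theano backend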

License

MIT


Languages

Python 99.3%, Shell 0.7%