jjeamin / stock_trader

Stock trader using reinforcement learning


⏰ Stock Trader

Hallym Univ. reinforcement learning project

  • Korean stock market: KOSPI200
  • Reinforcement learning

🌈 Data

  • Collected from KRX (Korea Exchange)

Download

python download.py --start_date [DATE] --end_date [DATE]

πŸ“³ Day Bot

A bot that selects which KOSPI200 companies to invest in

Train

python train.py

Test

python test.py --load_path [./checkpoint/YOUR_MODEL]

🍩 Env

Reward

  • ν•œ νšŒμ‚¬μ˜ ν™•λ₯  κ°’
reward = change(CC) * action(one-hot encoding vector)
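A minimal sketch of this reward, assuming a one-hot action over companies and a vector of per-company CC values (the numbers below are illustrative, not from the project):

```python
import numpy as np

# CC = (Close(T) - Close(T-1)) / Close(T-1) per company (illustrative values)
cc = np.array([0.02, -0.01, 0.005, 0.03, -0.02])

# One-hot action: the agent picks company index 3
action = np.zeros_like(cc)
action[3] = 1.0

# Reward is the chosen company's price change
reward = float(np.dot(cc, action))
print(reward)  # 0.03
```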

State

  • Input shape: (num company, window size, num feature)
  • Num company: 200 → data for 200 companies
  • Window size: 10 → a 10-day (i.e. two-week) lookback
  • Num feature: 10 → CO, HO, LO, OO, CC, HC, LC, OC, trading ratio (거래율), change ratio (대비율)
      • CO: Close(T-1) / Open(T-1)
      • HO: High(T-1) / Open(T-1)
      • LO: Low(T-1) / Open(T-1)
      • OO: Open(T) / Open(T-1)
      • CC: (Close(T) - Close(T-1)) / Close(T-1)
      • HC: (High(T) - Close(T)) / Close(T)
      • LC: (Low(T) - Close(T)) / Close(T)
      • OC: (Open(T) - Close(T-1)) / Close(T-1)
      • Trading ratio (거래율): Volume(T) / Total shares
      • Change ratio (대비율): Change(T) / Close(T-1)
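The per-day feature vector can be sketched as follows; the `day_features` helper and its dict-based inputs are hypothetical illustrations, not the project's actual preprocessing code:

```python
import numpy as np

def day_features(prev, cur, total_share):
    """Build the 10 features for day T from day T-1 (prev) and day T (cur) OHLCV data."""
    return np.array([
        prev["close"] / prev["open"],                    # CO
        prev["high"] / prev["open"],                     # HO
        prev["low"] / prev["open"],                      # LO
        cur["open"] / prev["open"],                      # OO
        (cur["close"] - prev["close"]) / prev["close"],  # CC
        (cur["high"] - cur["close"]) / cur["close"],     # HC
        (cur["low"] - cur["close"]) / cur["close"],      # LC
        (cur["open"] - prev["close"]) / prev["close"],   # OC
        cur["volume"] / total_share,                     # trading ratio (거래율)
        cur["change"] / prev["close"],                   # change ratio (대비율)
    ])

# Illustrative two-day example for one company
prev = {"open": 100.0, "high": 110.0, "low": 95.0, "close": 105.0}
cur = {"open": 106.0, "high": 112.0, "low": 104.0, "close": 110.0,
       "volume": 1000.0, "change": 5.0}
features = day_features(prev, cur, total_share=100_000.0)  # shape (10,)
```

Stacking these vectors over 10 days and 200 companies yields the (200, 10, 10) state tensor.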

πŸ€– Agent

Model

# lib/agent/agents.py
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Flatten, MaxPool2D

model = tf.keras.Sequential()
model.add(Conv2D(128, kernel_size=(1, 3), strides=1, activation="relu", input_shape=input_shape))
model.add(MaxPool2D(pool_size=(1, 2)))
model.add(Conv2D(64, kernel_size=(1, 4), strides=1, activation="relu"))
model.add(Conv2D(1, kernel_size=1, activation="sigmoid"))
model.add(Flatten())

Optimizer = Adam
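Putting the model and optimizer together might look like the sketch below. The (200, 10, 10) input shape is inferred from the State section, and the repo does not specify a loss or learning rate, so `compile` is shown with the Adam optimizer only. With this input, the final 1×1 sigmoid convolution plus Flatten produces one probability per company:

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Flatten, MaxPool2D

# (num company, window size, num feature) from the State section
input_shape = (200, 10, 10)

model = tf.keras.Sequential([
    Conv2D(128, kernel_size=(1, 3), strides=1, activation="relu",
           input_shape=input_shape),
    MaxPool2D(pool_size=(1, 2)),
    Conv2D(64, kernel_size=(1, 4), strides=1, activation="relu"),
    Conv2D(1, kernel_size=1, activation="sigmoid"),
    Flatten(),
])
model.compile(optimizer=tf.keras.optimizers.Adam())

print(model.output_shape)  # (None, 200): one sigmoid score per KOSPI200 company
```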

