
The official PyTorch implementation of the paper "Sym-Q: Adaptive Symbolic Regression via Sequential Decision-Making".

Sym-Q

Inspired by how human experts refine and adapt expressions, we introduce the Symbolic Q-network (Sym-Q), a novel reinforcement-learning-based model that redefines symbolic regression as a sequential decision-making task. Sym-Q leverages supervised demonstrations and refines expressions based on reward signals that indicate the quality of the fit. Its distinctive ability to manage the complexity of expression trees and perform precise step-wise updates significantly enhances flexibility and efficiency. Our results demonstrate that Sym-Q not only excels at recovering underlying mathematical structures but also learns to efficiently refine the output expression based on reward signals, thereby discovering the underlying expressions. Sym-Q paves the way for more intuitive and impactful discoveries in the physical sciences, marking a substantial advancement in symbolic regression.
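
As a rough, hypothetical sketch of this sequential decision-making view (none of the names below come from the Sym-Q codebase), an expression is grown one symbol at a time, with a Q-network scoring the candidate symbols at each step:

# Toy illustration only: grow an expression token by token with a Q-network.
# The network, state encoding, and vocabulary here are placeholders.
import torch
import torch.nn as nn

VOCAB = ["add", "mul", "sin", "x_1", "x_2", "c"]  # toy symbol vocabulary

class ToyQNetwork(nn.Module):
    def __init__(self, state_dim=32, n_actions=len(VOCAB)):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, state):
        return self.net(state)  # one Q-value per candidate symbol

def build_expression(q_net, state_dim=32, max_steps=8):
    tokens, state = [], torch.zeros(state_dim)
    for _ in range(max_steps):
        action = int(torch.argmax(q_net(state)))  # greedy choice of next symbol
        tokens.append(VOCAB[action])
        state = torch.randn(state_dim)  # stand-in for the tree/data encoding
    return tokens

print(build_expression(ToyQNetwork()))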

Workspace Setup

To create the symbolic regression workspace with the necessary repositories, follow these steps:

mkdir symbolic_ws
cd symbolic_ws
git clone https://github.com/SymposiumOrganization/NeuralSymbolicRegressionThatScales.git
git clone https://github.com/AILWQ/Joint_Supervised_Learning_for_SR.git
git clone https://github.com/Wenqi-Z/Sym-Q.git
cp -R NeuralSymbolicRegressionThatScales/src/nesymres Joint_Supervised_Learning_for_SR/src
mkdir -p Joint_Supervised_Learning_for_SR/Dataset/2_var/5000000/Train
mkdir -p Joint_Supervised_Learning_for_SR/Dataset/2_var/5000000/SSDNC

After running the above commands, your workspace should have the following structure:

symbolic_ws
├───NeuralSymbolicRegressionThatScales
├───Joint_Supervised_Learning_for_SR
└───Sym-Q

Environment Setup

Create and activate a Conda environment for Sym-Q:

conda create -n SymQ python=3.9
conda activate SymQ
pip install -e NeuralSymbolicRegressionThatScales/src/
pip install -r Sym-Q/requirements.txt
cd Joint_Supervised_Learning_for_SR
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
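
Optionally, you can sanity-check the setup with a short Python snippet (not part of this repository) that confirms the key packages are importable from the activated environment:

# Optional check: confirm PyTorch and the editable nesymres install
# are importable from the activated SymQ environment.
import torch
import nesymres

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("nesymres imported from", nesymres.__file__)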

Dataset Creation

Generate the dataset for training:

cd Joint_Supervised_Learning_for_SR/data_creation
python add_points_to_json.py
python gen_SSDNC_benchmark.py
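
To confirm that the generation step populated the dataset folders created earlier, a small check like the following can be run from the symbolic_ws root (this snippet is not part of the repository):

# Optional check: count the entries written into the Train and SSDNC folders.
from pathlib import Path

root = Path("Joint_Supervised_Learning_for_SR/Dataset/2_var/5000000")
for split in ("Train", "SSDNC"):
    print(split, "->", sum(1 for _ in (root / split).iterdir()), "entries")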

Quick Start

Train the Sym-Q model:

cd Sym-Q
python train.py

This creates a folder /model/GENERATION_TIME containing the model weights and evaluation data. Generate beam-search results for evaluation:

python beam_search.py --weights_path PATH_TO_WEIGHTS --target ssdnc_r2
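
The ssdnc_r2 target scores the recovered expressions by their coefficient of determination (R²) on the SSDNC benchmark. As a standalone illustration of that metric only (the expression and data below are placeholders, not outputs of the script):

# Illustration of the R^2 metric on a placeholder expression and data.
import numpy as np
import sympy as sp

x1, x2 = sp.symbols("x_1 x_2")
f = sp.lambdify((x1, x2), sp.sympify("sin(x_1) + x_2**2"), "numpy")

X = np.random.uniform(-1, 1, size=(256, 2))    # placeholder support points
y_true = np.sin(X[:, 0]) + X[:, 1] ** 2        # placeholder ground truth
y_pred = f(X[:, 0], X[:, 1])

r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
print(f"R^2 = {r2:.4f}")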

Run few-shot learning:

python few_shot.py --resume_path PATH_TO_WEIGHTS

Acknowledgment

Our implementation mainly refers to the following codebases. We gratefully thank the authors for their wonderful work.

SymbolicGPT, DSO, NeSymReS, T-JSL

Citing this work

If you find our work useful and use this code, please cite it as follows:

@article{tian2024sym,
  title={Sym-Q: Adaptive Symbolic Regression via Sequential Decision-Making},
  author={Tian, Yuan and Zhou, Wenqi and Dong, Hao and Kammer, David S and Fink, Olga},
  journal={arXiv preprint arXiv:2402.05306},
  year={2024}
}


License

MIT License

