TensorAeroSpace is a set of control objects, OpenAI Gym simulation environments, and Reinforcement Learning (RL) algorithm implementations.
Quick installation
```bash
git clone https://github.com/tensoraerospace/tensoraerospace.git
poetry install
```
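After installation, a quick import can confirm that the package is available in the environment (a minimal sketch; the top-level module name `tensoraerospace` is assumed from the repository name):

```python
# Minimal post-install check; assumes the package exposes a top-level
# module named `tensoraerospace`, matching the repository name.
import tensoraerospace

print(tensoraerospace.__name__)
```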
Launching a Docker image
```bash
docker build -t tensor_aero_space . --platform=linux/amd64
docker run -v example:/app/example -p 8888:8888 -it tensor_aero_space
```
All examples for launching and working with the TensorAeroSpace library are located in the ./example folder.
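A typical workflow follows the standard OpenAI Gym loop. The sketch below only illustrates that pattern; the environment id is a hypothetical placeholder, and the notebooks in ./example show the actual ids and agent setup:

```python
import gym

# Hypothetical environment id used only to illustrate the classic Gym loop;
# see the notebooks in ./example for the environments actually registered
# by TensorAeroSpace.
env = gym.make("SomeTensorAeroSpaceEnv-v0")

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()          # random policy as a placeholder
    obs, reward, done, info = env.step(action)  # classic (pre-0.26) Gym API
env.close()
```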
TensorAeroSpace includes the following control and RL algorithms:

| Name | Export to HuggingFace |
|---|---|
| IHDP (Incremental Heuristic Dynamic Programming) | ❌ |
| DQN (Deep Q Learning) | ❌ |
| SAC (Soft Actor Critic) | ✅ |
| A3C (Asynchronous Advantage Actor-Critic) | ❌ |
| PPO (Proximal Policy Optimization) | ✅ |
| MPC (Model Predictive Control) | ✅ |
| A2C (Advantage Actor-Critic) with NARX Critic | ❌ |
| A2C (Advantage Actor-Critic) | ✅ |
| PID (proportional–integral–derivative controller) | ✅ |
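As an illustration of the simplest entry in the table, below is a minimal, library-independent sketch of the discrete PID control law; it is not the TensorAeroSpace implementation, only the standard form of the algorithm:

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Example: one control step driving a measurement toward a setpoint of 1.0
controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
u = controller.update(setpoint=1.0, measurement=0.0)
```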
TensorAeroSpace includes the following control objects and simulation environments:

- General Dynamics F-16 Fighting Falcon
- Boeing-747
- ELV (Expendable Launch Vehicle)
- Rocket model
- McDonnell Douglas F-4C
- North American X-15
- Geostationary satellite
- Communication satellite
- LAPAN Surveillance Aircraft (LSU)-05 UAV
- Ultrastick-25e UAV
- UAV in State Space
- UAV in Unity environment
TensorAeroSpace can work with the Unity ML-Agents toolkit.
An example environment is available in the UnityAirplaneEnvironment repository.
The documentation includes examples of setting up the network and working with the DQN agent.
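For orientation, the sketch below shows the two core pieces a DQN agent needs: a Q-network and an epsilon-greedy action rule. It is a generic PyTorch illustration, not the network configuration from the documentation or the UnityAirplaneEnvironment example:

```python
import random

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Small MLP mapping an observation vector to one Q-value per action."""

    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def epsilon_greedy(q_net: QNetwork, obs: torch.Tensor, epsilon: float, n_actions: int) -> int:
    """Pick a random action with probability epsilon, otherwise the greedy one."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(q_net(obs).argmax().item())
```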
TensorAeroSpace contains examples of working with Simulink models.
The documentation provides examples of building a Simulink model and compiling it into operational code that can be integrated into an OpenAI Gym simulation environment.
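The general pattern is to wrap the generated model code in a Gym environment. The sketch below shows such a wrapper; `compiled_model`, its `reset`/`step` interface, and the observation and action dimensions are hypothetical placeholders, not the interface produced by the Simulink code generation described in the documentation:

```python
import gym
import numpy as np
from gym import spaces


class CompiledSimulinkEnv(gym.Env):
    """Gym wrapper around a compiled Simulink model (interface is hypothetical)."""

    def __init__(self, compiled_model, dt: float = 0.01):
        self.model = compiled_model  # object assumed to expose reset() and step(u) -> state
        self.dt = dt
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32)

    def reset(self):
        state = self.model.reset()
        return np.asarray(state, dtype=np.float32)

    def step(self, action):
        state = self.model.step(action)        # advance the compiled model by one dt
        obs = np.asarray(state, dtype=np.float32)
        reward = -float(np.sum(obs ** 2))      # placeholder quadratic cost
        done = False
        return obs, reward, done, {}
```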
TensorAeroSpace includes control objects implemented as state space matrices.
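As a reminder of what such a control object is, the snippet below simulates a generic discrete-time linear system x_{k+1} = A x_k + B u_k, y_k = C x_k + D u_k with placeholder matrices; the actual A, B, C, D for each vehicle are provided by the library:

```python
import numpy as np

# Placeholder matrices for a generic 2-state, 1-input, 1-output system;
# TensorAeroSpace supplies the actual matrices for each control object.
A = np.array([[1.0, 0.01], [0.0, 1.0]])
B = np.array([[0.0], [0.01]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

x = np.zeros((2, 1))
outputs = []
for _ in range(100):
    u = np.array([[1.0]])          # constant control input
    y = C @ x + D @ u              # output equation
    x = A @ x + B @ u              # discrete-time state update
    outputs.append(float(y[0, 0]))
```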