A decoupled 3D agent mastering a custom OpenGL environment through Reinforcement Learning (RL) optimized with Prioritized Experience Replay and Polyak Averaging.
GL_Rexxy is an autonomous 3D agent designed to master a custom-rendered OpenGL environment through RL. Unlike projects that rely on pre-existing simulators, Rexxy operates within a high-fidelity C++ world built from the ground up, allowing granular control over physics, state representation, and reward mechanisms.
Decoupled Architecture: The system separates the Game Environment (C++/OpenGL) from the AI Brain (Python/Keras), with the two sides communicating over localhost sockets. This allows the agent to train in a high-speed Headless Mode that skips rendering entirely, collecting experience far faster in wall-clock time.
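The environment/brain handshake might look like the sketch below. The message format (newline-terminated action string out, JSON state back) and the field names are assumptions for illustration, not the project's actual protocol; a dummy server thread stands in for the C++ side so the snippet is self-contained.

```python
import json
import socket
import threading

HOST = "127.0.0.1"

def serve_one(srv: socket.socket) -> None:
    """Stand-in for the C++/OpenGL environment: answers one action
    with a JSON state vector (obstacle distance, height, velocity, reward, done)."""
    conn, _ = srv.accept()
    with conn:
        action = conn.recv(64).decode().strip()   # e.g. "JUMP"
        state = {"obstacle_dist": 4.2, "height": 0.0, "velocity": 1.5,
                 "reward": 0.1, "done": False, "echo": action}
        conn.sendall((json.dumps(state) + "\n").encode())
    srv.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, 0))                 # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve_one, args=(srv,), daemon=True).start()

# The Python "brain" side: send an action, block until the next state arrives.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, port))
    cli.sendall(b"JUMP\n")
    state = json.loads(cli.recv(1024).decode())
```

Because the transport is a plain socket, the same Python client works whether the C++ process is rendering to a window or running headless.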
Neural Cognitive Engine: At its heart is a Deep Q-Network (DQN) that maps a dynamic state vector (obstacle distance, vertical height, and velocity) to per-action Q-values; the network is trained toward the Bellman target r + γ·max_a′ Q_target(s′, a′), and the agent acts greedily on the resulting estimates.
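A minimal sketch of the learning loop, showing the Bellman target and the Polyak-averaged target network from the tagline. To stay dependency-free it uses a linear Q-approximator in NumPy rather than the project's Keras network, and all hyperparameters, the action set, and the state values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, tau, lr = 0.99, 0.005, 0.01   # discount, Polyak rate, step size (assumed)

STATE_DIM, N_ACTIONS = 3, 2          # [obstacle_dist, height, velocity]; e.g. RUN / JUMP
W = rng.normal(size=(STATE_DIM, N_ACTIONS)) * 0.1   # online Q weights (linear Q for brevity)
W_tgt = W.copy()                                    # target network, updated by Polyak averaging

def q(s, weights):
    return s @ weights               # Q(s, ·) under a linear approximator

def train_step(s, a, r, s_next, done):
    global W, W_tgt
    # Bellman target: y = r + gamma * max_a' Q_target(s', a'); no bootstrap at terminal.
    y = r + (0.0 if done else gamma * q(s_next, W_tgt).max())
    td_error = y - q(s, W)[a]
    # Semi-gradient update on the online weights only.
    W[:, a] += lr * td_error * s
    # Polyak averaging: the target slowly tracks the online weights.
    W_tgt = tau * W + (1.0 - tau) * W_tgt
    return td_error

s = np.array([4.2, 0.0, 1.5])        # obstacle distance, height, velocity (assumed values)
s_next = np.array([3.9, 0.8, 1.5])
err = train_step(s, a=1, r=0.1, s_next=s_next, done=False)
```

Repeating `train_step` on the same transition drives the TD error toward zero, while the soft target update (rather than a hard periodic copy) keeps the bootstrap target from moving abruptly.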
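The Prioritized Experience Replay mentioned in the tagline can be sketched with proportional prioritization: transitions with larger TD error are replayed more often, and importance-sampling weights correct the resulting bias. This array-backed version trades the usual sum-tree for readability, and the `alpha`/`beta` values are assumed defaults, not the project's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

class PrioritizedReplay:
    """Proportional prioritized replay; real implementations typically
    use a sum-tree for O(log n) sampling instead of these flat arrays."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error=1.0):
        # Small epsilon keeps zero-error transitions sampleable.
        p = (abs(td_error) + 1e-6) ** self.alpha
        self.buffer.append(transition)
        self.priorities.append(p)
        if len(self.buffer) > self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)

    def sample(self, batch_size, beta=0.4):
        probs = np.asarray(self.priorities, dtype=float)
        probs /= probs.sum()
        idx = rng.choice(len(self.buffer), size=batch_size, p=probs)
        # Importance-sampling weights undo the non-uniform sampling bias.
        weights = (len(self.buffer) * probs[idx]) ** -beta
        weights /= weights.max()     # normalize so the largest weight is 1
        return idx, [self.buffer[i] for i in idx], weights

buf = PrioritizedReplay(capacity=100)
for i in range(10):
    buf.add(("s", "a", 0.0, "s2"), td_error=float(i))  # larger error => higher priority
idx, batch, w = buf.sample(4)
```

In training, the sampled weights scale each transition's loss, and priorities are refreshed with the new TD errors after every update.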