

Goal: The agent will be able to play 1v1 and perform better than a random agent.

Trial results (fixed view of the stadium):
- Algorithm: Policy Gradients (no critic)
- Input: preprocessed screenshot (fixed view of the stadium)
- Actions: Left, Right, Left Heavy, Right Heavy, Left Light, Right Light, Nothing
- Rewards: -time, dealt damage, -(received damage), enemy died, -(player died)
- Simplifications: trained on only one character, limited number of actions, only uses fists, only one stadium used, only fights one character, fighting an easy bot
- Network building block: Conv(filters, kernel_size) = Conv2D(filters, kernel_size=kernel_size, strides=2) -> ReLU -> BatchNorm (see the sketch below this list)
- Episodes of Policy Gradient Agent (convergence episodes): 180
- Episodes of Random Agent (exploring): 5
- Epochs (number of complete gradient steps per episode): 1
- Mini-batch size (sample size from all experience): 3200

Trial results (follow view of the player):
- Input: preprocessed screenshot (follow view of the player with its immediate surroundings)
- Actions: Left, Right, Nothing, Left Heavy, Right Heavy, Left Light, Right Light (jumps are performed by heavy attacks while in the air)
- Simplifications: trained on only one character, limited number of actions, only one stadium used, only fights one character, fighting an easy bot
- Episodes of Policy Gradient Agent (convergence episodes): 80
- Mini-batch size (sample size from all experience): 12800
- Conclusion: The agent is, on average, better than the random agent, but not by much. However, progress was visible, suggesting that training the agent for a longer period would achieve the goal much more clearly.
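The repository's training code is not reproduced here, but the sketch below illustrates the setup listed above: the stated Conv2D -> ReLU -> BatchNorm building block stacked into a small softmax policy over the seven actions, trained with vanilla policy gradients (no critic) on mini-batches sampled from stored experience. This is a minimal sketch assuming TensorFlow/Keras; the filter counts, input resolution, discount factor, and optimizer are assumptions, not values taken from the published code.

```python
# Minimal sketch (not the repository's code), assuming TensorFlow/Keras.
# Filter counts, input shape, and gamma are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_ACTIONS = 7  # Left, Right, Left Heavy, Right Heavy, Left Light, Right Light, Nothing

def conv_block(x, filters, kernel_size):
    """Conv(filters, kernel_size) = Conv2D(strides=2) -> ReLU -> BatchNorm."""
    x = layers.Conv2D(filters, kernel_size=kernel_size, strides=2)(x)
    x = layers.ReLU()(x)
    return layers.BatchNormalization()(x)

def build_policy(input_shape=(84, 84, 1)):
    """Softmax policy over the action set, fed a preprocessed screenshot."""
    obs = layers.Input(shape=input_shape)
    x = conv_block(obs, 32, 8)
    x = conv_block(x, 64, 4)
    x = conv_block(x, 64, 3)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    probs = layers.Dense(NUM_ACTIONS, activation="softmax")(x)
    return Model(obs, probs)

def discounted_returns(rewards, gamma=0.99):
    """Monte Carlo returns G_t for one episode (no critic, so no baseline)."""
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return np.asarray(out[::-1], dtype=np.float32)

def policy_gradient_step(model, optimizer, states, actions, returns):
    """One gradient step over a mini-batch sampled from all stored experience."""
    actions = tf.cast(actions, tf.int32)
    with tf.GradientTape() as tape:
        probs = model(states, training=True)
        idx = tf.stack([tf.range(tf.shape(actions)[0]), actions], axis=1)
        log_probs = tf.math.log(tf.gather_nd(probs, idx) + 1e-8)
        loss = -tf.reduce_mean(log_probs * returns)  # REINFORCE objective
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return float(loss)
```

With one epoch per episode and a mini-batch of 3200 samples, as listed above, policy_gradient_step would be called once per episode on states, actions, and discounted returns drawn from the stored experience.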
BRAWLHALLA COMBOS LEARNING CODE
The purpose of this repository is to publish the results achieved while attempting a machine learning approach to creating a bot for Brawlhalla. The code for some of the trials is published here, but the Brawlhalla training environments are not. All actions were performed with keypresses, and the state of the game was observed through screenshots (a rough interface sketch follows below). No hacking of the game was performed, meaning the game's inner state was never touched. This bot was created and published solely to show the potential of machine learning algorithms in a popular game.

Disclaimer: This bot is not intended for any competitive context, where such a bot would be deemed cheating.
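Because the Brawlhalla training environments themselves are not published, the sketch below only shows what the keypress-and-screenshot interface described above could look like, assuming pyautogui for key presses and mss plus OpenCV for screen capture. The key bindings, capture region, and output resolution are all assumptions made for illustration.

```python
# Minimal interface sketch (NOT the unpublished training environment).
# Key bindings, capture region, and preprocessing size are assumptions.
import time
import numpy as np
import cv2
import mss
import pyautogui

# Hypothetical mapping from the listed actions to key combinations.
ACTION_KEYS = {
    "Left": ["left"],
    "Right": ["right"],
    "Left Heavy": ["left", "s"],
    "Right Heavy": ["right", "s"],
    "Left Light": ["left", "a"],
    "Right Light": ["right", "a"],
    "Nothing": [],
}

CAPTURE_REGION = {"top": 0, "left": 0, "width": 1280, "height": 720}  # assumed game window

def send_action(action, hold=0.05):
    """Perform an action purely through keypresses (no game memory is touched)."""
    keys = ACTION_KEYS[action]
    for k in keys:
        pyautogui.keyDown(k)
    time.sleep(hold)
    for k in reversed(keys):
        pyautogui.keyUp(k)

def grab_observation(region=CAPTURE_REGION):
    """Screenshot the game window and preprocess it into a small grayscale frame."""
    with mss.mss() as sct:
        frame = np.asarray(sct.grab(region))[:, :, :3]  # drop alpha channel
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (84, 84), interpolation=cv2.INTER_AREA)
    return small.astype(np.float32) / 255.0  # normalised network input
```

Driving the game through the screen and keyboard keeps it a black box, which is why no access to its inner state is needed.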
