Dynamic and Hierarchical Reinforcement Learning in Complex Decision Making
Every moment we make decisions according to the characteristics of the environment and our individual traits. Repeated exposure to stimuli allows us to incorporate information and reduce uncertainty. Human beings have the ability to learn in complex environments, in a permanent search for the best possible outcomes. We attend to signals from our environment that carry inherent values, which can facilitate or hinder the acquisition of strategies to maximize our benefits. At the same time, we attribute values to our expectations and to the feedback we receive as a consequence of our actions. This type of learning has been found to be gradual and scalable, shaped by the individual's exposure to outcomes and evidence, through reinforcement learning (RL). This theoretical approach is one of the most studied aspects of decision making, particularly through the electrical activity of the cerebral cortex. The more complex the environment, the more difficult it is to study our decisions. In circumstances that require either more extensive behavioral routines or more variability, rewards are delayed longer and the best decision is less evident. Feedback processing must hierarchize the available options in order to reach the optimal decision. The goal of this project is to study the neural correlates of RL in complex scenarios. Specifically, we will study two main scenarios: situations in which multiple options lead to complex decisions (the Blackjack game), and situations in which sub-goals are required to obtain rewards (hierarchical learning).
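The gradual, feedback-driven value updating described above can be illustrated with a minimal computational sketch. This is not the project's own model; it is a standard delta-rule (Rescorla-Wagner style) learner on a hypothetical two-option choice task, with all parameter names and values chosen for illustration only:

```python
import random

def run_choice_task(probs, alpha=0.1, epsilon=0.1, n_trials=2000, seed=0):
    """Delta-rule learner on a two-option probabilistic choice task.

    probs: hypothetical reward probability of each option.
    alpha: learning rate controlling how gradually values are updated.
    epsilon: probability of exploring a random option.
    Returns the learned value estimate for each option.
    """
    rng = random.Random(seed)
    values = [0.0] * len(probs)
    for _ in range(n_trials):
        # epsilon-greedy choice: mostly exploit the current value estimates
        if rng.random() < epsilon:
            choice = rng.randrange(len(probs))
        else:
            choice = max(range(len(probs)), key=lambda a: values[a])
        # probabilistic feedback from the environment
        reward = 1.0 if rng.random() < probs[choice] else 0.0
        # the prediction error (reward - expectation) drives gradual learning
        values[choice] += alpha * (reward - values[choice])
    return values
```

Over many trials the value estimates drift toward the true reward probabilities, so the better option is eventually preferred; the prediction-error term in the update is the quantity whose neural correlates (e.g., feedback-related cortical activity) this line of research typically examines.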
Keywords: Reinforcement Learning, Decision Making, Feedback Processing.