Mateus Hiro Nagata


M.Res. in Economics and Decision Sciences - HEC Paris

M.A. in Economic Theory - ITAM

B.A. in Economic Sciences - University of Brasilia

View My LinkedIn Profile

View My Google Scholar Profile

View My GitHub Profile

Algorithmic Game Theory x Reinforcement Learning

Nash equilibrium (NE), a fundamental concept in strategic interaction, presupposes a demanding epistemic condition: common knowledge of rationality and of payoff (or utility) functions. However, utility is not observable in cardinal terms. Moreover, NE presupposes an entity that recommends a strategy to both players. But if no such entity exists, how can people arrive at a stable outcome? From a player’s perspective, one may never feel assured of playing a Nash equilibrium, owing to ignorance of the opponent’s payoffs.
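
For reference, the standard equilibrium condition makes that informational burden explicit: each player must best-respond to the others' strategies, which requires knowing (or correctly conjecturing) both those strategies and one's own payoff function.

```latex
% Standard Nash equilibrium condition: no player gains from a unilateral deviation.
\sigma^{*} = (\sigma^{*}_1,\dots,\sigma^{*}_n) \text{ is a NE iff }\quad
u_i(\sigma^{*}_i,\sigma^{*}_{-i}) \;\ge\; u_i(\sigma_i,\sigma^{*}_{-i})
\qquad \forall i,\ \forall \sigma_i .
```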

The learning perspective in Game Theory addresses this gap. Algorithms such as Experience-Weighted Attraction and (stateless) Q-learning model boundedly rational agents that iteratively refine their strategies to improve performance. Currently, I am investigating the behavior of these algorithms in relation to correlated equilibrium and the effect of noise in payoffs; a minimal sketch of the stateless setup appears below.
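
The sketch below is my own illustration, not code from the research: the payoff matrix, learning rate, and exploration rate are assumed values. Two ε-greedy stateless Q-learners repeatedly play a 2x2 coordination game and update their action values from realized payoffs alone, with no access to the opponent's utility function.

```python
# Minimal sketch (illustrative assumptions): two stateless Q-learners in a
# repeated 2x2 coordination game, learning from realized payoffs only.
import numpy as np

rng = np.random.default_rng(0)

# Payoff bimatrix: payoffs[a0, a1] = (payoff to player 0, payoff to player 1).
payoffs = np.array([[[2, 2], [0, 0]],
                    [[0, 0], [1, 1]]])

alpha, epsilon, T = 0.1, 0.1, 5000   # learning rate, exploration rate, rounds
Q = [np.zeros(2), np.zeros(2)]       # one Q-vector per player (no state)

for t in range(T):
    # Epsilon-greedy action choice for each player.
    actions = [rng.integers(2) if rng.random() < epsilon else int(np.argmax(Q[i]))
               for i in range(2)]
    rewards = payoffs[actions[0], actions[1]]
    # Q-update: nudge each player's value estimate toward the realized payoff.
    for i in range(2):
        a = actions[i]
        Q[i][a] += alpha * (rewards[i] - Q[i][a])

print("Learned Q-values:", Q)
```

One way to probe the noisy-payoff question in this toy setting would be to add zero-mean noise to `rewards` before the update and observe how the learned action values and the resulting play distribution change.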