Introduction: Multiple decision-making systems interact to shape human behavior. The goal-directed and habitual systems are the two most widely studied; in reinforcement learning (RL) they are modeled as model-based (MB) and model-free (MF) learning, respectively. Within an RL framework, human behavior can be described as a combination of these two paradigms, obtained as a weighted sum of the action values of the two systems. The weighting parameter is typically estimated by maximum likelihood (ML) or maximum a posteriori (MAP) estimation.
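As a minimal sketch of the weighted-sum combination described above (the symbols $w$, $Q_{\mathrm{MB}}$, and $Q_{\mathrm{MF}}$ are illustrative and not necessarily the paper's notation), the combined action value can be written as

$$Q(s,a) = w\,Q_{\mathrm{MB}}(s,a) + (1-w)\,Q_{\mathrm{MF}}(s,a), \qquad 0 \le w \le 1,$$

where $w = 1$ corresponds to purely model-based behavior, $w = 0$ to purely model-free behavior, and $w$ is the weighting parameter estimated by ML or MAP.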
Methods: In this study, we employ RL agents that combine MB and MF decision-making to perform the well-known Daw two-stage task. ML and MAP estimation yield unreliable estimates of the weighting parameter, often with a large bias toward extreme values. To reduce the estimation error, we propose k-nearest neighbor (KNN) as an alternative nonparametric estimator, based on a set of 20 features that we devise and extract from the behavior of the RL agent. The proposed method is evaluated in simulated experiments (see the sketch below).
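A minimal sketch of the nonparametric estimator, assuming a KNN regressor fit on feature vectors extracted from simulated agents whose true weighting parameters are known; the random placeholder data, feature values, and parameter choices below are illustrative, not the paper's:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training set: each row is a 20-dimensional feature vector
# summarizing one simulated agent's choices on the two-stage task, and
# y_train holds that agent's true MB/MF weighting parameter w.
rng = np.random.default_rng(0)
X_train = rng.random((500, 20))   # placeholder behavioral features
y_train = rng.random(500)         # placeholder true weights in [0, 1]

# k-nearest-neighbor regression: the estimate for a new subject is the
# distance-weighted average of the weights of the k most similar agents.
knn = make_pipeline(
    StandardScaler(),
    KNeighborsRegressor(n_neighbors=10, weights="distance"),
)
knn.fit(X_train, y_train)

# Estimate w for a new (simulated or human) subject from its feature vector.
X_new = rng.random((1, 20))
w_hat = knn.predict(X_new)[0]
print(f"Estimated weighting parameter: {w_hat:.3f}")
```

Because the training agents are simulated, their true weighting parameters are available, which is what allows the nonparametric estimator to be fit and its estimation error to be assessed.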
Results: The results show that the proposed method reduces both the bias and the variance of the estimation error. We also examine human behavioral data from previous studies. The proposed method enables the prediction of indices such as age, gender, IQ, gaze dwell time, and psychiatric disorder indices that are not captured by the traditional method.
Conclusion: In brief, the proposed method increases the reliability of the estimated parameters and enhances the applicability of RL paradigms in clinical trials.
Study type: Original | Article subject: Computational Neuroscience
Received: 1402/7/8 | Accepted: 1403/7/15 | Published: 1404/6/10