Improved Counterfactual Regret Minimization with Time-Series Differential Learning for Incomplete Information Games
Abstract
Traditional strategy-recommendation algorithms for incomplete-information games suffer from low computational efficiency and poor quality of the recommended strategies. To address this, a Counterfactual Regret Minimization (CFR) algorithm is designed that introduces time-series differential learning into incomplete-information games, allowing strategies to be adjusted faster, reducing oscillation during strategy updates, and accelerating convergence. It is further combined with a decision-judgment model biased toward opponent information, which updates feature vectors in real time and dynamically adjusts the strategy to adapt to changes in the opponent's play, yielding the improved CFR algorithm (ICFR-OG). The study was tested on data collected from the Texas Hold'em Robot Contest organized by the International Association for Artificial Intelligence from 2010 to 2016. The experimental results showed that after 20,000 games, the average return of ICFR-OG was 3.18, significantly higher than those of the mainstream baselines VGG32, Faster RCNN, CFR, and XGBoost, whose average returns were -1.73, 0.24, 0.69, and 2.35, respectively. The cumulative computation time of the proposed method was only 1,967 ms, the lowest among the compared algorithms, while CFR's was the highest. These results are useful for improving the performance of Texas Hold'em educational games and for handling a wider range of incomplete-information games.
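To make the abstract's idea concrete, the following is a minimal single-information-set sketch of CFR's regret-matching step, with a temporal-difference-style smoothing factor standing in for the time-series differential term the paper describes. The function names, the `alpha` parameter, and the toy utilities are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def regret_matching(regrets):
    """Standard CFR regret matching: play actions in proportion to their
    positive cumulative regret; fall back to uniform if none is positive."""
    positive = np.maximum(regrets, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    return np.full_like(regrets, 1.0 / len(regrets))

def td_regret_update(regrets, action_utilities, strategy, alpha=0.1):
    """One regret update with a TD-style smoothing factor `alpha`
    (a hypothetical stand-in for the paper's time-series differential
    learning): the instantaneous regret is blended into the cumulative
    regret rather than added outright, damping oscillation across
    iterations."""
    node_value = float(np.dot(strategy, action_utilities))
    instantaneous = action_utilities - node_value
    return (1 - alpha) * regrets + alpha * instantaneous

# Toy two-action example; utilities are made up for illustration.
regrets = np.zeros(2)
utilities = np.array([1.0, -1.0])
for _ in range(50):
    strategy = regret_matching(regrets)
    regrets = td_regret_update(regrets, utilities, strategy)
print(regret_matching(regrets))  # the strategy concentrates on the better action
```

In a full CFR implementation this update would run at every information set of the game tree, weighted by the counterfactual reach probabilities; the sketch above only shows the per-node regret mechanics.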
DOI: https://doi.org/10.31449/inf.v49i5.8305

This work is licensed under a Creative Commons Attribution 3.0 License.