Quoridor and Artificial Intelligence
Jeremy Alberth
Quoridor
Quoridor is played on a 9x9 grid. Starting positions are shown for two players.
Red moves his pawn down. The objective for both players is to be first to reach the opposite side.
Blue moves his pawn up. On a move, players may either move their pawn or place a wall.
Red places a wall horizontally in front of blue's pawn. Each wall is two squares long, blocking movement between two pairs of adjacent squares.
Blue moves his pawn left so that he is no longer impeded on his journey upward.
Blue places a wall vertically to the right of red's pawn. Wall orientations can be horizontal or vertical.
Red moves his pawn down.
Blue places a wall horizontally in front of red's pawn. Each player is limited to ten walls.
Red moves his pawn left, continuing on his shortest path to his goal row.
Blue places a wall to the left of red's pawn, continuing his devious wall-placing behavior.
Blue eventually wins the game when his pawn reaches the opposite side of the board.
My Work
Created an implementation of Quoridor.
Implemented AI players using the minimax algorithm.
Modified minimax and AI strategies.
Analyzed performance of computer players against one another and against a random player.
Minimax
Minimax is a method which finds the best move by using adversarial tree search. The game tree represents every possible move for both players.
Branching factor is the number of moves at each step (here, branching factor = 3).
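The poster does not state its implementation language, so the following is a minimal Python sketch of adversarial tree search over an abstract game tree; the toy tree, its leaf values, and the `children`/`value` callables are illustrative assumptions, not the author's code.

```python
def minimax(state, maximizing, children, value):
    """Return the best achievable value via adversarial tree search.

    `children(state)` yields successor states; `value(state)` scores a
    terminal state from the maximizing player's point of view.
    """
    kids = list(children(state))
    if not kids:                      # leaf: score it directly
        return value(state)
    scores = (minimax(k, not maximizing, children, value) for k in kids)
    return max(scores) if maximizing else min(scores)

# Toy tree with branching factor 3, matching the slide's example.
tree = {"root": ["a", "b", "c"], "a": [], "b": [], "c": []}
leaf_values = {"a": 1, "b": 5, "c": 2}

best = minimax("root", True,
               children=lambda s: tree.get(s, []),
               value=lambda s: leaf_values[s])
# best == 5: the maximizing player picks the most valuable branch
```

With deeper trees the same recursion alternates between maximizing and minimizing levels, which is what "every possible move for both players" amounts to in practice.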
Static Evaluation
In complex games, a depth-limited search will be used.
Upon reaching a depth cutoff, the search will employ a static evaluation function.
This function must give a value to a game state, often based on the board position and the player to move.
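The cutoff described above can be sketched by threading a depth counter through the recursion; the integer states and the trivial evaluator below are stand-ins for a real board and static evaluation function, not the poster's implementation.

```python
def minimax_limited(state, depth, maximizing, children, evaluate):
    """Depth-limited minimax: on reaching the cutoff (or a true leaf),
    fall back to a static evaluation instead of searching deeper."""
    kids = list(children(state))
    if depth == 0 or not kids:
        return evaluate(state)        # static evaluation at the frontier
    scores = (minimax_limited(k, depth - 1, not maximizing, children, evaluate)
              for k in kids)
    return max(scores) if maximizing else min(scores)

# Toy stand-ins: states are ints, each with two children; the "static
# evaluator" simply returns the state itself.
result = minimax_limited(1, 2, True,
                         children=lambda n: [n * 2, n * 2 + 1],
                         evaluate=lambda n: n)
# result == 6
```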
Managing the Tree
Branching factor is initially 132.
5 moves ahead: 132^5 = 40,074,642,432 states.
Minimax must be modified to make use of a restricted move set.
The branching factor can be reduced to a manageable size of ~10.
5 moves ahead: 10^5 = 100,000 states.
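The figures above can be checked directly:

```python
# Arithmetic behind the slide: full vs. restricted branching at 5 ply.
full, restricted, depth = 132, 10, 5
assert full ** depth == 40_074_642_432    # intractable to search naively
assert restricted ** depth == 100_000     # easily searchable
```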
Wall Selection
The best strategy for shrinking the move set is reducing the number of walls considered.
Use a heuristic to determine which walls to keep: walls close to or directly next to the opposing player are a way to prevent an opponent's quick victory.
A player might also not consider wall placements by the opponent at all.
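One way to realize the "walls near the opponent" heuristic is to keep only wall anchors within a small radius of the opposing pawn. The coordinate scheme, the 8x8 anchor grid, and the radius of 1 are assumptions for illustration; the poster does not specify its exact rule.

```python
def candidate_walls(opponent_pos, radius=1):
    """Yield (row, col, orientation) wall placements whose anchor lies
    within `radius` squares of the opponent's pawn (Chebyshev distance)."""
    orow, ocol = opponent_pos
    for row in range(8):              # wall anchors live on an 8x8 grid
        for col in range(8):
            if max(abs(row - orow), abs(col - ocol)) <= radius:
                yield (row, col, "h")
                yield (row, col, "v")

walls = list(candidate_walls((4, 4)))
# A radius of 1 keeps a 3x3 block of anchors: 9 * 2 orientations = 18
# candidate walls, far fewer than the full 128.
```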
Problem
Solution
Computer players may not consider wall placements by the opponent, so considerations should be made for repeated states.
Minimax can avoid repeating game states by assigning undesirable values to them.
The game can also prevent loops by forcing a draw after a certain number of repeated states.
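Both repeated-state measures above can be sketched in a few lines; the penalty value and the draw threshold are illustrative assumptions, not the poster's parameters.

```python
from collections import Counter

REPEAT_PENALTY = -1000   # "undesirable value" assigned to repeats
DRAW_AFTER = 3           # repetitions before the game is declared drawn

def score_with_repeats(state, seen, evaluate):
    """Score `state`, penalizing repeats and forcing a draw when a state
    recurs DRAW_AFTER times. `seen` is a Counter of visited states."""
    seen[state] += 1
    if seen[state] >= DRAW_AFTER:
        return 0                     # forced draw: neutral value
    if seen[state] > 1:
        return REPEAT_PENALTY        # discourage revisiting this state
    return evaluate(state)
```

A search that scores positions this way will prefer any reasonable progress over cycling between the same two positions.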
Strategies and Evaluations
Strategies for computer players were reliant on their static evaluators.
[P] Shortest path: considered shortest-path values for both players.
[B] Bird's eye: considered the distance to the goal row without regard to walls.
[C] Close distance: considered only one player's path.
[PR] Shortest path with a random element.
[BR] Bird's eye with a random element.
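The two core metrics behind these evaluators can be sketched as follows. The board encoding is an assumption: squares are (row, col) pairs and `blocked` holds pairs of adjacent squares a wall separates (a real two-square wall would contribute two such pairs).

```python
from collections import deque

def shortest_path(start, goal_row, blocked, size=9):
    """[P]-style metric: BFS distance to the goal row, respecting walls."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        (r, c), d = frontier.popleft()
        if r == goal_row:
            return d
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < size and 0 <= nc < size and nxt not in seen
                    and ((r, c), nxt) not in blocked
                    and (nxt, (r, c)) not in blocked):
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None  # sealed in (illegal in Quoridor: a path must always exist)

def birds_eye(start, goal_row):
    """[B]-style metric: straight-line row distance, ignoring walls."""
    return abs(start[0] - goal_row)
```

A [P]-style evaluator would typically combine both players' values, e.g. opponent's path length minus one's own, while [C] uses only one of them.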
Do We Consider Opponent's Wall Placement?
No.

          P, wall   B, wall   C, wall   PR, wall   BR, wall
P, no       183       199       167       195        199
B, no        21       117        74       147        159
C, no       133       172       198       187        196
PR, no       23        82        74       136        133
BR, no       14        53        72       137        120
AI Effectiveness
AI Outcomes
Strategies with random elements were the worst, followed by the bird's eye strategy.
Shortest path and "close distance" strategies outperformed the others.
        P     B     C    PR    BR
P      48    98    55    98    99
B       0    42    31    67    79
C      43    88    47    98    97
PR      1    32     0    48    55
BR      0    15     3    39    42
Data Trends
AIs using the wall heuristic were not successful; they considered walls that were not useful.
The repeated-state flag generated more non-draw outcomes.
Shortest path was the most effective.
Players not considering opponent's walls were able to path more successfully.
Randomness added variation but often removed effectiveness.
Previous Quoridor Software Work
Xoridor (Java Quoridor Project)
Glendenning: genetic algorithms research
Mertenz: AI comparisons
These used different board representations, strategies, evaluations, and random elements.