generated from mwc/lab_tic_tac_toe
Checkpoint 3
As I worked through this lab and the video, I became more aware of the thought process that goes into tic-tac-toe. However, it was not until I saw all the reward possibilities for the initial state of the board, and then made the computer play itself, that I appreciated how complex the game is and how many possibilities there are on a relatively simple board. I am also wondering: when the computer plays itself, does the game always end in a tie? I suppose I could write a program that plays the game a set number of times, records each outcome in a list, and then counts the wins, losses, and ties for a particular player.
This commit is contained in:
parent 2d470c3c89
commit 48dd6717e4

notes.md
```diff
@@ -30,10 +30,21 @@ and it's your turn, which action would you take? Why?
 
 [tic-tac-toe board diagram]
 
+View 1 - Put X in space 5, to win the game.
+
+View 2 - Put X in space 5, to prevent the O's from winning.
+
+View 3 - Put the X in space 0, so there are then 2 possible ways to get 3 in a row on your next turn (space 3 or space 6).
+
+View 4 - Put the X in space 4, to block O from being able to get 3 in a row down the middle.
+
 ### Initial game state
 
 You can get the initial game state using game.get_initial_state().
 
 What is the current and future reward for this state? What does this mean?
 
+If state is set to game.get_initial_state(), a very large (but nonetheless finite) list of possible game outcomes is printed to the terminal. This means that there are many possibilities for X to win, for O to win, or for the game to end in a tie.
+
+I'm not sure if there is a way to see which reward (0, 1, or -1) is more likely. For example, is there a way to see whether the player who goes first has more paths to winning? I would think so, but it would be nice to know how much more likely!
```
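On the question of which reward is more likely: tic-tac-toe is small enough to enumerate outright. Here is a self-contained sketch (independent of the lab's TTTGame API) that counts the terminal outcome of every legal move sequence from the empty board:

```python
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Returns "X" or "O" if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_outcomes(board="." * 9, player="X"):
    """Returns (x_wins, o_wins, ties) across every legal move sequence."""
    w = winner(board)
    if w == "X":
        return (1, 0, 0)
    if w == "O":
        return (0, 1, 0)
    if "." not in board:          # full board, no winner
        return (0, 0, 1)
    x_wins = o_wins = ties = 0
    nxt = "O" if player == "X" else "X"
    for i, v in enumerate(board):
        if v == ".":              # try every open space
            x, o, t = count_outcomes(board[:i] + player + board[i + 1:], nxt)
            x_wins, o_wins, ties = x_wins + x, o_wins + o, ties + t
    return (x_wins, o_wins, ties)

print(count_outcomes())  # (131184, 77904, 46080)
```

So the first player really does have more paths to winning: of the 255,168 possible move sequences, X wins 131,184, O wins 77,904, and 46,080 end in a tie, roughly a 1.7-to-1 edge for moving first.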
```diff
@@ -2,8 +2,8 @@ from ttt.game import TTTGame
 from ttt.view import TTTView
 from ttt.player import TTTHumanPlayer, TTTComputerPlayer
 
-player0 = TTTHumanPlayer("Player 1")
-player1 = TTTHumanPlayer("Player 2")
+player0 = TTTComputerPlayer("Player 1")
+player1 = TTTComputerPlayer("Player 2")
 game = TTTGame()
 view = TTTView(player0, player1)
```
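With both players now TTTComputerPlayers, running the script plays a complete game with no human input; looping it (as in the tally sketch above) is one way to check how often self-play ends in a tie.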
```diff
@@ -1,5 +1,5 @@
 from click import Choice, prompt
-from strategy.random_strategy import RandomStrategy
+from strategy.lookahead_strategy import LookaheadStrategy
 from ttt.game import TTTGame
 import random
```
```diff
@@ -24,10 +24,10 @@ class TTTComputerPlayer:
     def __init__(self, name):
         "Sets up the player."
         self.name = name
-        self.strategy = RandomStrategy(TTTGame())
+        self.strategy = LookaheadStrategy(TTTGame(), deterministic=False)
 
     def choose_action(self, state):
-        "Chooses a random move from the moves available."
+        "Chooses the best move from the moves available."
         action = self.strategy.choose_action(state)
         print(f"{self.name} chooses {action}.")
         return action
```
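The diff swaps RandomStrategy for LookaheadStrategy with deterministic=False. The lab's implementation isn't shown in this commit, but a lookahead strategy for a two-player zero-sum game is typically minimax. A rough sketch of the shape such a class might take follows; the game-object methods it calls (get_actions, get_next_state, is_over, get_reward) are illustrative assumptions, not the lab's actual API:

```python
import random

class SimpleLookaheadStrategy:
    """A guess at the shape of a lookahead strategy: full-depth minimax.

    Assumes the injected game object provides get_actions(state),
    get_next_state(state, action), is_over(state), and get_reward(state),
    with rewards scored from the maximizing player's point of view.
    """

    def __init__(self, game, deterministic=True):
        self.game = game
        self.deterministic = deterministic

    def choose_action(self, state, maximizing=True):
        """Picks an action with the best minimax value."""
        values = {action: self._value(self.game.get_next_state(state, action),
                                      not maximizing)
                  for action in self.game.get_actions(state)}
        best = max(values.values()) if maximizing else min(values.values())
        best_actions = [a for a, v in values.items() if v == best]
        if self.deterministic:
            return best_actions[0]
        # deterministic=False: break ties randomly, so self-play varies.
        return random.choice(best_actions)

    def _value(self, state, maximizing):
        """Recursively evaluates a state by searching to the end of the game."""
        if self.game.is_over(state):
            return self.game.get_reward(state)
        children = [self._value(self.game.get_next_state(state, a), not maximizing)
                    for a in self.game.get_actions(state)]
        return max(children) if maximizing else min(children)
```

With deterministic=False the player still never picks a losing move; it only varies which of the equally good moves it takes, which keeps self-play games from being identical every run.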