I updated the notes and strategy for the Computer Player.

Checkpoint 3:
I think human cognition is pretty similar to the computer learning rules,
but we don't often think about rules of cognition the way we think about
rules for computer thinking, and our understanding of the brain is probably
not as complete as our understanding of computers. Writing a program like
this is like starting with a brain that hasn't had any previous inputs,
though it's hard to imagine a brain that has never had any inputs. I also
think memory is probably the most distinctive part of human cognition
compared to the computer analog, because memory formation is affected by
emotion, which doesn't really have a computer analog yet. The same goes for
the effects of aging and injury.
root 2024-12-12 12:39:45 -05:00
parent 13743e56d3
commit 70fdcaac79
3 changed files with 12 additions and 3 deletions


@@ -45,9 +45,17 @@ and it's your turn, which action would you take? Why?
[Four example board states shown here as ASCII-art grids]
In the first state shown, I would put an X in spot 5 to get a winning combo.
In the second state shown, I would put an X in spot 5 to prevent O from winning next turn.
In the third state shown, I would put an X in spot 0 because it creates a fork: X gets a winning combo on the next turn regardless of where O plays.
In the fourth state shown, I would put an X in spot 4 (the center) to have more possible ways to win: half of the eight winning combos contain spot 4 (see the check below).
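As a quick check of that last claim (this snippet is mine, not part of the assignment files; spots are indexed 0-8, row by row):

```python
# The eight winning lines of tic-tac-toe, using spot indices 0-8 (row-major).
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

print(sum(1 for combo in WINS if 4 in combo))  # 4 -> half of the 8 combos
```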
### Initial game state
You can get the initial game state using `game.get_initial_state()`.
What is the current and future reward for this state? What does this mean?
The current and future reward for state = {"board": ['-','O','O','X','X','-','-','-','-'], "player_x": True} is 1. This means that X can force a win: X plays spot 5, completing the middle row (spots 3-4-5).
The initial state is:
{'board': ['-', '-', '-', '-', '-', '-', '-', '-', '-'], 'player_x': True}
For the initial state, the current and future reward is 0. This means that neither player can force a win: with optimal play on both sides, the game ends in a draw.
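To make "current and future reward" concrete, here is a minimal self-contained minimax sketch that reproduces both answers above. It is an illustration only, not the course's `LookaheadStrategy` implementation (whose internals aren't shown in this commit); the names `WINS`, `winner`, and `reward` are mine.

```python
# The eight winning lines of tic-tac-toe, using spot indices 0-8 (row-major).
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    # Return 'X' or 'O' if a line is complete, else None.
    for a, b, c in WINS:
        if board[a] != '-' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def reward(state):
    # Current and future reward from X's perspective, assuming optimal play:
    # +1 if X can force a win, -1 if O can, 0 if best play leads to a draw.
    board, player_x = state['board'], state['player_x']
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, v in enumerate(board) if v == '-']
    if not moves:
        return 0  # board full with no winner: draw
    values = []
    for m in moves:
        child = list(board)
        child[m] = 'X' if player_x else 'O'
        values.append(reward({'board': child, 'player_x': not player_x}))
    # X (maximizer) picks the best value; O (minimizer) picks the worst for X.
    return max(values) if player_x else min(values)

print(reward({'board': ['-','O','O','X','X','-','-','-','-'],
              'player_x': True}))                      # 1: X wins via spot 5
print(reward({'board': ['-'] * 9, 'player_x': True}))  # 0: draw with best play
```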


@@ -3,7 +3,7 @@ from ttt.view import TTTView
from ttt.player import TTTHumanPlayer, TTTComputerPlayer
player0 = TTTHumanPlayer("Player 1")
-player1 = TTTHumanPlayer("Player 2")
+player1 = TTTComputerPlayer("Computer")
game = TTTGame()
view = TTTView(player0, player1)


@@ -1,5 +1,6 @@
from click import Choice, prompt
from strategy.random_strategy import RandomStrategy
+from strategy.lookahead_strategy import LookaheadStrategy
from ttt.game import TTTGame
import random
@@ -24,7 +25,7 @@ class TTTComputerPlayer:
    def __init__(self, name):
        "Sets up the player."
        self.name = name
-        self.strategy = RandomStrategy(TTTGame())
+        self.strategy = LookaheadStrategy(TTTGame(), deterministic=False)

    def choose_action(self, state):
        "Chooses a move using the player's strategy."