Playing Battleship with Bayesian Search Theory, Thompson Sampling, and Approximate Bayesian Computation
As a child, I spent many hours playing Battleship against my Mom or brother to pass the time.
Battleship is a classic game of incomplete information that involves placing ships of different lengths on a square grid (\(10 \times 10\) in the most popular version) and trying to guess the location of all of the opponent’s ships (sinking them) before the opponent does the same.
Many years after I stopped playing Battleship with my family, I developed an interest in Bayesian search theory after learning about a firm that specializes in search consulting for the US government at a jobs-in-industry panel in graduate school. I had already happily lined up a job in e-commerce optimization, but the idea of using statistics to guide potentially expensive searches piqued my intellectual curiosity. I e-mailed the presenter afterward for more information and he recommended [Theory of Optimal Search]^(Stone, Lawrence D. Theory of optimal search. Elsevier, 1976.) as the essential reference in the field. The book is interesting, if a bit dry (it is about mathematical optimization, after all), and I was always curious about the connection between the search theory it develops and Battleship, which has always stood out in my mind as a basic search problem. Though most of the theory in this book is not directly applicable to Battleship for various reasons, a continuing theme is allocating search effort to the areas with the largest posterior probability of containing the target object given the search results so far. A 1985 result of [Assaf and Zamir]^(Assaf, David, and Shmuel Zamir. “Optimal sequential search: a Bayesian approach.” The Annals of Statistics 13, no. 3 (1985): 1213-1221.) shows the optimality of this strategy in a situation much closer to that of Battleship.
Inspired by these forays into search theory I have thought idly for many years about constructing a near-optimal Bayesian approach to playing Battleship. This post is the culmination of these thoughts, showing how to use ideas from Bayesian search theory, approximate Bayesian computation (ABC), and Thompson sampling to construct an easily tractable, near optimal strategy for Battleship that only requires simulating random Battleship boards and not reasoning through hard-coded special cases to achieve strong play.
Playing Battleship
First we make the necessary Python imports and do a bit of housekeeping.
%matplotlib inline
from abc import ABC, abstractmethod
import datetime
from IPython.display import HTML
from itertools import product
import multiprocessing as mp
from tqdm import tqdm, trange
from empiricaldist import Pmf
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.animation import FuncAnimation
from matplotlib.colors import LinearSegmentedColormap
from matplotlib.ticker import IndexLocator, StrMethodFormatter
import numpy as np
from scipy import linalg, stats
import seaborn as sns
FIG_WIDTH = 8
FIG_HEIGHT = 6
mpl.rcParams['figure.figsize'] = (FIG_WIDTH, FIG_HEIGHT)

CMAP = LinearSegmentedColormap.from_list(
    'battleship',
    [
        (0.0, 'blue'),
        (0.2, 'orange'),
        (1., 'red')
    ]
)
CMAP.set_bad('w')

sns.set(color_codes=True)

pct_formatter = StrMethodFormatter('{x:.1%}')

SEED = 123456789  # for reproducibility
rng = np.random.default_rng(SEED)
We represent a Battleship board as a sequence of rectangular grids, one per ship. For standard Battleship, that means a game consists of five \(10 \times 10\) grids, the first grid containing a ship of length five, the second containing a ship of length four, and so on.
The following constants are useful for defining a standard game of Battleship. (Note that we are using Hasbro’s names for the ships, revised in 2002.)
GRID_LENGTH = 10

SHIP_NAMES = [
    "Carrier",
    "Battleship",
    "Destroyer",
    "Submarine",
    "Patrol boat"
]
SHIP_SIZES = np.array([
    5,
    4,
    3,
    3,
    2
])
n_ships = SHIP_SIZES.size
The following plots show how a Battleship board is represented as a sequence of five grids, one per ship.
ships = np.zeros((n_ships, GRID_LENGTH, GRID_LENGTH))
ships[0, 0, :5] = 1
ships[1, 4:8, 7] = 1
ships[2, 9, 3:6] = 1
ships[3, 3:6, 1] = 1
ships[4, 5, 8:10] = 1
def plot_board(board, cmap=CMAP,
               cbar=False, cbar_kwargs=None,
               ax=None, **heatmap_kwargs):
    if ax is None:
        _, ax = plt.subplots(figsize=(FIG_WIDTH, FIG_WIDTH))

    heatmap_kwargs.setdefault('vmin', 0)
    heatmap_kwargs.setdefault('vmax', 1)

    board_ = np.atleast_2d(board)
    mask = board_.mask if np.ma.is_masked(board_) else None

    if cbar_kwargs is None:
        cbar_kwargs = {}

    cbar_kwargs.setdefault('format', pct_formatter)
    cbar_kwargs.setdefault('label', "Posterior probability of a hit")

    sns.heatmap(board_, mask=mask, cmap=cmap,
                cbar=cbar, square=True,
                linewidths=1.5, linecolor='k',
                cbar_kws=cbar_kwargs,
                ax=ax, **heatmap_kwargs)

    if board_.shape[0] == 1:
        ax.set_yticklabels([])

    if cbar:
        cbar = ax.figure.axes[-1]
        cbar.set_ylabel(cbar.get_ylabel(), rotation=270)

    return ax
fig, axes = plt.subplots(ncols=n_ships, sharex=True, sharey=True,
                         figsize=(n_ships * 2.5, 2.5))

for (ship, name, ax) in zip(ships, SHIP_NAMES, axes):
    plot_board(ship, ax=ax);
    ax.set_xticklabels([]);
    ax.set_yticklabels([]);
    ax.set_title(name);

fig.tight_layout();
Adding these ship grids gives us the two-dimensional board familiar to anyone who has played Battleship.
def to_board(ships, ship_axis=0):
    return ships.sum(axis=ship_axis)

plot_board(to_board(ships));
The following class implements most of the basics necessary to track the state of a game of Battleship, make guesses, and reveal the contents of guessed spaces.
class Battleship:
    def __init__(self, ships):
        self._ships = ships
        self._turn_revealed = []

    @property
    def _board(self):
        return to_board(self._ships)

    @property
    def grid_length(self):
        return self._board.shape[0]

    @property
    def is_solved(self):
        return self.revealed.sum() == self._board.sum()

    @property
    def revealed(self):
        return to_board(self._revealed_ships)

    @property
    def _revealed_ships(self):
        if self.turns > 0:
            return self._turn_revealed[-1]
        else:
            return np.ma.masked_all_like(self._ships)

    @property
    def ship_sizes(self):
        return self._ships.sum(axis=(1, 2))

    @property
    def sunk(self):
        ship_sizes = self._ships.sum(axis=(1, 2))
        revealed_sizes = (self._revealed_ships
                              .sum(axis=(1, 2))
                              .filled(0))

        return ship_sizes == revealed_sizes

    @property
    def turn_revealed(self):
        return [np.ma.masked_all_like(self._board)] \
            + [to_board(revealed) for revealed in self._turn_revealed]

    @property
    def turns(self):
        return len(self._turn_revealed)
These bookkeeping methods are fairly self-explanatory. An instance of Battleship tracks which squares have been revealed to the opponent using numpy's masked arrays. The revealed grid is masked wherever the opponent has not yet guessed.
The heart of our Battleship class is the guess method, which takes the coordinates to guess and returns a tuple containing whether the guess resulted in a hit or a miss, and the index of the ship sunk as a result of that guess (None if no ship was sunk). (Here we use a dirty trick to add a method to the Battleship class after it has already been defined for expository clarity. Don't do this in practice!)
class Battleship(Battleship):
    def guess(self, i, j):
        if not self.revealed.mask[i, j]:
            raise ValueError(f"{i}, {j} already guessed")
        else:
            prev_sunk = self.sunk

            next_ships = self._revealed_ships.copy()
            next_ships[:, i, j] = self._ships[:, i, j]
            self._turn_revealed.append(next_ships)

            curr_sunk = self.sunk

            if (curr_sunk == prev_sunk).all():
                sunk = None
            else:
                sunk = (curr_sunk & ~prev_sunk).argmax()

            return self._board[i, j], sunk
In addition to this class representing game state and history, we define an abstract class Strategy that generates the guesses necessary to play a game.
class Strategy(ABC):
    @abstractmethod
    def next_guess(self, revealed):
        pass

    def reveal(self, i, j, hit_or_miss, sunk):
        pass
The next_guess method takes a masked array of the board as revealed so far and should return the coordinates of the next spot to guess. The reveal method takes guessed coordinates, an indicator of whether the guess resulted in a hit or miss, and the index of the ship sunk as a result of that guess (or None, as with guess above). This method allows a Strategy to update internal state based on the result of guesses. While some simple strategies can infer the next guess purely based on the state of the revealed board, we will see that it is useful when taking a Bayesian approach to maintain state inside a strategy.
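To make this interface concrete, here is a minimal strategy (an illustrative sketch, not part of the original code) that guesses uniformly at random among the unguessed cells.

class RandomStrategy(Strategy):
    def __init__(self, rng=None):
        self._rng = rng if rng is not None else np.random.default_rng()

    def next_guess(self, revealed):
        # revealed.mask is True wherever a cell has not been guessed yet
        unguessed_i, unguessed_j = np.nonzero(revealed.mask)
        k = self._rng.integers(len(unguessed_i))

        return unguessed_i[k], unguessed_j[k]

Since the negative hypergeometric analysis below characterizes random guessing exactly, we never actually need to simulate this strategy.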
The play function takes a configuration of ships and a Strategy and plays the corresponding game of Battleship.
def play(ships, strategy, progress_bar=False):
    game = Battleship(ships)

    if progress_bar:
        pbar = tqdm(total=int(ships.sum()))

    while not game.is_solved:
        i, j = strategy.next_guess(game.revealed)
        hit_or_miss, sunk = game.guess(i, j)
        strategy.reveal(i, j, hit_or_miss, sunk)

        if progress_bar and hit_or_miss == 1:
            pbar.update()

    if progress_bar:
        pbar.close()

    return game
The rest of this post progresses from strategies for simplified versions of Battleship to finally showing how to use ABC and Thompson sampling to achieve near optimal play.
Manual guessing
This framework allows us to manually play Battleship by inputting user guesses and watching how the board evolves.
class ManualStrategy(Strategy):
    def next_guess(self, revealed):
        ax = plot_board(revealed)
        ax.set_title("Currently revealed")
        plt.show()

        i = int(input("Enter row to guess: "))
        j = int(input("Enter column to guess: "))

        return i, j
try:
    play(ships, ManualStrategy())
except KeyboardInterrupt:
    print("Game ended")
Enter row to guess: 5
Enter column to guess: 4
Enter row to guess: 0
Enter column to guess: 3
Enter row to guess: 1
Enter column to guess: 3
Enter row to guess: 0
Enter column to guess: 2
Enter row to guess: 0
Enter column to guess: 4
Game ended
This strategy is not really useful for our purposes, but it provides a good illustration of how the game is usually played. The white squares here indicate the cells whose contents are unknown because we have not yet guessed them.
Row with one ship
We begin by simplifying the idea of Battleship to the most basic possible setting of a single row containing a single ship.
The following function generates all possible rows of a given length containing a ship of a given size.
def get_all_ship_rows(grid_length, ship_size):
    i, j = np.indices((grid_length, grid_length))[:, :-ship_size + 1]

    return 1 * (i <= j) & (j < i + ship_size)
For a row of length three and a ship of size two (3/2), there are only two such rows.
all_rows_3_2 = get_all_ship_rows(3, 2)

ax = plot_board(all_rows_3_2)
ax.set_title("All rows 3/2");
For a row of length ten and a ship of size four (10/4), there are seven such rows.
all_rows_10_4 = get_all_ship_rows(10, 4)

ax = plot_board(all_rows_10_4)
ax.set_title("All rows 10/4");
We will compare our Bayesian strategies to several benchmark strategies, starting with the simplest possible strategy: random guessing.
Random guessing
The random guessing approach to Battleship is equivalent to the following urn problem from classic probability theory. Imagine an urn containing 100 balls (corresponding to cells on the board), 17 of which are red (corresponding to cells covered by ships), and 83 of which are blue. The number of turns required to solve Battleship through random guessing has the same distribution as the number of balls of any color that are drawn before all 17 red balls have been drawn. It is well known (in the right circles, at least) that this corresponds to the negative hypergeometric distribution.
The following function returns the probability mass function (pmf) of the appropriate negative hypergeometric distribution given a grid and ship size.
def get_random_guess_dist(grid_size, ship_size, n_hit=None):
    if n_hit is None:
        n_hit = ship_size

    support = np.arange(ship_size, grid_size + 1)
    nhg = stats.nhypergeom(grid_size,
                           grid_size - ship_size,
                           n_hit)

    return Pmf(nhg.pmf(support - ship_size), support)
We see, unsurprisingly, that random guessing will usually take near the maximum number of turns to solve the game.
std_random_pmf = get_random_guess_dist(GRID_LENGTH**2, SHIP_SIZES.sum())
def plot_turn_dist(pmf, kind='bar', mean=False, ax=None, mean_kwargs=None, **kwargs):
    if ax is None:
        _, ax = plt.subplots()

    if kind == 'bar':
        # pandas barplot uses odd indexing, making it hard to
        # mix bar and line plots if we use pandas's versions
        kwargs.setdefault('width', 1)
        kwargs.setdefault('alpha', 0.75)

        ax.bar(pmf.index, pmf, **kwargs)
    elif kind == 'line':
        pmf.plot(ax=ax, **kwargs)
    else:
        raise ValueError("kind must be one of 'bar' or 'line'")

    if mean:
        if mean_kwargs is None:
            mean_kwargs = {}

        mean_kwargs.setdefault('ls', '--')

        ax.axvline(pmf.mean(), **mean_kwargs);

    ax.xaxis.grid(False)
    ax.set_xlabel("Turns")

    return ax
def make_pct_yaxis(ax):
    ax.yaxis.set_major_formatter(pct_formatter)
    ax.set_ylabel("Probability")

    return ax
std_turn_ax = plot_turn_dist(std_random_pmf, kind='line', mean=True)

make_pct_yaxis(std_turn_ax);
std_turn_ax.set_title("Random guessing\nStandard Battleship");
The dashed vertical line shows the expected number of turns to solve Battleship by randomly guessing,
std_random_pmf.mean()
95.38888888888883
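As a sanity check (added here), this agrees with the closed-form mean of the negative hypergeometric distribution: with \(N = 100\) cells and \(K = 17\) ship cells, the expected number of misses before the final hit is \(K (N - K) / (K + 1)\), so the expected number of turns is

# closed-form check of the mean above: expected misses plus the
# seventeen hits themselves
K, N = 17, 100
K + K * (N - K) / (K + 1)  # 95.3888..., matching the value above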
Returning to the case of a row with a single ship, we visualize the turn distribution of the random guessing strategy for various row lengths and a ship of size two.
RANDOM_ROW_LENGTHS = [4, 5, 8, 10]

fig, axes = plt.subplots(nrows=len(RANDOM_ROW_LENGTHS) // 2, ncols=2,
                         sharey=True,
                         figsize=(FIG_WIDTH, 1.25 * FIG_HEIGHT))

for (row_length, ax) in zip(RANDOM_ROW_LENGTHS, axes.flat):
    pmf = get_random_guess_dist(row_length, 2)
    plot_turn_dist(pmf, ax=ax);

    make_pct_yaxis(ax);
    ax.set_title(f"{row_length}/2");

fig.suptitle("Random guessing");
fig.tight_layout();
We do the same for a ship of size three.
fig, axes = plt.subplots(nrows=len(RANDOM_ROW_LENGTHS) // 2, ncols=2,
                         sharey=True,
                         figsize=(FIG_WIDTH, 1.25 * FIG_HEIGHT))

for (row_length, ax) in zip(RANDOM_ROW_LENGTHS, axes.flat):
    pmf = get_random_guess_dist(row_length, 3)
    plot_turn_dist(pmf, ax=ax);

    make_pct_yaxis(ax);
    ax.set_title(f"{row_length}/3");

fig.suptitle("Random guessing");
fig.tight_layout();
As with the case of standard Battleship, the most likely outcome is to use all the turns, and the expected number of turns is slightly below that maximum.
Optimal search
In the simple setting of a row with a single ship, it is not too hard to work out the optimal search strategy, which consists of two phases. First we must locate the ship by getting at least one hit. Once the ship has been located, we must sink it by attacking spaces next to known hits until all of the ship has been found.
After at least one hit
We first implement optimal search after at least one hit has occurred. In this case, we guess the spot immediately to the left of the leftmost hit, unless that hit is in the first cell, or the cell to its left has already been shown to contain water. In those two cases, we guess the cell to the right of the rightmost hit until all cells containing the ship have been hit. The following function implements this strategy.
def next_guess_1d_with_hit(row):
    first = row.argmax()

    # found the left edge, fill out the ship to the right
    if first == 0 or row[first - 1] == 0:
        return first + row[first:].mask.argmax()
    # find the left edge
    else:
        return first - 1
We can show how this strategy works through a simple animated example. Suppose the position of a length three ship on a length ten row is as shown below.
board = np.zeros(10)
board[6:9] = 1

plot_board(board);
Also suppose that after three guesses, the following spots have been revealed.
revealed = np.ma.masked_all_like(board)
revealed[[1, 4, 7]] = board[[1, 4, 7]]

plot_board(revealed);
The above strategy will take three guesses to sink the ship, as shown in the animation below.
turn_revealed = [revealed]

next_j = next_guess_1d_with_hit(revealed)
next_revealed = revealed.copy()
next_revealed[next_j] = board[next_j]
turn_revealed.append(next_revealed)
revealed = next_revealed

next_j = next_guess_1d_with_hit(revealed)
next_revealed = revealed.copy()
next_revealed[next_j] = board[next_j]
turn_revealed.append(next_revealed)
revealed = next_revealed

next_j = next_guess_1d_with_hit(revealed)
next_revealed = revealed.copy()
next_revealed[next_j] = board[next_j]
turn_revealed.append(next_revealed)
revealed = next_revealed
def animate_boards(boards,
                   cmap=CMAP, cbar=False, ax=None,
                   heatmap_kwargs=None, **ani_kwargs):
    if ax is None:
        fig, ax = plt.subplots(figsize=(FIG_WIDTH, FIG_WIDTH))
    else:
        fig = ax.figure

    if heatmap_kwargs is None:
        heatmap_kwargs = {}

    plot_board(boards[0],
               cmap=cmap, cbar=cbar, ax=ax,
               **heatmap_kwargs)
    quadmesh, *_ = ax.get_children()

    def ani_func(i):
        quadmesh.set_array(boards[i])

        return quadmesh,

    ani_kwargs.setdefault('blit', True)
    ani_kwargs.setdefault('frames', len(boards))

    return FuncAnimation(fig, ani_func, **ani_kwargs)
%%capture
ani = animate_boards(turn_revealed, interval=300)
HTML(ani.to_html5_video())
Getting the first hit
Immediately it is clear that we only have to search one of the congruence classes of the cell index mod the ship size to guarantee a hit. In the following plot of a row of length ten, each congruence class mod three is colored differently.
plot_board(np.arange(10) % 3, cmap='plasma', vmin=0, vmax=2);
A moment’s consideration shows that the congruence class of -1 modulo the ship size will always have the fewest elements of any of the congruence classes, so we choose to search along these cells. The following plot shows the cells to be searched until a hit is found in red.
def get_search_cells_1d(grid_length, ship_size):
    i = np.arange(grid_length)

    return i[i % ship_size == -1 % ship_size]

row = np.zeros(10)
row[get_search_cells_1d(10, 3)] = 1

plot_board(row);
The following subclass of Strategy implements the optimal strategy of searching along this grid until a hit is found, then greedily filling out the ship.
class Single1DOptimalStrategy(Strategy):
    def __init__(self, grid_length, ship_size):
        self._grid_length = grid_length
        self._ship_size = ship_size

    def next_guess(self, revealed):
        if revealed.mask.all() or revealed.sum() == 0:
            search_j = get_search_cells_1d(self._grid_length,
                                           self._ship_size)
            next_j = search_j[revealed.mask[0, search_j].argmax()]
        else:
            return 0, next_guess_1d_with_hit(revealed[0])

        return 0, next_j
We now use this strategy to play all three games with a length three ship on a length five row.
all_rows_5_3 = get_all_ship_rows(5, 3)

opt_row_games_5_3 = [
    play(ship_row[np.newaxis, np.newaxis],
         Single1DOptimalStrategy(5, 3))
    for ship_row in all_rows_5_3
]
opt_row_turns_5_3 = Pmf.from_seq([game.turns for game in opt_row_games_5_3])
ax = plot_turn_dist(opt_row_turns_5_3)

make_pct_yaxis(ax);
ax.set_title("Optimal play\nShip size 3, row length 5");
Two of the games take four turns and one takes three turns. Notably, no games take five turns. This is because we know that the third cell is a guaranteed hit in this configuration.
We can repeat this exercise for a row of length ten.
all_rows_10_3 = get_all_ship_rows(10, 3)

opt_row_games_10_3 = [
    play(ship_row[np.newaxis, np.newaxis],
         Single1DOptimalStrategy(10, 3))
    for ship_row in all_rows_10_3
]
opt_row_turns_pmf_10_3 = Pmf.from_seq([game.turns for game in opt_row_games_10_3])
ax = plot_turn_dist(opt_row_turns_pmf_10_3)

make_pct_yaxis(ax);
ax.set_title("Optimal play\nShip size 3, row length 10");
We see that no games take more than six turns with this configuration. Since the equivalence class of -1 modulo 3 contains three members (2, 5, and 8), we are guaranteed to get our first hit in at most three guesses, and once we have a first hit it will take at most three guesses to sink the ship (two hits and potentially one miss if the first hit was not on the rightmost cell of the ship).
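We can check this six-turn bound directly on the games just played (a small added check).

assert max(game.turns for game in opt_row_games_10_3) <= 6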
Bayesian (Thompson sampling)
The above thought exercise is interesting, but it involves several domain-specific insights (the optimal search grid, how to optimally sink a ship that has been located) that do not generalize immediately to two-dimensional grids that allow ships to be oriented along either rows or columns. (Note that if we restrict our ship in an \(m \times n\) grid to be oriented along a row, we’re essentially playing a one-dimensional game on a row of length \(m \cdot n\).) In this section we will show how to reproduce the above results using a Bayesian approach to this game of simplified Battleship. This approach will generalize to a near-optimal strategy for standard Battleship with many fewer hard-coded rules than generalizing the optimal one-dimensional strategy above would require.
We begin by enumerating all possible boards for a given row/ship configuration (in this case 5/3).
ax = plot_board(all_rows_5_3)
ax.set_yticklabels([]);
We consider this set of ships as a uniform prior distribution on the set of possible board layouts. With this perspective, when we take the average in each column, we get the prior probability that each cell contains a ship.
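Concretely, this prior is just the column-wise mean over the compatible boards; a minimal sketch of the plot:

# average over all compatible boards to get the prior hit probability
# for each cell
plot_board(all_rows_5_3.mean(axis=0), cbar=True);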
We already see that with this Bayesian approach we can conclude that the third cell must contain a ship without doing any modular arithmetic.
For a configuration with a longer row (10/3), we see that the middle six cells have the highest prior probability of containing a ship. This makes sense as there are three configurations in which each of these cells contains a ship, but only one configuration in which the first cell contains a ship, and similarly for the other cells near the edges.
all_rows_10_3 = get_all_ship_rows(10, 3)

plot_board(all_rows_10_3.mean(axis=0), cbar=True);
Looking at these probabilities, a reasonable strategy would seem to be to guess next the cell with the highest probability of containing a hit. In both the 5/3 and 10/3 cases, this would result in guessing the third cell next. (By convention, when many cells have the same hit probability, we will choose the one with the smallest coordinates.)
Let’s focus on the 5/3 configuration for the moment and return to 10/3 after. Suppose the true 5/3 board is as follows.
board_5_3 = np.zeros(5)
board_5_3[1:4] = 1

plot_board(board_5_3);
Guessing the third cell must result in a hit in the 5/3 configuration, so after one guess the following board has been revealed.
revealed_5_3 = np.ma.masked_all_like(board_5_3)
revealed_5_3[2] = board_5_3[2]

plot_board(revealed_5_3);
With this information, what should our next guess be? For the first guess we chose the cell with the highest prior probability of yielding a hit, so it seems intuitive that we would choose the cell with the highest posterior probability of yielding a hit, given the observation from our first guess.
We can get this posterior distribution by eliminating the boards that do not match the cells that have been revealed to us.
def is_compat(ships, revealed, board_axis=(-2, -1)):
    return (ships == revealed).all(axis=board_axis)

compat_5_3 = all_rows_5_3[
    is_compat(all_rows_5_3, revealed_5_3, board_axis=-1)
]

ax = plot_board(compat_5_3)
ax.set_yticklabels([]);
Since the third cell is guaranteed to be a hit in the 5/3 configuration, all ship layouts are compatible with the result of our first guess, and the posterior is the same as the prior restricted to unknown cells. With this restriction our posterior is as follows.
plot_board(
    np.ma.masked_array(compat_5_3.mean(axis=0),
                       mask=~revealed_5_3.mask),
    cbar=True
);
We see that, out of all the unknown cells, the second and fourth are tied for the highest posterior probability of being a hit. By our convention, we will choose the second cell as the next guess, which we know will reveal a hit.
revealed_5_3[1] = board_5_3[1]

plot_board(revealed_5_3);
With this observation we are able to eliminate one of the potential boards.
compat_5_3 = all_rows_5_3[
    is_compat(all_rows_5_3, revealed_5_3, board_axis=-1)
]

ax = plot_board(compat_5_3)
ax.set_yticklabels([]);
Combining these two compatible boards to form the posterior distribution, we get the following.
ax = plot_board(
    np.ma.masked_array(compat_5_3.mean(axis=0),
                       mask=~revealed_5_3.mask),
    cbar=True
)
ax.set_yticklabels([]);
We see that it is impossible for the fifth cell to contain a hit, and there’s a 50% chance each that the first and fourth cells contain a hit. By our convention above, we guess the first cell, resulting in a miss.
revealed_5_3[0] = board_5_3[0]

plot_board(revealed_5_3);
Finally we see that there is only one board compatible with the information that has been revealed, so we guess the fourth cell to end the game.
compat_5_3 = all_rows_5_3[
    is_compat(all_rows_5_3, revealed_5_3, board_axis=-1)
]

ax = plot_board(compat_5_3)
ax.set_yticklabels([]);
We now return to the 10/3 case. Suppose we are working with the following board.
board_10_3 = np.zeros(10)
board_10_3[5:8] = 1

plot_board(board_10_3);
Recalling the prior distribution on hits in the 10/3 configuration, we first guess the third cell, revealing a miss.
plot_board(all_rows_10_3.mean(axis=0), cbar=True);

revealed_10_3 = np.ma.masked_all_like(board_10_3)
revealed_10_3[2] = board_10_3[2]

plot_board(revealed_10_3);
Excluding boards that are incompatible with a miss in the third cell, we see that we have already ruled out a ship in the first two cells as well.
compat_10_3 = all_rows_10_3[
    is_compat(all_rows_10_3, revealed_10_3, board_axis=-1)
]

ax = plot_board(compat_10_3)
ax.set_yticklabels([]);
Unsurprisingly, the sixth through eighth cells are tied for the highest probability of yielding a hit, so we guess the sixth cell, revealing a hit.
plot_board(compat_10_3.mean(axis=0), cbar=True);

revealed_10_3[5] = board_10_3[5]

plot_board(revealed_10_3);
Hitting the rest of the ship will proceed similarly to the 5/3 case, so we stop playing here.
This approach of calculating the posterior distribution by enumerating the possible grids and eliminating those that are incompatible with the observed hits and misses is a very simple type of approximate Bayesian computation (ABC). ABC is a fascinating and deep field of research in its own right, and we are really only scratching its surface here.
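For the curious, the general rejection-sampling form of ABC looks something like the following sketch (the names here are illustrative, not part of this post's code). In our case the "parameter" is the board, the "data" are the revealed cells, and the match is exact.

def abc_rejection(sample_prior, simulate, observed, n_samples):
    # keep prior draws whose simulated data exactly match the
    # observations; the kept draws are samples from the posterior
    kept = []
    while len(kept) < n_samples:
        theta = sample_prior()
        if simulate(theta) == observed:
            kept.append(theta)

    return kept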
Choosing to guess the cell with the maximum a posteriori (MAP) probability of containing a hit is a simplified (greedy) form of Thompson sampling, a Bayesian approach to playing games of incomplete information that chooses the next action according to the probability that it maximizes the expected reward. In the simple situation of a single ship (in both one and two dimensions) we can calculate exactly which cell(s) have the MAP probability of yielding a hit (the expected reward), so this strategy becomes greedy. (I have a soft spot for Thompson sampling in general as we use it in a number of our machine-learning based products at Kibo, where I work. For a discussion of the applications of Thompson sampling to e-commerce optimization, take a look at my talks.)
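For contrast, a non-greedy Thompson sampler would act on a single posterior draw rather than the posterior mean; a hedged sketch (not the strategy implemented below):

def proper_thompson_guess(compat_boards, revealed, rng):
    # draw one board from the posterior and attack an unguessed cell
    # that the sampled board says contains a ship
    board = rng.choice(compat_boards)
    hit_i, hit_j = np.nonzero(board * revealed.mask)
    k = rng.integers(len(hit_i))

    return hit_i[k], hit_j[k]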
def argmax_2d(x):
    max_i = x.max(axis=1).argmax()
    max_j = x[max_i].argmax()

    return max_i, max_j
class SingleThompsonStrategy(Strategy):
    def __init__(self, poss_ships):
        self._poss_ships = poss_ships

    def compat_ships(self, revealed):
        if revealed.mask.all():
            return self._poss_ships
        else:
            is_compat_ = is_compat(self._poss_ships, revealed)

            return self._poss_ships[is_compat_]

    def next_guess(self, revealed):
        post = np.ma.masked_array(
            self.compat_ships(revealed).mean(axis=0),
            mask=~revealed.mask
        )

        return argmax_2d(post)
We watch this strategy play the game we started above.
ts = SingleThompsonStrategy(all_rows_10_3[:, np.newaxis, :])
game_10_3 = play(board_10_3[np.newaxis, np.newaxis, :], ts,
                 progress_bar=True)
100%|██████████| 3/3 [00:00<00:00, 391.81it/s]
%%capture
ani = animate_boards(game_10_3.turn_revealed)
HTML(ani.to_html5_video())
As with our optimal row strategy, we can play every possible 10/3 game with the Thompson sampling strategy and compare the distributions of the number of turns.
ts_row_games_10_3 = [
    play(row[np.newaxis, np.newaxis],
         SingleThompsonStrategy(all_rows_10_3[:, np.newaxis, :]))
    for row in all_rows_10_3
]
ts_row_turns_pmf_10_3 = Pmf.from_seq([
    game.turns for game in ts_row_games_10_3
])
fig, (opt_ax, ts_ax) = plt.subplots(nrows=2, sharex=True, sharey=True,
                                    figsize=(FIG_WIDTH, 2 * FIG_HEIGHT))

plot_turn_dist(opt_row_turns_pmf_10_3, color='C0', ax=opt_ax);

make_pct_yaxis(opt_ax);
opt_ax.set_title("Optimal play");

plot_turn_dist(ts_row_turns_pmf_10_3, color='C1', ax=ts_ax);

make_pct_yaxis(ts_ax);
ts_ax.set_title("Thompson sampling");

fig.suptitle("Ship size 3, row length 10");
fig.tight_layout();
It turns out that for a single row with a single ship, the Thompson sampling strategy is optimal!
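We can spot-check this claim by comparing the two strategies' expected turn counts (a small added check).

assert np.isclose(opt_row_turns_pmf_10_3.mean(),
                  ts_row_turns_pmf_10_3.mean())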
It is extremely cool to me that we can reproduce the optimal strategy without too much domain-specific thought, but just by enumerating all possible boards and sequentially eliminating the ones that are no longer compatible with the observed (revealed) data! This result is particularly exciting because, as we will see, the generalization of the row-optimal strategy to a two-dimensional grid is not straightforward (even for just one ship), but the Thompson sampling approach generalizes much more readily.
Square, one ship
We now turn to the case of a square grid with one ship on it to see how these strategies generalize to something closer to standard Battleship.
Random guessing
Once again the number of turns it takes to sink the ship follows a negative hypergeometric distribution. For the 5/5/3 configuration, this distribution is shown below.
random_5_5_3_pmf = get_random_guess_dist(5**2, 3)

ax = plot_turn_dist(random_5_5_3_pmf, kind='line', mean=True)

make_pct_yaxis(ax);
ax.set_title(r"5/5/3");
There are not really any surprises here; the geometry of the board and the ability to have ships orient along rows or columns doesn’t affect the random guessing strategy at all.
Near optimal
We now construct a near optimal strategy for one ship on a two-dimensional grid based on trying to reduce the problem to search on a one-dimensional row/column as quickly as possible. We call this strategy “near optimal” because we will see that it takes slightly more turns, on average, than a Thompson sampling strategy to sink the ship.
It is tempting to treat an \(m \times n\) two-dimensional grid as an \(m \cdot n\) one-dimensional grid and use the search grid from the above strategy, but this grid will only minimize the turns-to-first-hit when the row length and the ship size are relatively prime. To illustrate this fact, we show the results of wrapping the search grid for the 100/2 configuration to the 10/10/2 configuration below.
bad_search_j_1d = get_search_cells_1d(100, 2)
bad_search_i, bad_search_j = np.divmod(bad_search_j_1d, 10)
bad_search_grid = np.zeros((10, 10))
bad_search_grid[bad_search_i, bad_search_j] = 1

ax = plot_board(bad_search_grid)
ax.set_title(r"Don't do this!", fontweight='bold');
Clearly this search strategy can fail to locate a column-oriented ship in an even-numbered column even after fifty guesses!
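We can confirm the failure directly (a small added check): the wrapped grid never touches the even-indexed columns, so a vertical ship confined to one of them is never hit.

assert bad_search_grid[:, 0::2].sum() == 0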
Instead, the optimal grid to search for the initial hit (at least for a square grid) is to cycle through the equivalence classes modulo the ship size in each row, as shown below for both the 10/10/2 and the 5/5/3 configurations.
def get_search_cells_2d(grid_shape, ship_size):
    i, j = np.indices(grid_shape)
    in_grid = (i - 1) % ship_size == j % ship_size

    return i[in_grid], j[in_grid]

search_i, search_j = get_search_cells_2d((10, 10), 2)
search_grid = np.zeros((10, 10))
search_grid[search_i, search_j] = 1

ax = plot_board(search_grid)
ax.set_title(r"Ship size 2, $10 \times 10$ grid");
search_i, search_j = get_search_cells_2d((5, 5), 3)
search_grid = np.zeros((5, 5))
search_grid[search_i, search_j] = 1

ax = plot_board(search_grid)
ax.set_title(r"Ship size 3, $5 \times 5$ grid");
We can tell that these search grids are optimal because each cell highlighted in red is one cell less than the ship size away from any other highlighted cell.
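As a concrete (added) check, we can verify by brute force that every possible placement of a size-three ship on a \(5 \times 5\) grid covers at least one cell of the mod-three search grid.

# enumerate all row- and column-oriented placements of a size-three
# ship on a 5 x 5 grid and check that each covers a search cell
ship_size, grid = 3, 5
search = {(i, j) for i in range(grid) for j in range(grid)
          if (i - 1) % ship_size == j % ship_size}
placements = [{(i, j + k) for k in range(ship_size)}
              for i in range(grid) for j in range(grid - ship_size + 1)]
placements += [{(i + k, j) for k in range(ship_size)}
               for i in range(grid - ship_size + 1) for j in range(grid)]
assert all(p & search for p in placements)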
Once a hit has been found by searching these cells, we have to determine whether the ship is oriented along the row or the column passing through that cell. For hits near a boundary (in cells less than a ship’s length away from either the top/bottom or left/right edge) we only have to test one cell to decide which way the ship is oriented. For hits in the interior we have to, in general, test two adjacent cells (either left and right, or above and below) to determine the orientation of the ship.
The following plot highlights the cells that only require one test guess to determine the row/column orientation following a hit for the 10/10/3 configuration.
near_edge = np.zeros((10, 10))
i, j = np.indices((10, 10))
near_edge[(i < 2) | (7 < i) | (j < 2) | (7 < j)] = 1

plot_board(near_edge);
The strategy outlined here is implemented below.
class SingleGridStrategy(Strategy):
    def __init__(self, ship_size):
        self._ship_size = ship_size

    def next_guess(self, revealed):
        # no hits yet: work through the two-dimensional search grid
        if revealed.mask.all() or revealed.sum() == 0:
            next_unmasked = (revealed.mask
                                     [search_i, search_j]
                                     .argmax())

            return search_i[next_unmasked], search_j[next_unmasked]
        else:
            grid_length, _ = revealed.shape
            hit_i, hit_j = argmax_2d(revealed)
            ship_size = self._ship_size

            # exactly one hit so far: determine the ship's orientation
            if revealed.sum() == 1:
                if hit_i < ship_size - 1 or grid_length - ship_size < hit_i:
                    next_i = hit_i + (1 if hit_i < ship_size - 1 else -1)

                    if revealed.mask[next_i, hit_j]:
                        return next_i, hit_j
                    else:
                        is_row = revealed[next_i, hit_j] == 0
                elif hit_j < ship_size - 1 or grid_length - ship_size < hit_j:
                    next_j = hit_j + (1 if hit_j < ship_size - 1 else -1)

                    if revealed.mask[hit_i, next_j]:
                        return hit_i, next_j
                    else:
                        is_row = revealed[hit_i, next_j] == 1
                elif revealed.mask[hit_i, hit_j - 1]:
                    return hit_i, hit_j - 1
                elif revealed.mask[hit_i, hit_j + 1]:
                    return hit_i, hit_j + 1
                else:
                    is_row = False
            else:
                is_row = revealed[hit_i].sum() > 1

            # orientation known: fall back to the one-dimensional strategy
            if is_row:
                return hit_i, next_guess_1d_with_hit(revealed[hit_i])
            else:
                return next_guess_1d_with_hit(revealed[:, hit_j]), hit_j
This strategy involves quite a bit of branching logic and more than a few magic numbers. While I do not doubt that this implementation can be simplified, avoiding complex branching logic is one of the motivations for a simulation-based Bayesian approach to Battleship.
To understand the distribution of turns required by this strategy to sink the ship, we define a function that generates all square grids of a given shape with a single ship of a given size.
def get_all_ships(grid_length, ship_size):
    rows = get_all_ship_rows(grid_length, ship_size)
    n_rows, _ = rows.shape

    i = np.arange(grid_length)

    boards = np.zeros((
        n_rows, grid_length,
        grid_length, grid_length,
    ))
    boards[:, i, i, :] = rows[:, np.newaxis]
    long_boards = boards.reshape((-1, grid_length, grid_length))

    return np.concatenate((
        long_boards,
        long_boards.transpose(0, 2, 1)
    ))
We generate all possible boards for the 5/5/3 configuration and visualize a few of them.
all_ships_5_5_3 = get_all_ships(5, 3)

fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True,
                         figsize=(FIG_WIDTH, FIG_WIDTH))

plt_boards = rng.choice(all_ships_5_5_3, size=axes.size, replace=False)

for (board, ax) in zip(plt_boards, axes.flat):
    plot_board(board, ax=ax);

fig.suptitle("Four boards, 5/5/3");
fig.tight_layout();
We now play all 5/5/3 games using this strategy.
grid_games_5_5_3 = [
    play(ship[np.newaxis], SingleGridStrategy(3))
    for ship in all_ships_5_5_3
]
grid_turns_pmf_5_5_3 = Pmf.from_seq([
    game.turns for game in grid_games_5_5_3
])
ax = plot_turn_dist(grid_turns_pmf_5_5_3,
                    color='C1', label="Near optimal",
                    mean=True, mean_kwargs={'c': 'C1'});
plot_turn_dist(random_5_5_3_pmf,
               kind='line', c='C0',
               mean=True, mean_kwargs={'c': 'C0'},
               label="Random",
               ax=ax)

make_pct_yaxis(ax);

ax.legend();
We see that this near optimal strategy is a significant improvement over random guessing (which is expected). Intuitively, it seems odd that the distribution is bimodal. In fact, we will see after implementing Thompson sampling in the two-dimensional case that this strategy is definitely not optimal.
Bayesian (Thompson sampling)
The generalization of one-dimensional Thompson sampling to a two-dimensional grid is much simpler than the above generalization of the one-dimensional optimal strategy. The implementation of Thompson sampling for a single ship in two dimensions is shown below.
class SingleThompsonStrategy(Strategy):
    def __init__(self, poss_ships):
        self._poss_ships = poss_ships

    def compat_ships(self, revealed):
        if revealed.mask.all():
            return self._poss_ships
        else:
            is_compat_ = is_compat(self._poss_ships, revealed)

            return self._poss_ships[is_compat_]

    def next_guess(self, revealed):
        post = np.ma.masked_array(
            self.compat_ships(revealed).mean(axis=0),
            mask=~revealed.mask
        )

        return argmax_2d(post)
We now play all 5/5/3 games with this strategy to understand its turn distribution.
ts_games_5_5_3 = [
    play(ship[np.newaxis], SingleThompsonStrategy(all_ships_5_5_3))
    for ship in all_ships_5_5_3
]
ts_turns_pmf_5_5_3 = Pmf.from_seq([
    game.turns for game in ts_games_5_5_3
])
As alluded to above, we see that the turn distribution for Thompson sampling is concentrated to the left of the turn distribution for our near optimal strategy.
ax = plot_turn_dist(grid_turns_pmf_5_5_3,
                    color='C1', label="Near optimal",
                    mean=True, mean_kwargs={'c': 'C1'});
plot_turn_dist(random_5_5_3_pmf,
               kind='line', c='C0',
               mean=True, mean_kwargs={'c': 'C0'},
               label="Random",
               ax=ax)
plot_turn_dist(ts_turns_pmf_5_5_3,
               color='C2', label="Thompson sampling",
               mean=True, mean_kwargs={'c': 'C2'},
               ax=ax);

make_pct_yaxis(ax);

ax.legend(loc='upper right');
Note that while the turn distribution for Thompson sampling is concentrated further to the left, the expected numbers of turns for Thompson sampling and our near optimal strategy, indicated by the dashed vertical lines, are quite close.
No doubt we could improve the logic of our near optimal strategy to match (or perhaps even beat) Thompson sampling, but the generalization of Thompson sampling to the two-dimensional grid is so much more elegant that it seems hardly worthwhile.
Standard Battleship
Now that we understand these approaches to simplified versions of Battleship, we are ready to tackle the Thompson sampling approach to standard Battleship.
Random
From above we recall the turn distribution for random guessing at standard Battleship.
std_turn_ax.figure
Optimistic expected values
Generalizing the optimal one-dimensional strategy to two dimensions was sufficiently difficult that we do not attempt to define even a near optimal strategy for standard Battleship. Configurations where several ships are adjacent, such as the one shown below, will certainly lead to complex and bug-prone branching logic.
ships = np.zeros((n_ships, GRID_LENGTH, GRID_LENGTH))
ships[0, 0, :5] = 1
ships[1, 1, :4] = 1
ships[2, 2, :3] = 1
ships[3, 3, :3] = 1
ships[4, 4, :2] = 1
fig, axes = plt.subplots(ncols=n_ships, sharex=True, sharey=True,
                         figsize=(n_ships * 2.5, 2.5))

for (ship, name, ax) in zip(ships, SHIP_NAMES, axes):
    plot_board(ship, ax=ax);
    ax.set_xticklabels([]);
    ax.set_yticklabels([]);
    ax.set_title(name);

fig.tight_layout();

plot_board(to_board(ships));
In spite of these challenges, we will want to at least be able to benchmark the average number of turns Thompson sampling requires against other approaches. Fortunately we can propose several scenarios for which the expected number of turns is straightforward to calculate.
Extremely optimistic
We can make two assumptions to get a reasonable floor on the expected number of turns.
- The first five hits we get are each from a different ship.
- As soon as we hit a ship, we know the complete location of that ship.
Together with a search strategy for the first five hits, we can calculate the expected value in this unrealistic but instructive scenario. Weakening these assumptions also leads to more realistic expected values to which we can compare the performance of Thompson sampling.
A fairly efficient search strategy for the first five hits in standard Battleship is to search a mod-two grid (since the smallest ship has size two).
search_i, search_j = get_search_cells_2d((10, 10), 2)
search_grid = np.zeros((10, 10))
search_grid[search_i, search_j] = 1

ax = plot_board(search_grid)
ax.set_title(r"Standard Battleship");
The distribution of the number of turns it takes to get five hits in these fifty cells is given below.
extreme_hit_pmf = get_random_guess_dist(search_grid.sum(),
                                        np.ceil(SHIP_SIZES / 2).sum(),
                                        n_hit=SHIP_SIZES.size)
ax = plot_turn_dist(extreme_hit_pmf, mean=True)

make_pct_yaxis(ax);
ax.set_title("Finding five ships on a\n"
             r"$10 \times 10$ grid (optimistically)");
Under the second assumption, each ship of size \(s\) requires \(s - 1\) further guesses after its first hit, so it will take us \(\sum_i (s_i - 1) = 12\) more guesses to sink all the ships, and the expected number of turns to solve the game (optimistically) is
extreme_ev = extreme_hit_pmf.mean() + (SHIP_SIZES - 1).sum()

extreme_ev
40.18181818181814
Under the slightly more conservative (but still optimistic) assumption that it takes on average one and a half misses to determine the location and orientation of a ship, the expected value becomes
optim_ev = extreme_hit_pmf.mean() + 1.5 * SHIP_SIZES.size + (SHIP_SIZES - 2).sum()

optim_ev
42.68181818181814
We will use these two figures to benchmark the performance of Thompson sampling for standard Battleship in the next section.
Bayesian (Thompson sampling)
Generalizing the Thompson sampling strategy from one ship on a two-dimensional grid to standard Battleship is slightly more conceptually involved than generalizing from one- to two-dimensional grids with one ship, but not too much more.
The generalization also requires some computational optimizations, as we can no longer expect to enumerate and store all possible configurations of the board on reasonably sized hardware. To illustrate this and facilitate our eventual solution to this issue, we generate all possible configurations of each ship.
all_ships = [get_all_ships(GRID_LENGTH, ship_size) for ship_size in SHIP_SIZES]
Even though we can enumerate the possibilities for each ship easily,
n_ship_grids = np.array([len(ships) for ships in all_ships])

n_ship_grids
array([120, 140, 160, 160, 180])
there are more than \(10^{10}\) possible combinations in the Cartesian product of these configurations
np.log10(n_ship_grids).sum()
10.88882175214102
Now, not all elements of the Cartesian product produce valid boards (the ships may overlap). An example of such an element is given below.
ships = np.zeros((n_ships, GRID_LENGTH, GRID_LENGTH))
ships[0, 0, :5] = 1
ships[1, 0, :4] = 1
ships[2, 0, :3] = 1
ships[3, 0, :3] = 1
ships[4, 0, :2] = 1
fig, axes = plt.subplots(ncols=n_ships, sharex=True, sharey=True,
                         figsize=(n_ships * 2.5, 2.5))

for (ship, name, ax) in zip(ships, SHIP_NAMES, axes):
    plot_board(ship, ax=ax);
    ax.set_xticklabels([]);
    ax.set_yticklabels([]);
    ax.set_title(name);

fig.tight_layout();
plot_board(to_board(ships), vmax=5.,
           cbar=True,
           cbar_kwargs={
               'format': '%d',
               'label': "Number of ships"
           });
The following function tests to see if an array of ship grids constitutes a valid board (has no overlap).
def has_no_overlap(ships, ship_axis=0, board_axis=(-2, -1)):
    return (to_board(ships, ship_axis=ship_axis)
                .max(axis=board_axis)
            == 1)
has_no_overlap(ships)
False
Even eliminating sets of ships that do overlap, there are still too many possible boards to enumerate. To address this challenge we will switch from Thompson enumeration (a more proper description of what we’ve been doing so far) to Thompson sampling. Instead of enumerating all boards compatible with the currently revealed information and choosing the unknown cell with the highest posterior probability of yielding a hit, we will draw a sample from the posterior distribution of compatible boards and use the cell most likely to yield a hit under this sample as the next guess.
This sampling approach means we only need to be able to simulate random games of Battleship and check if they are compatible with the currently revealed information in order to generalize Thompson sampling to standard Battleship. Simulating random games of Battleship is not particularly hard, but simulating games compatible with boards that have many spots revealed can take a prohibitively long time if done naively by sampling from each ship’s possible configurations and then testing to see if the product of the samples is compatible. (I have tried this and had it take over an hour to generate a compatible sample with approximately 30 cells revealed.)
Fortunately we can address this issue by intelligently propagating hits and misses from the aggregate board to each ship’s grid and restricting to a subset of each ship’s positions that is more likely to produce a compatible board when combined with all the other ship samples.
Before walking through this process, we introduce the machinery necessary to sample random (compatible) boards.
def _sample_compat_ships_block(poss_ships, revealed,
                               rng=None, block_size=10_000):
    if rng is None:
        rng = np.random.default_rng()

    samples = np.stack([
            rng.choice(ships, size=block_size, shuffle=False)
            for ships in poss_ships
        ],
        axis=1
    )

    has_no_overlap_ = has_no_overlap(samples, ship_axis=1)
    valid_samples = samples[has_no_overlap_]

    if revealed.mask.all():
        return valid_samples
    else:
        is_compat_ = is_compat(to_board(valid_samples, ship_axis=1), revealed)

        return valid_samples[is_compat_]
def sample_compat_ships_block(poss_ships, revealed,
                              rng=None, block_size=10_000):
    block = []

    while len(block) == 0:
        block = _sample_compat_ships_block(
            poss_ships, revealed,
            rng=rng, block_size=block_size
        )

    return block
def generate_compat_ships(poss_ships, revealed,
                          rng=None, block_size=10_000):
    while True:
        yield from sample_compat_ships_block(
            poss_ships, revealed,
            rng=rng, block_size=block_size
        )
def take(n, gen, progress_bar=False):
    n_range = trange(n) if progress_bar else range(n)

    return [x for _, x in zip(n_range, gen)]
def sample_compat_ships(poss_ships, revealed, n,
                        rng=None, block_size=10_000,
                        progress_bar=False):
    compat_gen = generate_compat_ships(poss_ships, revealed,
                                       rng=rng)

    return np.array(take(n, compat_gen,
                         progress_bar=progress_bar))
def sample_ships(poss_ships, n,
                 rng=None, block_size=10_000,
                 progress_bar=False):
    empty = np.ma.masked_all_like(all_ships[0][0])

    return sample_compat_ships(poss_ships, empty, n,
                               rng=rng, block_size=block_size,
                               progress_bar=progress_bar)
We plot four randomly sampled boards below.
ships = sample_ships(all_ships, 4, rng=rng)
fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True,
                         figsize=(FIG_WIDTH, FIG_WIDTH))

for (ships_, ax) in zip(ships, axes.flat):
    plot_board(to_board(ships_), ax=ax);

fig.suptitle("Four boards, standard Battleship");
fig.tight_layout();
To illustrate how long it can take to sample compatible boards from the full Cartesian product, we randomly mask one third of the cells in the first board above and attempt to sample a compatible board.
masked_i, masked_j = rng.choice(
    np.indices((GRID_LENGTH, GRID_LENGTH)).reshape(2, -1),
    size=GRID_LENGTH**2 // 3, axis=1,
    replace=False, shuffle=False
)
mask = np.full((GRID_LENGTH, GRID_LENGTH), False)
mask[masked_i, masked_j] = True

revealed = np.ma.masked_array(to_board(ships[0]), mask=mask)

plot_board(revealed);
start = datetime.datetime.now()

try:
    sample_compat_ships(all_ships, revealed, 1, rng=rng)
except KeyboardInterrupt:
    end = datetime.datetime.now()

    print("You gave up trying to sample a compatible board "
          f"after {(end - start).total_seconds():.1f} seconds")
You gave up trying to sample a compatible board after 59.4 seconds
In order to implement the strategy for sampling compatible boards outlined above, we need to think about how to propagate hit/miss information about the combined board to each ship. For a miss, this propagation is easy, as all ships must have been missed. Propagating the information from a hit is much trickier, since in general we don’t know which ship was hit. We do, however, know which ship was hit when our opponent announces that we sank one of their ships, because they have to tell us which one.
To implement Thompson sampling for standard Battleship, then, we track not just what has been revealed, but what we can conclude has been revealed per ship. We use this per-ship revealed information to reduce the possibilities to compatible ship grids before sampling, then sample from the Cartesian product of these reduced sets, which produces compatible samples in at most seconds, even when many cells have been revealed.
Of course there are many situations where we can infer which ship has been hit. Consider the following board.
revealed = np.ma.masked_all((10, 10))
revealed[0, :2] = 0
revealed[1, 2] = 0
revealed[2, :2] = 0

plot_board(revealed);
If guessing (1, 0) results in a hit, we can obviously conclude that the patrol boat was hit. We do not account for situations such as this when propagating hit information from the combined board to individual ships to avoid cumbersome and error-prone conditional logic in our strategy. (In fact, Thompson sampling accounts for this automatically without a special rule, as there will be only one possible position for the patrol boat compatible with this revealed information.)
We implement Thompson sampling for standard Battleship as described above.
class ThompsonStrategy(Strategy):
    def __init__(self, all_ships, rng=None, block_size=10_000):
        self._ship_strats = [
            SingleThompsonStrategy(ships)
            for ships in all_ships
        ]

        n_ship = len(all_ships)
        grid_shape = all_ships[0][0].shape
        self._ships_revealed = np.ma.masked_all((n_ship,) + grid_shape)

        self._rng = rng if rng is not None else np.random.default_rng()
        self._block_size = block_size

    def next_guess(self, revealed, n=None):
        post = np.ma.masked_array(
            self.sample_post(revealed, n=n),
            mask=~revealed.mask
        )

        return argmax_2d(post)

    def sample_post(self, revealed, n=None):
        # restrict each ship to the grids compatible with what we have
        # learned about that ship, then sample from the Cartesian product
        all_compat_ships = [
            ship_strat.compat_ships(ship_revealed)
            for (ship_strat, ship_revealed)
            in zip(self._ship_strats, self._ships_revealed)
        ]

        if n is None:
            compat_samples = sample_compat_ships_block(
                all_compat_ships, revealed,
                rng=self._rng, block_size=self._block_size
            )
        else:
            compat_samples = sample_compat_ships(
                all_compat_ships, revealed, n=n,
                rng=self._rng, block_size=self._block_size
            )

        return to_board(compat_samples, ship_axis=1).mean(axis=0)

    def reveal(self, i, j, hit_or_miss, sunk):
        # a miss at (i, j) is a miss for every ship, and once a ship is
        # sunk we also know no other ship occupies that cell
        if hit_or_miss == 0 or sunk is not None:
            self._ships_revealed[:, i, j] = 0

        # only when a ship is sunk do we learn which ship was hit
        if sunk is not None:
            self._ships_revealed[sunk, i, j] = 1
We use this strategy to play a game and visualize the results.
strat = ThompsonStrategy(all_ships, rng=rng)
game = play(ships[0], strat,
            progress_bar=True)
100%|██████████| 17/17 [00:02<00:00, 7.06it/s]
game.turns
46
%%capture
ani = animate_boards(game.turn_revealed)
HTML(ani.to_html5_video())
With a bit of extra bookkeeping, we can also watch the posterior distribution used for Thompson sampling evolve after each turn.
game = Battleship(ships[0])
strat = ThompsonStrategy(all_ships, rng=rng)

posts = []

with tqdm(total=SHIP_SIZES.sum()) as pbar:
    while not game.is_solved:
        post = strat.sample_post(game.revealed, n=1_000)
        i, j = argmax_2d(np.ma.masked_array(
            post, mask=~game.revealed.mask
        ))

        hit_or_miss, sunk = game.guess(i, j)
        strat.reveal(i, j, hit_or_miss, sunk)
        posts.append(post)

        if hit_or_miss == 1:
            pbar.update()
100%|██████████| 17/17 [00:09<00:00, 1.83it/s]
%%capture
ani = animate_boards(posts, cbar=True)
HTML(ani.to_html5_video())
I’m not really sure how interpretable this is, but it is kind of mesmerizing.
Finally, we play 1,000 random games of standard Battleship using this strategy to understand the distributions of the number of turns required.
N_GAME = 1_000
def pickleable_play(args):
    i, ships = args

    return play(
        ships,
        ThompsonStrategy(all_ships,
                         rng=np.random.default_rng(SEED + i))
    )
with mp.Pool(mp.cpu_count()) as pool:
    ts_games = [game for game in tqdm(
        pool.imap_unordered(
            pickleable_play,
            zip(range(N_GAME),
                sample_ships(all_ships, N_GAME, rng=rng))
        ),
        total=N_GAME
    )]
100%|██████████| 1000/1000 [15:32<00:00, 1.07it/s]
ts_turns = np.array([game.turns for game in ts_games])
ts_turns_pmf = Pmf.from_seq(ts_turns)

ts_turns_pmf.to_csv('./ts_turns_pmf.csv')
ax = plot_turn_dist(std_random_pmf, kind='line', mean=True,
                    label="Random guessing")
plot_turn_dist(ts_turns_pmf, color='C1',
               mean=True, mean_kwargs={'c': 'C1'},
               label="Thompson sampling", ax=ax);
ax.axvline(extreme_ev, c='C2', ls='--',
           label="Extremely optimistic\nexpected value");
ax.axvline(optim_ev, c='C3', ls='--',
           label="Optimistic\nexpected value");

ax.xaxis.set_major_locator(IndexLocator(base=5, offset=-2));

ax.set_ylim(0, 0.0825);
make_pct_yaxis(ax);

ax.legend(loc='upper right', bbox_to_anchor=(0.825, 1));
ax.set_title("Standard Battleship");
print(f"""
Extremely optimistic expected value = {extreme_ev:.2f}
Optimistic expected value = {optim_ev:.2f}
Thompson sampling expected value = {ts_turns_pmf.mean():.2f}
""")
Extremely optimistic expected value = 40.18
Optimistic expected value = 42.68
Thompson sampling expected value = 45.89
It’s very exciting to see that Thompson sampling is not too much less efficient (on average) than our (extremely) optimistic baseline expected values!
For fun, we visualize the simulated games with the fewest and most turns, to see what it looks like when we get very lucky or unlucky.
min_ts_game = ts_games[(ts_turns == ts_turns.min()).argmax()]
max_ts_game = ts_games[(ts_turns == ts_turns.max()).argmax()]
%%capture
min_ani = animate_boards(min_ts_game.turn_revealed)
max_ani = animate_boards(max_ts_game.turn_revealed)
HTML(min_ani.to_html5_video())
HTML(max_ani.to_html5_video())
Interestingly, we see that the fastest game has three ships adjacent to each other. We likely got a bit of a speedup from finding two of those while attempting to sink the first one hit.
These simulated games naturally invite the question of whether Thompson sampling, as we have implemented it, is optimal for standard Battleship (in the appropriate sense). I strongly suspect that it is near-optimal, but not quite optimal. Likely Thompson sampling combined with a few simple rules (like a restricted initial search grid) will produce slightly better (and perhaps even optimal) results. Searching the internet for prior work on near optimal Battleship strategies yields a few articles about strategies that take, on average, 42-44 guesses to complete the game. Our simple Thompson sampling approach is quite close to that range, and with a few heuristics bolted on would likely be able to shave off a few turns to reach that level of performance. We can also improve the performance of Thompson sampling at the cost of computational time by increasing the block_size used for sampling, resulting in a more accurate approximation to the posterior distribution. Another interesting feature of the strategies in those articles is that they try to balance searching (exploration) and sinking known ships (exploitation) with rules, which we do not have to do explicitly when using Thompson sampling.
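As a concrete (hypothetical) example of that tradeoff, the block size is just a constructor argument:

# a larger block yields more compatible samples per guess and hence a
# sharper Monte Carlo estimate of the posterior hit probabilities
sharper_strat = ThompsonStrategy(all_ships, block_size=50_000)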
Two other interesting extensions of this work would be to change the process of randomly generating ships (which is equivalent to changing the prior used in ABC) and to explore which strategies for initially placing ships take the longest for Thompson sampling to solve.
This framework is generalizable to variations of Battleship such as salvo, or variants where limited lying is allowed, and we may explore those generalizations in future posts.
Many thanks to Kiril Zvezdarov and Meenal Jhajharia for providing helpful feedback on early drafts of this post.
This post is available as a Jupyter notebook here.