Linear Algebra Solves a Childhood Mystery

When I was growing up in Moldova, we didn't have the Game Boy or other fancy gaming consoles. Instead we had the Brick Game, which cost about $5 and came with Tetris and a few other simple games.

One of the games worked as follows: You start with a 5x5 board on which some cells are "on" and some cells are "off". The goal is to clear the board by setting all cells to "off". You can toggle any cell, but doing so also toggles the four adjacent cells (above, below, left, and right); any neighbors that would fall outside the board are simply unaffected.

Here is an example of clearing a board in four moves. The \(\blacksquare\) symbol indicates a cell that is "on", and the \(\fbox{$\phantom{\blacksquare}$}\) symbol marks the cell that is toggled to produce the next board.

$$ \begin{bmatrix} & & \blacksquare & & \\ & \blacksquare & \blacksquare & & \\ \blacksquare & \blacksquare & & \blacksquare & \\ & & \blacksquare & \fbox{$\blacksquare$} & \blacksquare \\ & & & \blacksquare & \end{bmatrix} \rightarrow \begin{bmatrix} & & \blacksquare & & \\ \fbox{$\phantom{\blacksquare}$} & \blacksquare & \blacksquare & & \\ \blacksquare & \blacksquare & & & \\ & & & & \phantom{\blacksquare} \\ & & & \phantom{\blacksquare} & \phantom{\blacksquare} \end{bmatrix} \rightarrow \begin{bmatrix} \blacksquare & & \blacksquare & & \\ \blacksquare & \fbox{$\phantom{\blacksquare}$} & \blacksquare & & \\ & \blacksquare & & & \\ & & & & \phantom{\blacksquare} \\ & & & \phantom{\blacksquare} & \phantom{\blacksquare} \end{bmatrix} $$

$$ \rightarrow \begin{bmatrix} \blacksquare & \fbox{$\blacksquare$} & \blacksquare & & \\ & \blacksquare & & & \\ & & & & \phantom{\blacksquare} \\ & & & & \phantom{\blacksquare} \\ & & & \phantom{\blacksquare} & \phantom{\blacksquare} \end{bmatrix} \rightarrow \begin{bmatrix} \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} \\ \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} \\ \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} \\ \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} \\ \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} \end{bmatrix} $$

As the game progressed, the boards got more and more difficult, until some boards seemed completely impossible to solve, no matter how much time I spent on them. I couldn't figure out, back then, whether those boards were genuinely unsolvable, or whether I was just too bad of a player to solve them.

Then in 2016, I went through David Poole's textbook on linear algebra to prepare for some interviews. I found the section on finite linear games, which examines a similar game with five lights and five switches. Poole shows that to get from one state of the lights to another, all you have to do is solve a system of linear equations. This made me think of the original 5x5 game I played as a kid, and I wondered what questions I could answer about it using linear algebra. After much procrastination, I finally wrote up the results in this blog post.

 

Problem setup

Each board can be represented as a vector in \(\mathbb{Z}_2^{25}\), the 25-dimensional vector space over the finite field \(\mathbb{Z}_2\), i.e. a 25-dimensional vector where each entry is 0 or 1. So the starting board in the example above can be represented as the following vector:

$$\textbf{b}_1 = [0 0 1 0 0 | 0 1 1 0 0 | 1 1 0 1 0 | 0 0 1 1 1 | 0 0 0 1 0].$$

(I've added some \(|\) symbols to make it easier to see the 5 blocks of 5 cells.)

Each action can also be represented as a vector in the same space. So the first action taken in the example above can be represented as the following vector:

$$\textbf{a}_{19} = [0 0 0 0 0 | 0 0 0 0 0 | 0 0 0 1 0 | 0 0 1 1 1 | 0 0 0 1 0].$$

(I'm calling it \(\textbf{a}_{19}\) because it is the action of toggling the 19th cell.)

Applying an action to a board means adding the action vector to the board vector. Note that in \(\mathbb{Z}_2\), addition is modulo 2, i.e. 1 + 1 = 0. So we get the second board in the example above as \(\textbf{b}_2 = \textbf{b}_1 + \textbf{a}_{19}\).
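
Here is a minimal sketch of this representation in Python with NumPy; b1 and a19 are the vectors written out above, and adding them modulo 2 reproduces the second board of the example:

```python
import numpy as np

# the starting board and the first action, exactly as written above
b1 = np.array([0,0,1,0,0, 0,1,1,0,0, 1,1,0,1,0, 0,0,1,1,1, 0,0,0,1,0])
a19 = np.array([0,0,0,0,0, 0,0,0,0,0, 0,0,0,1,0, 0,0,1,1,1, 0,0,0,1,0])

# applying an action is just addition modulo 2
b2 = (b1 + a19) % 2
print(b2.reshape(5, 5))   # the second board of the example
```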

To solve an initial board \(\textbf{b}\), we need to find a sequence of actions that, when added to \(\textbf{b}\), produce an empty board, for example:

$$\textbf{b} + \textbf{a}_{15} + \textbf{a}_8 + \textbf{a}_{21} + ... = \textbf{0}.$$

Since addition is commutative, the order in which we apply the actions does not matter. Since we are doing all additions modulo 2, applying the same action more than once is a no-op. This means that each possible action needs to be applied at most once. Let \(x_i\) denote whether or not action \(\textbf{a}_i\) was applied. Then we need to find a solution to:

$$\textbf{b} + x_1 \textbf{a}_1 + x_2 \textbf{a}_2 + x_3 \textbf{a}_3 + ... = \textbf{0}.$$

Since -1 = 1 in \(\mathbb{Z}_2\), we can rewrite this as:

$$M \textbf{x} = \textbf{b},$$

where \(M\) is the matrix of actions, in which column \(i\) represents \(\textbf{a}_i\), the action of toggling cell \(i\):

$$M = \left[\begin{myarray} 1 & 1 & & & & 1 & & & & & & & & & & & & & & & & & & & \\ 1 & 1 & 1 & & & & 1 & & & & & & & & & & & & & & & & & & \\ & 1 & 1 & 1 & & & & 1 & & & & & & & & & & & & & & & & & \\ & & 1 & 1 & 1 & & & & 1 & & & & & & & & & & & & & & & & \\ & & & 1 & 1 & & & & & 1 & & & & & & & & & & & & & & & \\ 1 & & & & & 1 & 1 & & & & 1 & & & & & & & & & & & & & & \\ & 1 & & & & 1 & 1 & 1 & & & & 1 & & & & & & & & & & & & & \\ & & 1 & & & & 1 & 1 & 1 & & & & 1 & & & & & & & & & & & & \\ & & & 1 & & & & 1 & 1 & 1 & & & & 1 & & & & & & & & & & & \\ & & & & 1 & & & & 1 & 1 & & & & & 1 & & & & & & & & & & \\ & & & & & 1 & & & & & 1 & 1 & & & & 1 & & & & & & & & & \\ & & & & & & 1 & & & & 1 & 1 & 1 & & & & 1 & & & & & & & & \\ & & & & & & & 1 & & & & 1 & 1 & 1 & & & & 1 & & & & & & & \\ & & & & & & & & 1 & & & & 1 & 1 & 1 & & & & 1 & & & & & & \\ & & & & & & & & & 1 & & & & 1 & 1 & & & & & 1 & & & & & \\ & & & & & & & & & & 1 & & & & & 1 & 1 & & & & 1 & & & & \\ & & & & & & & & & & & 1 & & & & 1 & 1 & 1 & & & & 1 & & & \\ & & & & & & & & & & & & 1 & & & & 1 & 1 & 1 & & & & 1 & & \\ & & & & & & & & & & & & & 1 & & & & 1 & 1 & 1 & & & & 1 & \\ & & & & & & & & & & & & & & 1 & & & & 1 & 1 & & & & & 1 \\ & & & & & & & & & & & & & & & 1 & & & & & 1 & 1 & & & \\ & & & & & & & & & & & & & & & & 1 & & & & 1 & 1 & 1 & & \\ & & & & & & & & & & & & & & & & & 1 & & & & 1 & 1 & 1 & \\ & & & & & & & & & & & & & & & & & & 1 & & & & 1 & 1 & 1 \\ & & & & & & & & & & & & & & & & & & & 1 & & & & 1 & 1 \end{myarray}\right] .$$

(I've omitted the zero entries to make it easier to see the structure of the matrix.)
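
If you want to reproduce these results, here is a minimal sketch of how \(M\) can be built in NumPy, assuming the same row-major ordering of the 25 cells used throughout this post:

```python
import numpy as np

N = 5
M = np.zeros((N * N, N * N), dtype=int)
for r in range(N):
    for c in range(N):
        i = r * N + c
        M[i, i] = 1                           # the toggled cell itself
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < N and 0 <= cc < N:   # skip neighbors off the board
                M[rr * N + cc, i] = 1

print(M.sum())                       # 105 non-zeros: 25 diagonal + 2 * 40 adjacencies
print(np.array_equal(M, M.T))        # True: M is symmetric
```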

We are now ready to answer some interesting questions about this game.

 

Are all boards solvable?

All possible boards are solvable if and only if \(M \textbf{x} = \textbf{b}\) has a solution for all \(\textbf{b}\), which is true if and only if \(M\) has full rank. To find the rank of \(M\), we convert it to its reduced row echelon form:

$$R = \left[\begin{myarray} 1 & & & & & & & & & & & & & & & & & & & & & & & & 1 \\ & 1 & & & & & & & & & & & & & & & & & & & & & & 1 & \\ & & 1 & & & & & & & & & & & & & & & & & & & & & 1 & 1 \\ & & & 1 & & & & & & & & & & & & & & & & & & & & 1 & \\ & & & & 1 & & & & & & & & & & & & & & & & & & & & 1 \\ & & & & & 1 & & & & & & & & & & & & & & & & & & 1 & 1 \\ & & & & & & 1 & & & & & & & & & & & & & & & & & & \\ & & & & & & & 1 & & & & & & & & & & & & & & & & 1 & 1 \\ & & & & & & & & 1 & & & & & & & & & & & & & & & & \\ & & & & & & & & & 1 & & & & & & & & & & & & & & 1 & 1 \\ & & & & & & & & & & 1 & & & & & & & & & & & & & 1 & \\ & & & & & & & & & & & 1 & & & & & & & & & & & & 1 & \\ & & & & & & & & & & & & 1 & & & & & & & & & & & & \\ & & & & & & & & & & & & & 1 & & & & & & & & & & 1 & \\ & & & & & & & & & & & & & & 1 & & & & & & & & & 1 & \\ & & & & & & & & & & & & & & & 1 & & & & & & & & 1 & 1 \\ & & & & & & & & & & & & & & & & 1 & & & & & & & & \\ & & & & & & & & & & & & & & & & & 1 & & & & & & 1 & 1 \\ & & & & & & & & & & & & & & & & & & 1 & & & & & & \\ & & & & & & & & & & & & & & & & & & & 1 & & & & 1 & 1 \\ & & & & & & & & & & & & & & & & & & & & 1 & & & & 1 \\ & & & & & & & & & & & & & & & & & & & & & 1 & & 1 & \\ & & & & & & & & & & & & & & & & & & & & & & 1 & 1 & 1 \\ & & & & & & & & & & & & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & & & & & & & & & & & \\ \end{myarray}\right] .$$

We find that \(R\) has two zero rows, so \(M\) has rank 23. This means there are unsolvable boards!
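
Row reduction over \(\mathbb{Z}_2\) is easy to code up by hand (packages like galois or sympy could also be used). Here is a minimal sketch, reusing the matrix M built above:

```python
import numpy as np

def rref_gf2(A):
    """Reduced row echelon form of a 0/1 matrix, with arithmetic mod 2."""
    R = A.copy() % 2
    rows, cols = R.shape
    pivot_row = 0
    for col in range(cols):
        # look for a 1 in this column, at or below the current pivot row
        hits = np.nonzero(R[pivot_row:, col])[0]
        if hits.size == 0:
            continue
        swap = pivot_row + hits[0]
        R[[pivot_row, swap]] = R[[swap, pivot_row]]
        # clear every other 1 in this column by adding the pivot row mod 2
        for r in range(rows):
            if r != pivot_row and R[r, col]:
                R[r] = (R[r] + R[pivot_row]) % 2
        pivot_row += 1
        if pivot_row == rows:
            break
    return R

R = rref_gf2(M)                      # M is the action matrix built above
rank = int(R.any(axis=1).sum())      # number of non-zero rows
print(rank)                          # 23
```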

In fact, since we are working over a finite field, we can count how many boards are solvable. A board is solvable if and only if it is in the column space of \(M\). Since \(M\) has rank 23, its column space is a 23-dimensional subspace of \(\mathbb{Z}_2^{25}\): pick any 23 linearly independent columns, and every board in the column space is a unique combination of them, with each coefficient being 0 or 1. Therefore there are exactly \(2^{23}\) solvable boards.

Since there are \(2^{25}\) total possible boards, this means that only 1 in 4 boards is solvable. In other words, if the Brick Game generated a board by randomly sampling each cell, then that board would have a 75% chance of being unsolvable! The correct way to generate solvable boards is to start with an empty board, and randomly sample whether to apply each action 1 or 0 times.
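
As a quick sketch, here is what that generator looks like in code (reusing M from above); by construction the resulting board lies in \(\text{col}(M)\) and is therefore solvable:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=25)   # which of the 25 actions to apply (0 or 1 each)
board = (M @ x) % 2               # apply them to the empty board
print(board.reshape(5, 5))        # a random solvable board
```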

(I still suspect that the Brick Game generated only solvable boards, and I was just too bad of a player to solve the more difficult ones. I don't have one of these devices here in the US, but if you have one and want to test this hypothesis, let me know!)

 

How to tell if a board is solvable? (attempt #1)

To solve a board \(\textbf{b}\), we have to find a solution to \(M \textbf{x} = \textbf{b}\), which we can do by Gaussian elimination. But if we only wanted to tell whether a board is solvable (without finding an actual solution), is there an easier way?

Let us call the set of all solvable boards \(W\). We have already determined that \(W = \text{col}(M)\). Since \(W\) is closed under addition and scalar multiplication, it is a subspace of \(\mathbb{Z}_2^{25}\). What if we tried to find its orthogonal complement, \(W^\perp\)? Then we could uniquely represent any board as \(\textbf{b} = \textbf{w} + \textbf{w}^\perp\), where \(\textbf{w} \in W\) and \(\textbf{w}^\perp \in W^\perp\). And if \(\textbf{w}^\perp\) is non-zero, then the board should be unsolvable.

Using the fundamental theorem of linear algebra, we get that \(W^\perp = \text{null}(M^\top)\), which is the same as \(\text{null}(M)\) since \(M\) is symmetric. We can find a basis for \(\text{null}(M)\) by manipulating the reduced row echelon form of \(M\). We find that \(\text{null}(M)\) is spanned by two vectors:

$$\textbf{n}_1 = [0 1 1 1 0 | 1 0 1 0 1 | 1 1 0 1 1 | 1 0 1 0 1 | 0 1 1 1 0],$$ $$\textbf{n}_2 = [1 0 1 0 1 | 1 0 1 0 1 | 0 0 0 0 0 | 1 0 1 0 1 | 1 0 1 0 1].$$
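
For completeness, here is a sketch of how such a basis can be extracted from the reduced row echelon form (reusing rref_gf2 and M from above): every non-pivot column of \(R\) yields one basis vector. On this matrix it recovers the two vectors above, up to ordering.

```python
import numpy as np

def null_space_gf2(A):
    R = rref_gf2(A)                       # rref_gf2 from the earlier sketch
    n = A.shape[1]
    pivots = [int(np.argmax(row)) for row in R if row.any()]
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:
        v = np.zeros(n, dtype=int)
        v[f] = 1
        for r, p in enumerate(pivots):
            v[p] = R[r, f]                # over GF(2), -R[r, f] equals R[r, f]
        basis.append(v)
    return basis

n1, n2 = null_space_gf2(M)
print((M @ n1) % 2)                       # all zeros, so M n1 = 0
print((M @ n2) % 2)                       # all zeros, so M n2 = 0
```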

Here we encounter an apparent contradiction. Since \(\textbf{n}_1\) is a non-zero vector in \(W^\perp\), the reasoning above says it should be unsolvable. And yet \(\textbf{n}_1\) is clearly solvable, for example by toggling the four marked cells:

$$\textbf{n}_1 = \begin{bmatrix} & \blacksquare & \fbox{$\blacksquare$} & \blacksquare & \\ \blacksquare & & \blacksquare & & \blacksquare \\ \fbox{$\blacksquare$} & \blacksquare & & \blacksquare & \fbox{$\blacksquare$} \\ \blacksquare & & \blacksquare & & \blacksquare \\ & \blacksquare & \fbox{$\blacksquare$} & \blacksquare & \end{bmatrix}.$$

Where did we go wrong? It turns out that the concept of orthogonality doesn't really work in \(\mathbb{Z}_2\). Two vectors are orthogonal if and only if their dot product is zero. In \(\mathbb{R}^n\), the only vector that is orthogonal to itself is the zero vector. This means that for a subspace \(V\) of \(\mathbb{R}^n\), we have \(V \cap V^\perp = \{\textbf{0}\}\), which enables the unique decomposition mentioned earlier. But in \(\mathbb{Z}_2^n\), any vector that has an even number of 1s is orthogonal to itself, for example \([1 0 1 0 0]\). If such a vector is in some subspace \(U\) of \(\mathbb{Z}_2^n\), then it must also be in \(U^\perp\), so \(U \cap U^\perp \neq \{\textbf{0}\}\), and we lose the unique decomposition property.

 

How to tell if a board is solvable? (attempt #2)

Since the fancy fundamental theorem didn't help us here, let's go back to the basics. We know that a board \(\textbf{b}\) is solvable if and only if it is in \(\text{col}(M)\), which is the same as \(\text{row}(M)\) since \(M\) is symmetric. Converting a matrix to its reduced row echelon form does not change its row space. This means that a board is solvable if and only if it is in \(\text{row}(R)\), which is the same as \(\text{col}(R^\top)\). In other words, a board is solvable if and only if \(R^\top \textbf{y} = \textbf{b}\) has a solution. Let's take a closer look at \(R^\top\):

$$R^\top = \left[\begin{myarray} 1 & & & & & & & & & & & & & & & & & & & & & & & & \\ & 1 & & & & & & & & & & & & & & & & & & & & & & & \\ & & 1 & & & & & & & & & & & & & & & & & & & & & & \\ & & & 1 & & & & & & & & & & & & & & & & & & & & & \\ & & & & 1 & & & & & & & & & & & & & & & & & & & & \\ & & & & & 1 & & & & & & & & & & & & & & & & & & & \\ & & & & & & 1 & & & & & & & & & & & & & & & & & & \\ & & & & & & & 1 & & & & & & & & & & & & & & & & & \\ & & & & & & & & 1 & & & & & & & & & & & & & & & & \\ & & & & & & & & & 1 & & & & & & & & & & & & & & & \\ & & & & & & & & & & 1 & & & & & & & & & & & & & & \\ & & & & & & & & & & & 1 & & & & & & & & & & & & & \\ & & & & & & & & & & & & 1 & & & & & & & & & & & & \\ & & & & & & & & & & & & & 1 & & & & & & & & & & & \\ & & & & & & & & & & & & & & 1 & & & & & & & & & & \\ & & & & & & & & & & & & & & & 1 & & & & & & & & & \\ & & & & & & & & & & & & & & & & 1 & & & & & & & & \\ & & & & & & & & & & & & & & & & & 1 & & & & & & & \\ & & & & & & & & & & & & & & & & & & 1 & & & & & & \\ & & & & & & & & & & & & & & & & & & & 1 & & & & & \\ & & & & & & & & & & & & & & & & & & & & 1 & & & & \\ & & & & & & & & & & & & & & & & & & & & & 1 & & & \\ & & & & & & & & & & & & & & & & & & & & & & 1 & & \\ & 1 & 1 & 1 & & 1 & & 1 & & 1 & 1 & 1 & & 1 & 1 & 1 & & 1 & & 1 & & 1 & 1 & & \\ 1 & & 1 & & 1 & 1 & & 1 & & 1 & & & & & & 1 & & 1 & & 1 & 1 & & 1 & & \\ \end{myarray}\right] .$$

Since the last two columns of \(R^\top\) are zero, the entries \(y_{24}\) and \(y_{25}\) do not affect the product and can take any values. And since the top-left 23x23 block of \(R^\top\) is an identity matrix, we can read the first 23 entries of \(\textbf{y}\) directly from \(\textbf{b}\):

$$y_1 = b_1,~ y_2 = b_2,~ ...,~ y_{23} = b_{23}.$$

The last two rows of \(R^\top\) determine whether \(\textbf{b}\) is solvable or not. In particular, \(\textbf{b}\) is solvable if and only if both of the following conditions hold:

$$\begin{align*} b_{24} =~ & b_2 + b_3 + b_4 + b_6 + b_8 + b_{10} + b_{11} + b_{12} + \\ & b_{14} + b_{15} + b_{16} + b_{18} + b_{20} + b_{22} + b_{23}, \\ b_{25} =~ & b_1 + b_3 + b_5 + b_6 + b_8 + b_{10} + b_{16} + b_{18} + b_{20} + b_{21} + b_{23}. \end{align*}$$

Let's simplify them a bit: move everything to the left-hand side, and replace the first condition with the sum of the two conditions, which makes it sparser:

$$\begin{align*} b_1 + b_2 + b_4 + b_5 + b_{11} + b_{12} + b_{14} + b_{15} + b_{21} + b_{22} + b_{24} + b_{25} &= 0 \\ b_1 + b_3 + b_5 + b_6 + b_8 + b_{10} + b_{16} + b_{18} + b_{20} + b_{21} + b_{23} + b_{25} &= 0 \end{align*}$$

We can visualize these two conditions as follows:

$$\textbf{c}_1 = \begin{bmatrix} + & + & & + & + \\ & & & & \\ + & + & & + & + \\ & & & & \\ + & + & \phantom{+} & + & + \end{bmatrix},~~~ \textbf{c}_2 = \begin{bmatrix} + & & + & & + \\ + & & + & & + \\ & & & & \\ + & & + & & + \\ + & \phantom{+} & + & \phantom{+} & + \end{bmatrix}.$$

For a board to be solvable, the cells marked with \(+\) must add up to zero. In other words, the locations marked with \(+\) must have an even number of "on" cells, for both \(\textbf{c}_1\) and \(\textbf{c}_2\). This gives us a way to tell whether a board is solvable, without even needing to use pen and paper!
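
Here is the same test as a small code sketch; the two masks transcribe \(\textbf{c}_1\) and \(\textbf{c}_2\), and the example checks the starting board from the introduction:

```python
import numpy as np

# masks transcribing c1 and c2 (1 where the figure has a "+")
c1 = np.array([[1, 1, 0, 1, 1],
               [0, 0, 0, 0, 0],
               [1, 1, 0, 1, 1],
               [0, 0, 0, 0, 0],
               [1, 1, 0, 1, 1]]).ravel()
c2 = np.array([[1, 0, 1, 0, 1],
               [1, 0, 1, 0, 1],
               [0, 0, 0, 0, 0],
               [1, 0, 1, 0, 1],
               [1, 0, 1, 0, 1]]).ravel()

def is_solvable(board):
    """Both masked regions must contain an even number of 'on' cells."""
    b = np.asarray(board).ravel()
    return bool((c1 @ b) % 2 == 0 and (c2 @ b) % 2 == 0)

# the starting board from the example at the top of the post
b1 = np.array([0,0,1,0,0, 0,1,1,0,0, 1,1,0,1,0, 0,0,1,1,1, 0,0,0,1,0])
print(is_solvable(b1))                    # True
```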

You can think of \(\textbf{c}_1\) and \(\textbf{c}_2\) as checksums, or parity codes. Interestingly, neither of them checks the value of cells \(b_7\), \(b_9\), \(b_{13}\), \(b_{17}\), or \(b_{19}\), which means that whether a board is solvable does not depend on the values of those cells.

Note that although \(\textbf{y}\) can be read off from \(\textbf{b}\), it is not a solution to the original system \(M \textbf{x} = \textbf{b}\). The columns of \(R^\top\) are linear combinations of the columns of \(M\), so we have to do more work to find an actual solution for a given board.

 

How to solve a board?

As mentioned earlier, we can find a solution to \(M \textbf{x} = \textbf{b}\) by Gaussian elimination. But is there an easier way? In particular, is there a linear operation that would map every board to one of its solutions?

Because \(M\) is not full rank, and because we are in \(\mathbb{Z}_2\), the usual methods for computing a (pseudo)inverse don't work.

Let's take another look at the process by which we converted \(M\) to its reduced row echelon form \(R\). Each row of \(R\) is a linear combination of rows of \(M\). By keeping track of which rows we added together to obtain \(R\), we can express \(R = K M\), where \(K\) is another 25x25 matrix in \(\mathbb{Z}_2\).

Then, if we left-multiply \(M \textbf{x} = \textbf{b}\) by \(K\), we get \(R \textbf{x} = K \textbf{b}\). Let \(\textbf{d} = K \textbf{b}\), and let \(\textbf{v}_1\) and \(\textbf{v}_2\) denote the last two columns of \(R\). Then \(R \textbf{x} = \textbf{d}\) has the following form:

$$\begin{bmatrix} I_{23} & \textbf{v}_1 & \textbf{v}_2 \\ \textbf{0} & 0 & 0 \\ \textbf{0} & 0 & 0 \end{bmatrix} \begin{bmatrix} \textbf{x}_{1:23} \\ x_{24} \\ x_{25} \end{bmatrix} = \begin{bmatrix} \textbf{d}_{1:23} \\ d_{24} \\ d_{25} \end{bmatrix}.$$

From here, we can immediately tell that if \(d_{24}\) or \(d_{25}\) is non-zero, then the board is unsolvable. If both are zero, then one possible solution is:

$$ x_1 = d_1,~ x_2 = d_2,~ ...,~ x_{23} = d_{23},~ x_{24} = 0,~ x_{25} = 0. $$

Thus, the matrix \(K\) gives us what we wanted. For any board \(\textbf{b}\), we can look at the last two entries of \(K \textbf{b}\). If those entries are zeros, then the board is solvable, and we can read off the solution from the first 23 entries of \(K \textbf{b}\).
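
One standard way to obtain such a \(K\) is to row-reduce the augmented matrix \([M \mid I]\): the right half then records the row operations that were applied, so that \(R = KM\). The sketch below (reusing rref_gf2 and M from above) will not necessarily reproduce the exact \(K\) printed further down, since any invertible matrix with \(R = KM\) works, but it has the same properties:

```python
import numpy as np

# row-reduce [M | I]; the right half records the row operations, so R = K M
aug = np.hstack([M, np.eye(25, dtype=int)])
red = rref_gf2(aug)
R, K = red[:, :25], red[:, 25:]

def solve(board):
    """Return one solution x with x_24 = x_25 = 0, or None if unsolvable."""
    d = (K @ np.asarray(board).ravel()) % 2
    if d[23] or d[24]:                    # d_24 or d_25 non-zero: unsolvable
        return None
    x = d.copy()
    x[23] = x[24] = 0                     # never use the 24th or 25th action
    return x

b1 = np.array([0,0,1,0,0, 0,1,1,0,0, 1,1,0,1,0, 0,0,1,1,1, 0,0,0,1,0])
x = solve(b1)
print((b1 + M @ x) % 2)                   # all zeros: the board is cleared
```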

Note that a solution obtained this way might be "unnatural," because it never uses the 24th or 25th action (\(x_{24} = x_{25} = 0\)). For example, the following board is solvable by a single application of the 25th action:

$$\begin{bmatrix} \phantom{\blacksquare} & \phantom{\blacksquare} & \phantom{\blacksquare} & & \\ & & & & \\ & & & & \\ & & & & \blacksquare \\ & & & \blacksquare & \fbox{$\blacksquare$} \end{bmatrix},$$

but the method above finds a much more circuitous solution:

$$\begin{bmatrix} \fbox{$\phantom{\blacksquare}$} & \phantom{\blacksquare} & \fbox{$\phantom{\blacksquare}$} & \phantom{\blacksquare} & \fbox{$\phantom{\blacksquare}$} \\ \fbox{$\phantom{\blacksquare}$} & & \fbox{$\phantom{\blacksquare}$} & & \fbox{$\phantom{\blacksquare}$} \\ & & & & \\ \fbox{$\phantom{\blacksquare}$} & & \fbox{$\phantom{\blacksquare}$} & & \fbox{$\blacksquare$} \\ \fbox{$\phantom{\blacksquare}$} & & \fbox{$\phantom{\blacksquare}$} & \blacksquare & \blacksquare \end{bmatrix}.$$
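
This is easy to fix, though: any two solutions of \(M \textbf{x} = \textbf{b}\) differ by an element of \(\text{null}(M)\), so we can add \(\textbf{n}_1\), \(\textbf{n}_2\), or \(\textbf{n}_1 + \textbf{n}_2\) to the solution returned by \(K\) and keep whichever candidate uses the fewest toggles. A small sketch, reusing solve, n1, and n2 from the earlier snippets:

```python
import numpy as np

def shortest_solution(board):
    """Among the solutions x, x+n1, x+n2, x+n1+n2, return the fewest-toggle one."""
    x = solve(board)
    if x is None:
        return None
    candidates = [x, (x + n1) % 2, (x + n2) % 2, (x + n1 + n2) % 2]
    return min(candidates, key=lambda v: int(v.sum()))

b = np.zeros(25, dtype=int)
b[[19, 23, 24]] = 1                           # cells 20, 24, 25 "on", as above
print(shortest_solution(b).reshape(5, 5))     # a single toggle of cell 25
```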

Furthermore, the matrix \(K\) that row-reduces \(M\) is rather ugly:

$$K = \left[\begin{myarray} & & & & & 1 & & & & & 1 & 1 & & & & 1 & & 1 & & & & 1 & 1 & 1 & \\ & & & & & & 1 & & & & 1 & 1 & 1 & & & & & & 1 & & 1 & 1 & & 1 & 1 \\ & 1 & 1 & & & 1 & & 1 & 1 & & 1 & 1 & 1 & & 1 & 1 & 1 & 1 & 1 & 1 & 1 & & & 1 & 1 \\ & & 1 & 1 & 1 & & 1 & & & & 1 & 1 & & 1 & 1 & & 1 & & & & & & 1 & 1 & 1 \\ & 1 & 1 & & 1 & 1 & & & & & 1 & & 1 & & 1 & & & 1 & & 1 & 1 & 1 & & & \\ & 1 & & 1 & 1 & 1 & 1 & & & & 1 & 1 & 1 & 1 & 1 & 1 & & 1 & 1 & & & 1 & 1 & 1 & \\ & 1 & & 1 & & 1 & 1 & & 1 & 1 & & & & 1 & & 1 & 1 & 1 & & & & 1 & & & \\ & & & & 1 & & & & 1 & 1 & & & & & 1 & & & & & & & & & & 1 \\ & 1 & & 1 & & 1 & 1 & & 1 & 1 & & 1 & & & & & & 1 & 1 & 1 & & & & 1 & \\ & 1 & & 1 & 1 & 1 & 1 & & & & & 1 & 1 & 1 & & & 1 & 1 & & 1 & 1 & 1 & 1 & 1 & 1 \\ & & & & 1 & & & & 1 & 1 & & & 1 & & 1 & 1 & 1 & 1 & 1 & & & 1 & & & \\ & & & & & & & & & & & & & & & & 1 & & & & 1 & 1 & 1 & & \\ & 1 & 1 & & 1 & 1 & & & & 1 & 1 & & 1 & 1 & & & & 1 & & & 1 & 1 & & & \\ & & 1 & 1 & 1 & & 1 & & 1 & & 1 & 1 & 1 & & & & & & 1 & & 1 & 1 & & 1 & 1 \\ & 1 & 1 & & & 1 & & & 1 & & 1 & & & 1 & 1 & & 1 & 1 & 1 & 1 & & & 1 & & 1 \\ & 1 & & 1 & & 1 & 1 & & 1 & 1 & & 1 & & 1 & & & & & & & 1 & 1 & & 1 & \\ & & 1 & 1 & & & 1 & & & 1 & 1 & 1 & & & 1 & & 1 & 1 & 1 & & & & 1 & & \\ & 1 & & 1 & 1 & 1 & 1 & & & & & 1 & 1 & 1 & 1 & & 1 & 1 & 1 & & 1 & 1 & 1 & 1 & \\ & 1 & 1 & & & 1 & & & 1 & & 1 & & & 1 & 1 & & 1 & 1 & 1 & & & & 1 & & \\ & & & & & & & & & & & & & & & & & & & & & & & & 1 \\ & 1 & 1 & & 1 & 1 & & & & 1 & 1 & & 1 & 1 & & & & & & & 1 & & 1 & 1 & \\ & & 1 & 1 & 1 & & 1 & & 1 & & 1 & 1 & 1 & & & & & & & & 1 & 1 & 1 & & \\ & 1 & 1 & & & 1 & & & 1 & & 1 & & & 1 & 1 & & 1 & 1 & 1 & & & & 1 & 1 & \\ & 1 & 1 & 1 & & 1 & & 1 & & 1 & 1 & 1 & & 1 & 1 & 1 & & 1 & & 1 & & 1 & 1 & 1 & \\ 1 & 1 & & 1 & 1 & & & & & & 1 & 1 & & 1 & 1 & & & & & & 1 & 1 & & 1 & 1 \\ \end{myarray}\right] .$$

We can make a few simplifications to it. For example, we can replace the last two rows with zeros, since the last two rows of \(R\) are zeros. We can also add \(\textbf{n}_1\) or \(\textbf{n}_2\) (or both) to any row of \(K\), while preserving the property that \(R = K M\). But even after these attempts to simplify \(K\), it remains pretty dense. So even though it gives us a way to solve any board with a single linear operation, finding that solution without pen and paper is not easy. The good news is that the game remains fun to play by trial and error.

This is as far as I got when trying to apply linear algebra to this game. If you have any further insights, I'd love to hear about them.

Special thanks to YL for helpful suggestions and feedback. Supporting code for this article can be found here.