In Part 1 we talked about how to estimate the score of your game based on a single snapshot. However, we hadn't yet corrected for the fact that $4$'s sometimes spawn. So, to compensate for the number of $4$'s that spawn, we need to know how often they spawn! The online, open-source version has a $10\%$ chance of spawning a $4$, but based on nearly $38,000$ spawns on my iPhone app version (no, I didn't count them all; I'll show you how I came about this number in a little bit), there is about a $15.3\%$ chance that a $4$ will spawn (leading me to believe that it's supposed to be a $15\%$ chance).
With that in mind, we need an estimate for the total number of tiles spawned, and under our old assumption (that only $2$'s spawn) we can compute that very easily. Consider the following. Every $2$ tile that spawns counts as one spawn. Therefore, every $4$ tile that we have really represents two total spawns (the two $2$ tiles that merged to make it). Each $8$ is four spawns (two $4$ tiles, each made of two spawned $2$ tiles). It doesn't take long to figure out that each tile accounts for
$$c_a = a/2$$
where $a$ is the face value of the tile. Thus we just sum this for each of the $16$ tiles on the board and we have a good estimate for the number of total tiles that spawned, $C_e$
$$C_e = \sum_{16\;Tiles}c_a$$
Well, we just assumed that all the spawns were $2$ tiles. To compensate for the fact that $4$ tiles spawn with probability $p_4$, we use the following two relationships:
$$p_4 = \frac{C_4}{C_a}$$
$$C_a = C_e - C_4$$
where $C_4, C_a,$ and $C_e$ are the number of $4$ tiles spawned, the actual total number of tiles spawned, and the estimated total number of tiles spawned, respectively. The first relationship is fairly straightforward; it's how the probability of a four-spawn is defined. The second relationship, however, involves a little more thought. We guessed, based on only $2$ tiles spawning, that there are $C_e$ total spawns. If we actually know that there are $C_4$ four-spawns, then we have to account for the two-spawns that would've smashed together into these $4$'s. Since two $2$'s make a $4$, each four-spawn replaces two assumed two-spawns, leaving $C_2 = C_e - 2C_4$ two-spawns. To get the actual number of spawns, we add the two-spawns to the four-spawns:
$$C_a = C_2 + C_4 = (C_e - 2C_4) + C_4 = C_e - C_4$$
Plugging the second equation into the first and solving for $C_4$, we find that
$$C_4 = \frac{p_4C_e}{1+p_4}$$
Lastly, since each $4$ that spawned did not actually give us $4$ points (we assumed we earned the points for making every $4$ tile), our expected score ($S_x$) is the estimated score ($S_e$) minus four times the number of $4$'s that spawned:
$$S_x = S_e - 4C_4 = S_e - 4\frac{p_4C_e}{1+p_4}$$
Since we know $p_4$ and we know how to find $S_e$ (see Part 1) and $C_e$, we can compute our expected score. I played approximately 30 games with this, and the average relative error of these estimates was about $0.2\%$. The results were even better once you scored above $10,000$: averaging only the games with scores higher than $10,000$ (which was all but four of them), the average relative error was about $0.1\%$. Pretty cool results!
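To make this concrete, here is a minimal sketch (the function name and defaults are mine, not from the post) that turns a single board snapshot into the expected score $S_x$ using the formulas above:

```python
from math import log2

## A minimal sketch (my own helper, not from the post): compute the expected
## score S_x from a board snapshot. `board` is a list of the 16 face values,
## with 0 for an empty cell; p4 defaults to the measured 15.3%.
def expected_score(board, p4=0.153):
    ## S_e = sum of s_a = a*(log2(a) - 1) over the tiles (from Part 1)
    S_e = sum(a * (log2(a) - 1) for a in board if a > 0)
    ## C_e = sum of c_a = a/2, the all-2s estimate of total spawns
    C_e = sum(a / 2 for a in board if a > 0)
    ## C_4 = p4*C_e/(1 + p4), the estimated number of 4-spawns
    C_4 = p4 * C_e / (1 + p4)
    ## S_x = S_e - 4*C_4
    return S_e - 4 * C_4
```

With $p_4 = 0$ this reduces to the Part 1 estimate $S_e$; with the default $p_4$ it applies the four-spawn correction.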
I said I would tell you how I came about the four-spawn probability of $15.3\%$. I played a bunch of games and compared the actual score of each game to what I estimated it would be if only $2$ tiles spawned. The difference between these two, divided by four, is the actual number of $4$ tiles that spawned. With that, I used the two equations above and calculated what $p_4$ was!
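The back-calculation just described can be sketched as follows (the function name and arguments are mine; the inputs would come from a finished game's final board and its reported score):

```python
## Sketch of the back-calculation described above (my own helper, with
## hypothetical inputs): S_e and C_e come from the final board, S_actual is
## the score the game reported.
def estimate_p4(S_e, S_actual, C_e):
    ## Each spawned 4 inflates the all-2s estimate by exactly 4 points
    C_4 = (S_e - S_actual) / 4.0
    ## p_4 = C_4 / C_a with C_a = C_e - C_4
    return C_4 / (C_e - C_4)
```

As a round-trip check: if the true four-spawn rate is $15\%$, feeding the implied score gap back in recovers $p_4 = 0.15$ exactly.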
There are a few more questions about 2048 that I still have. What is the highest score one can get? What is the lowest score? These questions (and others if I think of more) will be discussed in Part 3.
Mathjestic is just what it sounds like: a (poor) combination of the words "math" and "majestic". It was created as a way for me to write up and share some of the more intriguing math, programming, and engineering topics, problems, or inquiries that I come across.
Monday, June 9, 2014
Friday, May 30, 2014
Exhaustive List of the Sums of Four-Squares
I'd recently been asked to assist with some research on equilateral triangles in $\mathbb{Z}^4$. The project required coming up with a list of all of the lattice points $(i,j,k,l)$ on a hypersphere of radius $\sqrt{d}$ for $d\in\mathbb{N}$. Basically, this reduces to the problem of finding all sets of four integers $(i,j,k,l)$ such that $i^2+j^2+k^2+l^2=d$.
This is an old problem: in 1770 Lagrange proved that every natural number $d$ can be represented as the sum of four squares[1], so we know we should always find at least one solution. Additionally, in 1834 Jacobi determined exactly how many solutions exist[2]: $$N(d)=8\sum_{4\nmid m|d}m$$ We can use this to check that our program finds all the points (or at least the right number of points). Of course, for smaller values of $d$, a complete check of the lattice can be performed from $-\sqrt{d}$ to $\sqrt{d}$ along each of the four coordinate directions. However, this has a complexity of $O(d^2)$.
from math import sqrt

def sum_squares(d):
    sqrt_d = int(sqrt(d))
    ## This list will hold all of the points
    points = []
    ## Nested for loops to check every lattice point
    ## Add 1 because of the exclusivity of the upper bound of range()
    for i in range(-sqrt_d, sqrt_d + 1):
        for j in range(-sqrt_d, sqrt_d + 1):
            for k in range(-sqrt_d, sqrt_d + 1):
                for l in range(-sqrt_d, sqrt_d + 1):
                    if i**2 + j**2 + k**2 + l**2 == d:
                        points.append([i, j, k, l])
    return points
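Jacobi's formula makes a handy sanity check for any of these routines. Here is a small checker (my own sketch, not from the original post) that sums the divisors directly:

```python
## A quick sanity checker (my own sketch): Jacobi's count
## N(d) = 8 * (sum of divisors m of d with 4 not dividing m),
## which should equal the number of points our search routines return.
def jacobi_count(d):
    return 8 * sum(m for m in range(1, d + 1) if d % m == 0 and m % 4 != 0)
```

For instance, $N(1)=8$, matching the eight points $(\pm 1,0,0,0)$ and their permutations.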
However, this gets really slow really quickly, especially if I want to run it for $d$ in the tens of thousands! Better results can be obtained when we realize that the $l$ loop is completely unnecessary. Since we already have guesses for $i, j,$ and $k$ and we know $d$, we can solve for $l$ and check whether it is an integer. This reduces the complexity to $O(d^{1.5})$. We have to do a few things to make sure that the sqrt() function only takes a non-negative number and that we include both the positive and negative roots, but the updated code is as follows.
from math import sqrt

def sum_squares(d):
    ## This line was missing before: compute the loop bound
    sqrt_d = int(sqrt(d))
    ## This list will hold all of the points
    points = []
    ## Nested for loops to check
    ## Add 1 because of the exclusivity of the upper bound of range()
    for i in range(-sqrt_d, sqrt_d + 1):
        for j in range(-sqrt_d, sqrt_d + 1):
            for k in range(-sqrt_d, sqrt_d + 1):
                ## Solve for l instead of looping over it
                l_squared = d - i**2 - j**2 - k**2
                if l_squared >= 0:
                    l = sqrt(l_squared)
                    if l.is_integer():
                        points.append([i, j, k, int(l)])
                        ## Include the negative root as well
                        if l != 0:
                            points.append([i, j, k, -int(l)])
    return points
Even better results can be obtained if you only check the first orthant and permute the sign of the results into the other 15 orthants. This will run in slightly more than $1/16$ the amount of time, but will still have the same complexity. However, depending on how many first orthant points we find, the permutation can take a significant amount of time and I will address the permutations more in a different post. Looking in the first orthant only, we eliminate the negative root for $l$ in the last line of the nested loops.
from math import sqrt

## This list contains the 16 sign permutations
signs = [[ 1, 1, 1, 1], [ 1, 1, 1,-1], [ 1, 1,-1, 1], [ 1, 1,-1,-1],
         [ 1,-1, 1, 1], [ 1,-1, 1,-1], [ 1,-1,-1, 1], [ 1,-1,-1,-1],
         [-1, 1, 1, 1], [-1, 1, 1,-1], [-1, 1,-1, 1], [-1, 1,-1,-1],
         [-1,-1, 1, 1], [-1,-1, 1,-1], [-1,-1,-1, 1], [-1,-1,-1,-1]]

def sum_squares(d):
    ## Add 1 because of the exclusivity of the upper bound of range()
    sqrt_d = int(sqrt(d)) + 1
    ## This list will hold the points in the first orthant
    points = []
    ## This list will hold the points from all orthants
    final = []
    ## Nested for loops to check
    for i in range(sqrt_d):
        for j in range(sqrt_d):
            for k in range(sqrt_d):
                l_squared = d - i**2 - j**2 - k**2
                if l_squared >= 0:
                    l = sqrt(l_squared)
                    if l.is_integer():
                        points.append([i, j, k, int(l)])
    ## For each point found in the first orthant
    for m in range(len(points)):
        ## This list will hold all sign permutations for the mth point
        point_signs = []
        ## For each of the possible sign permutations
        for i in range(len(signs)):
            tmp = [points[m][0] * signs[i][0],
                   points[m][1] * signs[i][1],
                   points[m][2] * signs[i][2],
                   points[m][3] * signs[i][3]]
            ## To avoid duplicates for points on the axes
            ## i.e. 0 * 1 = 0 * -1
            if tmp not in point_signs:
                point_signs.append(tmp)
        final = final + point_signs
    return final
However, this still has the same complexity. For larger values of $d$ something different must be done to reduce the complexity! One such way is to search for only the basic representation of the lattice points in the first orthant. The basic representation is the representation $(i,j,k,l)$ such that $i\geq j\geq k\geq l \geq 0$. For example, rather than finding $\{(2,1,1,0), (2,1,0,1), (2,0,1,1), \dots\}$, we only need to find $(2,1,1,0)$ and permute the order to produce the rest.
This further restricts the bounds on our for loops. For instance, we know that $\sqrt{d/4} \leq i \leq \sqrt{d}$. The upper bound is the same as before, but we have now bounded $i$ from below! To see why, set $i=j=k=l=i_{min}$. This is $i$ at its lowest possible value with $j, k,$ and $l$ at their largest. This yields $$d=i^2+j^2+k^2+l^2 = 4{i_{min}}^2\implies i_{min} = \sqrt{d/4} $$ Each subsequent index is then bounded above by both the previous index and the square root of the difference between $d$ and the sum of the squares of all the previous indices. For instance, $k \leq j$ and $k \leq \sqrt{d - i^2 - j^2}$. A stronger statement is therefore $k \leq \min\left(j, \sqrt{d - i^2 - j^2}\right)$.
from math import sqrt

## This list contains the 16 sign permutations
signs = [[ 1, 1, 1, 1], [ 1, 1, 1,-1], [ 1, 1,-1, 1], [ 1, 1,-1,-1],
         [ 1,-1, 1, 1], [ 1,-1, 1,-1], [ 1,-1,-1, 1], [ 1,-1,-1,-1],
         [-1, 1, 1, 1], [-1, 1, 1,-1], [-1, 1,-1, 1], [-1, 1,-1,-1],
         [-1,-1, 1, 1], [-1,-1, 1,-1], [-1,-1,-1, 1], [-1,-1,-1,-1]]

## This list contains the 24 order permutations of four indices
orders = [[0,1,2,3], [0,1,3,2], [0,2,1,3], [0,2,3,1], [0,3,1,2], [0,3,2,1],
          [1,0,2,3], [1,0,3,2], [1,2,0,3], [1,2,3,0], [1,3,0,2], [1,3,2,0],
          [2,0,1,3], [2,0,3,1], [2,1,0,3], [2,1,3,0], [2,3,0,1], [2,3,1,0],
          [3,0,1,2], [3,0,2,1], [3,1,0,2], [3,1,2,0], [3,2,0,1], [3,2,1,0]]

def sum_squares(d):
    ## Add 1 because of the exclusivity of the upper bound of range()
    sqrt_d = int(sqrt(d)) + 1
    sqrt_d4 = int(sqrt(d/4))
    ## This list will hold the basic points in the first orthant
    points = []
    ## This list will hold the points from all orthants
    final = []
    ## Nested for loops with the tighter bounds derived above
    for i in range(sqrt_d4, sqrt_d):
        sum_i = d - i**2
        for j in range(min(i, int(sqrt(sum_i))) + 1):
            sum_j = sum_i - j**2
            for k in range(min(j, int(sqrt(sum_j))) + 1):
                sum_k = sum_j - k**2
                l = int(sqrt(sum_k))
                if l**2 == sum_k and l <= k:
                    points.append([i, j, k, l])
    ## For each basic point found in the first orthant
    for m in range(len(points)):
        ## This list will hold all order permutations of the mth basic point
        point_orders = []
        ## For each of the possible order permutations
        for n in range(len(orders)):
            tmp = [points[m][orders[n][0]],
                   points[m][orders[n][1]],
                   points[m][orders[n][2]],
                   points[m][orders[n][3]]]
            ## To avoid duplicates when two directions hold the same value,
            ## e.g. swapping the two 1's in [2,1,1,0] gives the same point
            if tmp not in point_orders:
                point_orders.append(tmp)
        for n in range(len(point_orders)):
            ## This list will hold all sign permutations of the nth ordering
            point_signs = []
            ## For each of the possible sign permutations
            for o in range(len(signs)):
                tmp = [point_orders[n][0] * signs[o][0],
                       point_orders[n][1] * signs[o][1],
                       point_orders[n][2] * signs[o][2],
                       point_orders[n][3] * signs[o][3]]
                ## To avoid duplicates for points on the axes
                ## i.e. 0 * 1 = 0 * -1
                if tmp not in point_signs:
                    point_signs.append(tmp)
            final = final + point_signs
    return final
But this is still pretty much the same complexity, and I wanted something faster. I did some research online and was unable to find much. The closest I came was an article in Communications on Pure and Applied Mathematics by Rabin and Shallit[3] that used randomized algorithms to produce a single lattice point such that $x^2+y^2+z^2+w^2=d$. While fast (complexity of $O(\log^2 d)$), this method produces only one lattice point, and there is no way to guarantee which point you get.
Finally, I stumbled upon a post on the Computer Science Stack Exchange[4] where a user suggested a nearly $O(d)$ algorithm. It works by populating a list of length $d+1$ with pairs of numbers whose squares sum to the index of that element. So the $0$th index holds the pair $[0,0]$ because $0^2+0^2=0$, the $5$th index holds the pair $[2,1]$ because $2^2+1^2=5$, and the $25$th index holds the pairs $[5,0]$ and $[4,3]$ because $5^2+0^2=4^2+3^2=25$. Now, combining any pair in the $i$th location with any pair in the $(d-i)$th location gives four numbers whose squares sum to $d$: for example, the pairs in the $0$th and $d$th locations, or the pairs in the $5$th and $(d-5)$th locations. We do a meet-in-the-middle loop through this list and we have our basic representations!
For example, consider $d=6$. We have the following list of pairs, which we can generate in $O(d)$ time. $$\begin{array}{|c|c|c|c|c|c|c|c|} \hline Index & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline Pairs & [0,0] & [1,0] & [1,1] & \phantom{[0,0]} & [2,0] & [2,1] & \phantom{[0,0]}\\ \hline \end{array}$$ Now look at the pair at index $1$, $[1,0]$, and the pair at index $5$, $[2,1]$. Because $1+5=6$, the squares of all four numbers in these two pairs sum to $6$: $2^2+1^2+1^2+0^2=6$. A wonderfully constructed list! The code for this method is below.
from math import sqrt

## This list contains the 16 sign permutations
signs = [[ 1, 1, 1, 1], [ 1, 1, 1,-1], [ 1, 1,-1, 1], [ 1, 1,-1,-1],
         [ 1,-1, 1, 1], [ 1,-1, 1,-1], [ 1,-1,-1, 1], [ 1,-1,-1,-1],
         [-1, 1, 1, 1], [-1, 1, 1,-1], [-1, 1,-1, 1], [-1, 1,-1,-1],
         [-1,-1, 1, 1], [-1,-1, 1,-1], [-1,-1,-1, 1], [-1,-1,-1,-1]]

## This list contains the 24 order permutations of four indices
orders = [[0,1,2,3], [0,1,3,2], [0,2,1,3], [0,2,3,1], [0,3,1,2], [0,3,2,1],
          [1,0,2,3], [1,0,3,2], [1,2,0,3], [1,2,3,0], [1,3,0,2], [1,3,2,0],
          [2,0,1,3], [2,0,3,1], [2,1,0,3], [2,1,3,0], [2,3,0,1], [2,3,1,0],
          [3,0,1,2], [3,0,2,1], [3,1,0,2], [3,1,2,0], [3,2,0,1], [3,2,1,0]]

def sum_squares(d):
    ## Element i of sum_pairs will be populated with all pairs [a,b]
    ## such that a >= b and a*a + b*b = i
    sum_pairs = [[] for i in range(d + 1)]
    ## Fill sum_pairs
    for a in range(0, int(sqrt(d)) + 1):
        for b in range(0, min(a + 1, int(sqrt(d - a**2)) + 1)):
            i = a*a + b*b
            if i <= d:
                sum_pairs[i].append([a, b])
            else:
                break
    ## This list will be populated with the basic points (stored in sorted
    ## form so that duplicate representations collapse) whose squares sum to d
    points = []
    ## This list will hold the points from all orthants
    final = []
    ## Meet in the middle: combine pairs from index i and index d-i
    for i in range(0, d//2 + 1):
        for j in range(0, len(sum_pairs[i])):
            for k in range(0, len(sum_pairs[d-i])):
                tmp = [sum_pairs[i][j][0], sum_pairs[i][j][1],
                       sum_pairs[d-i][k][0], sum_pairs[d-i][k][1]]
                tmp.sort()
                if tmp not in points:
                    points.append(tmp)
    ## For each basic point found in the first orthant
    for m in range(len(points)):
        ## This list will hold all order permutations of the mth basic point
        point_orders = []
        ## For each of the possible order permutations
        for n in range(len(orders)):
            tmp = [points[m][orders[n][0]],
                   points[m][orders[n][1]],
                   points[m][orders[n][2]],
                   points[m][orders[n][3]]]
            ## To avoid duplicates when two directions hold the same value,
            ## e.g. swapping the two 1's in [2,1,1,0] gives the same point
            if tmp not in point_orders:
                point_orders.append(tmp)
        for n in range(len(point_orders)):
            ## This list will hold all sign permutations of the nth ordering
            point_signs = []
            ## For each of the possible sign permutations
            for o in range(len(signs)):
                tmp = [point_orders[n][0] * signs[o][0],
                       point_orders[n][1] * signs[o][1],
                       point_orders[n][2] * signs[o][2],
                       point_orders[n][3] * signs[o][3]]
                ## To avoid duplicates for points on the axes
                ## i.e. 0 * 1 = 0 * -1
                if tmp not in point_signs:
                    point_signs.append(tmp)
            final = final + point_signs
    return final

Ultimately, we've found a very quick way to find all lattice points on the hypersphere of radius $\sqrt{d}$. For the methods where we had to permute (either sign or position), most of the computation time is spent permuting. Therefore, I will spend some time working on a faster way to permute these basic representations while being mindful of repeats. Those methods will be discussed in another post.
[1] https://en.wikipedia.org/wiki/Lagrange%27s_four-square_theorem
[2] https://en.wikipedia.org/wiki/Jacobi%27s_four-square_theorem
[3] M.O. Rabin, J.O. Shallit, Randomized Algorithms in Number Theory, Communications on Pure and Applied Mathematics (1986), no. S1, pp S239-S256.
[4] http://cs.stackexchange.com/questions/2988/how-fast-can-we-find-all-four-square-combinations-that-sum-to-n
Wrong Cancellation Gone Right
I was working on a Project Euler problem a few years back. The problem essentially asks you to find all two-digit fractions less than $1$ where you cancel the same digit on the top and bottom, such as
$$\frac{49}{98}=\frac{4\cancel{9}}{\cancel{9}8}=\frac{4}{8}$$
Of course this is an incorrect method, but curiously, the equality remains. I was able to solve the problem easily with brute force; it's not a particularly computationally intensive problem. But when I shared the fun problem and results with a roommate of mine, he posed an interesting follow-up question: can we extend this cancellation any further? And basically, we found that
$$\frac{4}{8}=\frac{49}{98}=\frac{499}{998}=\frac{4999}{9998}=\cdots$$
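Backing up a step, the original two-digit search is a quick brute force. Here is a minimal sketch (the structure and names are mine, not my original solution):

```python
## A brute-force sketch (mine): find two-digit fractions num/den < 1 where
## cancelling the shared digit (units of num, tens of den) preserves the value.
def curious_fractions():
    found = []
    for num in range(10, 100):
        for den in range(num + 1, 100):
            a, x = divmod(num, 10)   ## num = 10a + x
            y, b = divmod(den, 10)   ## den = 10y + b
            ## Shared nonzero digit, nonzero remaining digit, and
            ## a/b == num/den via cross-multiplication
            if x == y and x != 0 and b != 0 and num * b == den * a:
                found.append((num, den))
    return found
```

It finds the four classic fractions $16/64$, $19/95$, $26/65$, and $49/98$.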
And it turns out that this worked for any number of $9$'s in that position and for any of the two-digit fractions that were found. So, we generalize the problem:
For $a,b,c\in\mathbb{N}$ such that $0\leq a,b,c\leq 9$ $$\frac{a}{b}=\frac{10\cdot a+c}{10\cdot c + b} \implies \frac{a}{b}=\frac{10^n\cdot a + 10^{n-1}\cdot c + \cdots + 10 \cdot c +c}{10^n \cdot c + 10^{n-1}\cdot c + \cdots + 10\cdot c + b}$$ Or, in sigma notation $$\frac{a}{b}=\frac{10^n\cdot a + c\cdot \sum_{i=0}^{n-1}{10^i}}{c\cdot \sum_{i=1}^{n}{10^i} + b}$$ Through algebra (cross multiply, group, and divide), our premise implies that $$c=\frac{9ab}{10a-b}$$ So, consider the right hand side of the conclusion and substitute the above equality for $c$. $$\frac{10^n\cdot a + c\cdot \sum_{i=0}^{n-1}{10^i}}{c\cdot \sum_{i=1}^{n}{10^i} + b}=\frac{10^n\cdot a + \frac{9ab}{10a-b}\cdot\sum_{i=0}^{n-1}{10^i}}{\frac{9ab}{10a-b}\cdot \sum_{i=1}^{n}{10^i} + b}$$ Multiply top and bottom by $10a-b$ to get $$\frac{\left(10a-b\right)10^n\cdot a + 9ab\cdot\sum_{i=0}^{n-1}{10^i}}{9ab\cdot \sum_{i=1}^{n}{10^i} + \left(10a-b\right)b}$$ Distribute into the parentheses $$\frac{10^{n+1}\cdot a^2 - 10^n\cdot a b + 9ab\cdot \sum_{i=0}^{n-1}{10^i}}{9ab\cdot\sum_{i=1}^{n}{10^i}+10ab-b^2}$$ Factor out $-ab$ from two terms on the top and $ab$ from two terms on the bottom. $$\frac{10^{n+1}\cdot a^2 - ab\left[10^n - 9\cdot \sum_{i=0}^{n-1}{\left(10^i\right)}\right]}{ab\left[9\cdot\sum_{i=1}^{n}{\left(10^i\right)}+10\right]-b^2}$$ Looking at the top bracket, we have $n$ nines subtracted from the number $1$ followed by $n$ zeros, which is just $1$. For example $$\begin{array}{c} 10-9=1\\ 100-99=1\\ 1000-999=1\\ \vdots \end{array}$$ Then looking at the bottom bracket, we have $$9\cdot\sum_{i=1}^{n}{\left(10^i\right)}+10=9\cdot\sum_{i=1}^{n}{\left(10^i\right)}+9+1=9\cdot\sum_{i=0}^{n}{\left(10^i\right)}+1$$ which is $n+1$ nines added to $1$, which will be a $1$ followed by $n+1$ zeros, or $10^{n+1}$. 
For example $$\begin{array}{c} 9+1=10\\ 99+1=100\\ 999+1=1000\\ \vdots \end{array}$$ Substituting these in we get $$\frac{10^{n+1}\cdot a^2 -ab}{ab\cdot 10^{n+1} - b^2}$$ Finally, factor out $a$ on top and $b$ on the bottom and cancel the common factor to get $$\frac{a\left(10^{n+1}\cdot a - b\right)}{b\left(10^{n+1}\cdot a - b\right)}=\frac{a}{b}$$ Therefore, if we can cancel the digits in the (incorrect) manner we did, we can insert as many copies of the cancelled digit as we'd like and still maintain equality.
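As a numerical spot check of this result, here is a small sketch (mine) that builds the extended fractions directly; exact arithmetic via Fraction avoids any floating-point doubt:

```python
from fractions import Fraction

## Sketch (mine): build the fraction with n copies of the cancelled digit c
## sandwiched between the digits a and b, e.g. a=4, b=8, c=9, n=2 -> 499/998.
def extended_fraction(a, b, c, n):
    num = int(str(a) + str(c) * n)
    den = int(str(c) * n + str(b))
    return Fraction(num, den)
```

For $49/98$ we have $a=4$, $b=8$, and $c = 9ab/(10a-b) = 288/32 = 9$, and every extension reduces back to $1/2$.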
2048 - Scoring (Part 1)
As everyone probably knows, the game $2048$ has become extremely popular (Even my mother plays it!). The game is simple; smash tiles of the same number together to make the next power of two with the goal of reaching the $2048$ tile.
This will be one of many posts I make about this game because I am fascinated with some of the math behind it. One of the first questions I want to answer is: How is the game scored? Well, anyone watching the score as they play can see that when you make the $4$ tile, you get $4$ points. When you make the $1024$ tile, you get $1024$ points. Pretty straightforward. But, in order to make the $1024$ tile (and get your $1024$ points) you need to first make two $512$ tiles and get $512 + 512 = 1024$ more points. And to make a pair of $512$ tiles, you need to make a total of four $256$ tiles.
So, the next question is: What is the actual value for a tile? Basically, how many points have you actually gotten from any given tile? I will first make the assumption that only $2$'s will spawn and will address the incorrectness at a later point. This means that every $4$ tile in play was created by crashing two $2$ tiles and has given us $4$ points. With $s_a$ representing the score of the tile with a face value $a$, we have the recurrence relation $$ s_a=a+2s_{a/2} $$ because we get the value of the tile we create ($a$) and the score of two of the previous tiles ($a/2$). With this relation and the fact that $s_2 = 0$ (because it spawned giving no points) we can tabulate our relation: $$ \begin{array}{|c|c|c|} \hline \mathrm{\mathbf{a}} & \mathrm{\mathbf{Relation}} & \mathrm{\mathbf{s_a}}\\ \hline 2 & 0 & 0\\ 4 & 4+2(0) & 4\\ 8 & 8 + 2(4) & 16\\ 16 & 16 + 2(16) & 48\\ 32 & 32 + 2(48) & 128\\ 64 & 64 + 2(128) & 320\\ 128 & 128 + 2(320) & 768\\ \hline \end{array} $$ We can keep going, but I wanted a better way. Is there a closed form solution for $s_a$? Can we find $s_a$ without finding the score of every tile before it? I was looking for a pattern and came across this: $$ \begin{array}{|c|c|c|} \hline \mathrm{\mathbf{a}} & \mathrm{\mathbf{s_a}} & \mathrm{\mathbf{Pattern}} \\ \hline 2 & 0 & 2(0)\\ 4 & 4 & 4(1)\\ 8 & 16 & 8(2)\\ 16 & 48 & 16(3)\\ 32 & 128 & 32(4)\\ 64 & 320 & 64(5)\\ 128 & 768 & 128(6)\\ \hline \end{array} $$ where $s_a = a\left[\log_2(a)-1\right]$. But is this true in general? Turns out it is true and we can prove it with induction rather quickly. For simpler algebra, we'll define $n$ such that $2^n = a$, or rather, the $n$th tile has a face value of $2^n$. Further, we define $t_n$ to be equal to $s_{2^n}$ and see that our closed form guess is $t_n = (n-1)2^n$ and our recurrence relation is $t_n = 2^n + 2t_{n-1}$.
The base case of $n=1$ is trivially true. We want to show that if $t_n = (n-1)2^n$ then $t_{n+1} = n2^{n+1}$. Consider $t_{n+1}$: $$ t_{n+1} = 2^{n+1} + 2t_n = 2^{n+1} + 2(n-1)2^n = 2^{n+1} + (n-1)2^{n+1} = n2^{n+1} $$ Pretty straightforward! So, at the end of your games you don't have to waste your time looking at the top corner of the screen for your score. You can simply find $s_a$ for all 16 tiles on the screen and add them together! We will denote $S_e$ as the total estimated score for the game: $$S_e = \sum_{16\;Tiles}s_a$$ But, you'll notice that your estimated scores are always higher than the score the game shows (though only by around $2\%$ to $6\%$, depending on how high your score is). This is because we assumed we get the points for the $4$ tiles. So how can we correct for the fact that sometimes a $4$ spawns? The answer is in Part 2.
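The induction can also be spot-checked numerically; a small sketch (function names mine) comparing the recurrence against the closed form:

```python
from math import log2

## Sanity check (my own sketch): the recurrence s_a = a + 2*s_{a/2} with
## s_2 = 0 should agree with the closed form s_a = a*(log2(a) - 1).
def score_recursive(a):
    return 0 if a == 2 else a + 2 * score_recursive(a // 2)

def score_closed(a):
    return round(a * (log2(a) - 1))
```

Both give, for example, $s_{128} = 768$, matching the table above.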
This will be one of many posts I make about this game because I am fascinated with some of the math behind it. One of the first questions I want to answer is: How is the game scored? Well, anyone watching the score as they play can see that when you make the $4$ tile, you get $4$ points. When you make the $1024$ tile, you get $1024$ points. Pretty straight forward. But, in order to make the $1024$ tile (and get your $1024$ points) you need to first make two of the $512$ tiles and get $512 + 512 = 1024$ more points. And to make pair of $512$ tiles, you need to make a total of four 256 tiles.
So, the next question is: What is the actual value for a tile? Basically, how many points have you actually gotten from any given tile? I will first make the assumption that only $2$'s will spawn and will address the incorrectness at a later point. This means that every $4$ tile in play was created by crashing two $2$ tiles and has given us $4$ points. With $s_a$ representing the score of the tile with a face value $a$, we have the recurrence relation $$ s_a=a+2s_{a/2} $$ because we get the value of the tile we create ($a$) and the score of two of the previous tiles ($a/2$). With this relation and the fact that $s_2 = 0$ (because it spawned giving no points) we can tabulate our relation: $$ \begin{array}{|c|c|c|} \hline \mathrm{\mathbf{a}} & \mathrm{\mathbf{Relation}} & \mathrm{\mathbf{s_a}}\\ \hline 2 & 0 & 0\\ 4 & 4+2(0) & 4\\ 8 & 8 + 2(4) & 16\\ 16 & 16 + 2(16) & 48\\ 32 & 32 + 2(48) & 128\\ 64 & 64 + 2(128) & 320\\ 128 & 128 + 2(320) & 768\\ \hline \end{array} $$ We can keep going, but I wanted a better way. Is there a closed form solution for $s_a$? Can we find $s_a$ without finding the score of every tile before it? I was looking for a pattern and came across this: $$ \begin{array}{|c|c|c|} \hline \mathrm{\mathbf{a}} & \mathrm{\mathbf{s_a}} & \mathrm{\mathbf{Pattern}} \\ \hline 2 & 0 & 2(0)\\ 4 & 4 & 4(1)\\ 8 & 16 & 8(2)\\ 16 & 48 & 16(3)\\ 32 & 128 & 32(4)\\ 64 & 320 & 64(5)\\ 128 & 768 & 128(6)\\ \hline \end{array} $$ where $s_a = a\left[\log_2(a)-1\right]$. But is this true in general? Turns out it is true and we can prove it with induction rather quickly. For simpler algebra, we'll define $n$ such that $2^n = a$, or rather, the $n$th tile has a face value of $2^n$. Further, we define $t_n$ to be equal to $s_{2^n}$ and see that our closed form guess is $t_n = (n-1)2^n$ and our recurrence relation is $t_n = 2^n + 2t_{n-1}$.
The base case of $n=1$ is trivially true. We want to show that if $t_n = (n-1)2^n$ then $t_{n+1} = n2^{n+1}$. Consider $t_{n+1}$: $$ t_{n+1} = 2^{n+1} + 2t_n = 2^{n+1} + 2(n-1)2^n = 2^{n+1} + (n-1)2^{n+1} = n2^{n+1} $$ Pretty straightforward! So, at the end of your game you don't have to look at the top corner of the screen for your score. You can simply find $s_a$ for all $16$ tiles on the screen and add them together! We will denote the total estimated score for the game by $S_e$: $$S_e = \sum_{16\;Tiles}s_a$$ But you'll notice that your estimated scores are always higher than the score the game shows (only by around $2\%$ to $6\%$, depending on how high your score is). This is because we assumed we get the points for every $4$ tile. So how can we correct for the fact that sometimes a $4$ spawns? The answer is in Part 2.
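Summing $s_a$ over a board is a one-liner. Here is a sketch for a hypothetical end-of-game board (the board below is made up purely for illustration; $0$ marks an empty cell):

```python
import math

def tile_score(a):
    """Closed-form tile score s_a = a*(log2(a) - 1); empty cells (0) score 0."""
    return 0 if a == 0 else a * (int(math.log2(a)) - 1)

# A hypothetical final board, just for illustration
board = [
    [1024, 256, 64, 8],
    [ 512, 128, 32, 4],
    [  64,  32, 16, 2],
    [   8,   4,  2, 0],
]

# S_e: estimated total score, summed over all 16 cells
S_e = sum(tile_score(a) for row in board for a in row)
print(S_e)  # → 16856
```

For this particular board the estimate works out to $16{,}856$ points.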
Saturday, May 3, 2014
Converging Sequence of Unusually Summed Fractions
While reading Measurement by Paul Lockhart, I came across his explanation for the length of the diagonal in the unit square. He first takes some "guesses" for the number $d$ that squares to $2$:
$$\frac{3}{2}, \frac{7}{5}, \frac{10}{7}, \frac{17}{12}$$
Of course these are incorrect, but the sequence intrigued me, and I wondered why he picked these particular guesses. I noticed a pattern: the numerator of each term is the sum of the numerators of the previous two terms, and the denominator of each term is the sum of the denominators of the previous two terms. I thought that if we continued the same pattern, we might converge to $\sqrt{2}$.
$$
\begin{array}{|c|c|}
\hline
\mathrm{\mathbf{Fraction}} & \mathrm{\mathbf{Approx.}} \\
\hline
3/2 & 1.50000\\
7/5 & 1.40000\\
10/7 & 1.42857\\
17/12 & 1.41667\\
27/19 & 1.42105\\
44/31 & 1.41935\\
71/50 & 1.42000\\
115/81 & 1.41975\\
186/131 & 1.41985\\
301/212 & 1.41981\\
487/343 & 1.41983\\
788/555 & 1.41982\\
\hline
\end{array}
$$
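The table above can be reproduced with a few lines of code (a sketch in Python; the variable names are mine). Using exact fractions avoids any rounding questions:

```python
from fractions import Fraction

# Numerators and denominators each follow a Fibonacci-style rule,
# starting from the first two guesses 3/2 and 7/5
a0, a1 = 3, 7   # numerators
b0, b1 = 2, 5   # denominators

terms = [Fraction(a0, b0), Fraction(a1, b1)]
for _ in range(10):
    a0, a1 = a1, a0 + a1
    b0, b1 = b1, b0 + b1
    terms.append(Fraction(a1, b1))

for t in terms:
    print(f"{t.numerator}/{t.denominator} = {float(t):.5f}")
```

The last line printed is $788/555 \approx 1.41982$, matching the table.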
Clearly, this is not approaching $\sqrt{2}$. But it does appear to be converging to something close to $1.41982$. What exactly is going on here?
Let's generalize the problem a little bit more: Define the sequences $\{a_n\}$ and $\{b_n\}$ such that $a_n = a_{n-1} + a_{n-2}$ and $b_n = b_{n-1} + b_{n-2}$, with $a_1 \geq a_0 > 0$ and $b_1 \geq b_0>0$ given. Define another sequence $\{c_n\}$ such that $c_n = a_n/b_n$.
Some questions of particular interest include: Does $\{c_n\}$ always converge? If it converges, what is $\lim_{n\to\infty}c_n$? Can we find a closed form solution for $c_n$?
The first thing I noticed is that $c_n$ is between $c_{n-1}$ and $c_{n-2}$. To show this, assume, without loss of generality, that $c_{n-2}\leq c_{n-1}$. This means that $$ \frac{a_{n-2}}{b_{n-2}} \leq \frac{a_{n-1}}{b_{n-1}} \implies a_{n-2}b_{n-1} \leq a_{n-1}b_{n-2} $$ Consider $a_{n-2}(b_{n-2}+b_{n-1})$. $$ \begin{array}{r l} a_{n-2}(b_{n-2}+b_{n-1}) & = a_{n-2}b_{n-2}+a_{n-2}b_{n-1} \\ & \leq a_{n-2}b_{n-2} + b_{n-2}a_{n-1} \\ & = b_{n-2}(a_{n-2}+a_{n-1}) \end{array}$$ From here it is easy to see the following inequality: $$\frac{a_{n-2}}{b_{n-2}} \leq \frac{a_{n-2}+a_{n-1}}{b_{n-2}+b_{n-1}} \implies c_{n-2} \leq c_{n}$$ Following similarly, we can look at $b_{n-1}(a_{n-2}+a_{n-1})$ to show $c_{n} \leq c_{n-1}$. Therefore we have $$ c_{n-2} \leq c_{n} \leq c_{n-1} $$ and we can conclude that $c_n$ is always between $c_{n-1}$ and $c_{n-2}$ (the other direction of "between" comes from the opposite assumption, $c_{n-2}\geq c_{n-1}$).
While an interesting result, this doesn't necessarily put us any closer to showing convergence. I attempted to bound the sequence between the subsequence of even indices (which is decreasing) and the subsequence of odd indices (which is increasing) and squeeze out a limit. I was able to prove the bound existed, but I was not able to prove convergence for either of the two subsequences. I may share some of this work in a later post, but this direction wasn't leading anywhere. So, I temporarily gave up on showing convergence and moved on to finding a closed form for $c_n$. In searching for a closed form solution, I found the following proof of both a closed form and convergence.
For notational purposes, I will index the Fibonacci sequence, $\{0,1,1,2,3,5,\ldots\}$, as follows: $$F_0 =0, \; F_1 = 1, \; F_2 = 1, \; F_3 = 2, \; F_4 = 3, \; F_5 = 5, \ldots$$ I will first show that $a_n = F_n a_1 + F_{n-1} a_0$ for all $n\geq 2$. Proceeding by induction, the base case of $n=2$ works out: $$a_2 = a_1 + a_0 = 1 \cdot a_1 + 1 \cdot a_0 = F_2 a_1 + F_1 a_0$$ Now, assume that $a_k = F_k a_1 + F_{k-1}a_0$ for all $2\leq k\leq n$. And we want to show that $a_{n+1} = F_{n+1}a_1 + F_n a_0$. $$ \begin{array}{rl} a_{n+1}& = a_n + a_{n-1} \\ &= \left(F_na_1 + F_{n-1}a_0\right) + \left(F_{n-1}a_1 + F_{n-2}a_0\right) \\ &= \left(F_n + F_{n-1}\right)a_1+\left(F_{n-1}+F_{n-2}\right)a_0 \\ &= F_{n+1}a_1 + F_n a_0 \end{array} $$ In the same manner we can show that $b_n = F_n b_1 + F_{n-1} b_0$. Based on this, we can show that $$c_n = \frac{a_n}{b_n} = \frac{F_n a_1 + F_{n-1} a_0}{F_n b_1 + F_{n-1} b_0}\cdot\frac{1/F_{n-1}}{1/F_{n-1}}=\frac{\left(F_n/F_{n-1}\right)a_1+a_0}{\left(F_n/F_{n-1}\right)b_1+b_0}$$ Based on the well known result that the ratio between adjacent Fibonacci numbers approaches the golden ratio, $\phi$, we can conclude that $$\lim_{n\to\infty}c_n = \frac{\phi a_1 + a_0}{\phi b_1 + b_0}$$ We have shown convergence!
Let's reconsider the original problem. Recall that $a_0 = 3, a_1 = 7, b_0 = 2, $ and $b_1 = 5$. Our convergence theorem shows that $$\lim_{n\to\infty}c_n = \frac{7\phi +3}{5\phi+2} \approx 1.41982$$ Exactly what we saw happening! One more interesting result that I found along the way is that $$\lim_{n\to\infty}\frac{a_n}{a_{n-1}}= \lim_{n\to\infty}\frac{b_n}{b_{n-1}}=\phi$$ Basically, any two positive starting numbers in a Fibonacci-esque sequence will always have an adjacent term ratio of $\phi$, just like the Fibonacci sequence.
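Both the closed form and the limit can be checked numerically. The sketch below (in Python, with my own variable names) uses the starting values from the original problem:

```python
def fib(n):
    """Fibonacci numbers with F_0 = 0, F_1 = 1, F_2 = 1, ..."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

a0, a1, b0, b1 = 3, 7, 2, 5

# Build the sequences directly from the recurrences
a, b = [a0, a1], [b0, b1]
for _ in range(30):
    a.append(a[-1] + a[-2])
    b.append(b[-1] + b[-2])

# The closed form a_n = F_n*a_1 + F_{n-1}*a_0 matches the recursion
assert all(a[n] == fib(n) * a1 + fib(n - 1) * a0 for n in range(2, len(a)))

# The limit (phi*a_1 + a_0) / (phi*b_1 + b_0)
phi = (1 + 5 ** 0.5) / 2
limit = (phi * a1 + a0) / (phi * b1 + b0)
print(limit)          # ≈ 1.41982
print(a[-1] / b[-1])  # late terms agree with the limit
```

After a few dozen terms, $a_n/b_n$ agrees with $(\phi a_1 + a_0)/(\phi b_1 + b_0)$ to machine precision.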
Tuesday, April 15, 2014
Introduction
Mathjestic is just what it sounds like: a (poor) combination of the words "math" and "majestic". I created it as a way to write up and share some of the more intriguing math, programming, and engineering topics, problems, and inquiries that I come across in my academic pursuits.
My undergraduate background is in mechanical engineering and mathematics, but I have a great deal of experience in programming, as well. Topics will tend to focus around these three subject areas, but if I find content worthy (and that I'm knowledgeable enough about) then I will add them as well.