Uther Party is a custom mini-game map for Warcraft 3. Our local university LAN club has a weird obsession with it. Basically, there is a box with a number on it, and the goal is to not be holding the box when it lands on 0. You pass the box to the person next to you by pressing either Q, which decrements it by 1, or W, which decrements it by 2. If the box is on 1 and you decrement it by 2, it explodes before you can pass it.
The game is pretty frustrating, so I decided to write a script to tell me the optimal choice to make. The script works by generating a minimax tree of all possible choices. Each leaf has an array the size of the number of players, with a 1 in a player's field if they survive that playthrough and a 0 if they don't. It then works up the tree, with each level corresponding to a different player's turn, and each player attempting to maximise the value at their own index in the array. If the two choices give a player the same chance of death, it takes the average of both children's chances.
The solver works pretty well. However, it assumes that the players aren't colluding and are only looking to maximise their own score, which is a fair assumption, but may not be the case 100% of the time.
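The core of the solver can be sketched in a few lines. This is a simplified reconstruction, not the actual script; in particular, the rule that decrementing the box to exactly 0 kills the *receiver* (rather than the passer) is my reading of the rules above.

```python
from fractions import Fraction

def solve(n, player, num_players):
    """Return a tuple of survival values (1 = lives, 0 = dies) for the
    state where `player` is holding a box showing `n`."""
    if n == 0:
        # Whoever is holding the box when it lands on 0 loses.
        return tuple(0 if i == player else 1 for i in range(num_players))
    outcomes = []
    for d in (1, 2):  # Q decrements by 1, W decrements by 2
        if d > n:
            # Decrementing past 0 makes the box explode in your hands.
            outcomes.append(tuple(0 if i == player else 1
                                  for i in range(num_players)))
        else:
            outcomes.append(solve(n - d, (player + 1) % num_players,
                                  num_players))
    # The current player picks the move that maximises their own survival;
    # on a tie, average the two children's chances.
    a, b = outcomes
    if a[player] > b[player]:
        return a
    if b[player] > a[player]:
        return b
    return tuple(Fraction(x + y, 2) for x, y in zip(a, b))
```

With two players this reproduces the pattern you'd work out by hand: holding a box on a multiple of 3 is a losing position.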
You can download the solver here (it has prompt, command and GUI modes). It only requires Python, but can optionally use ete2 to print out a graph of the tree if you are really sadistic.
If you want to try it out, I’ve also made a module for the ComSSA IRC bot to facilitate a text based game of mystery box.
Pong is a classic game — you could even say it is the classic game. I have been looking for things to help me learn Scratch recently, which is used to teach kids to code, as well as things to help me procrastinate exam study. So I decided to see if I could make the best damn Pong AI in Scratch that I could. This isn't exactly simple: Scratch doesn't really have "functions" like a regular programming language, so things get messy really quickly. As Wikipedia says:
There is a strong contrast between the powerful multimedia functions and multi-threaded programming style and the rather limited scope of the Scratch programming language.
The Definition
So before we start, let’s define the world and its parameters. I have just come fresh out of an AI unit which did not have nearly enough coding, so I am itching to actually apply my knowledge. We will start off with a PEAS definition:
Performance: We want to minimize the number of balls which we miss, and maximize the number of balls which the opponent misses. AKA we want Max(Computer Score – Player Score).
Environment: We are playing on a 480×360 playing field. The top and bottom are walls which the ball bounces off, and when the ball hits a side, that player loses. When hitting a wall, the ball's Y vector is reflected across the X axis. When hitting a player's paddle, the paddle's velocity is added to the ball's Y velocity (unless the ball's Y velocity is already at the maximum), and the remainder of the speed becomes the X velocity (going in the reverse direction). The ball always travels at a constant speed. Let's have a look at the environment characteristics. The world is:
Fully Observable: All variables are available to the AI (Scratch hasn't really caught on to information-hiding principles).
Deterministic: The only thing the AI cannot predict is the movement of the Player's paddle; everything else it can figure out (there is no randomness).
Episodic: After each paddle hit the AI can practically forget everything that just happened.
Dynamic: The Paddles and the Ball are all constantly moving.
Continuous: Because positions and velocities can go down to the decimal, the data is continuous. This isn't a big deal, though, because it doesn't actually affect the decisions much (you can just round).
Multiple Agent: There are two agents, the Player (a person) and the Computer (the AI). They are competitive agents, one agent succeeding means the other agent fails.
Actuators: We have two actuators in this game: the up accelerator, which increases velocity until it reaches the maximum, and the down accelerator, which decreases velocity until it reaches the negative maximum.
Sensors: We can sense the X and Y coordinates of both paddles and the ball. We can also sense the Y velocity of both paddles and the X and Y velocity of the ball.
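To make the world definition concrete, the two collision rules can be expressed as a couple of small functions. This is a sketch, not the actual Scratch code, and the speed and velocity-cap constants are my assumptions, not the project's real values:

```python
import math

BALL_SPEED = 10   # assumed constant overall speed of the ball
MAX_Y_VEL = 8     # assumed cap on the ball's Y velocity

def bounce_wall(vx, vy):
    """Top/bottom wall: reflect the ball's Y vector across the X axis."""
    return vx, -vy

def bounce_paddle(vx, vy, paddle_vy):
    """Paddle hit: add the paddle's velocity to the ball's Y velocity
    (clamped to the maximum), then give the remaining speed to the X
    velocity, reversed in direction, so overall speed stays constant."""
    new_vy = max(-MAX_Y_VEL, min(MAX_Y_VEL, vy + paddle_vy))
    new_vx = math.sqrt(BALL_SPEED ** 2 - new_vy ** 2)
    return -math.copysign(new_vx, vx), new_vy
```

Note how a fast-moving paddle "steals" speed from the ball's X component, which is exactly what makes steep returns hard to chase down later.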
The Agents
I developed four different Pong agents, each building on the lessons from the last one. I also updated and refined the world in which they interact. The first agent uses very hacky code, such as bouncing "when touching color red". For a more complex AI to work well, we will need the game to be as deterministic as possible.
The Reflex Agents
Reflex agents are pretty simple: they read in sensor data and react to it. My first agent reads in the position of the ball; if it is above the paddle, it fires the up accelerator, and if it is below, the down accelerator. This is a very naive approach. It works well when the Y velocity of the ball is less than the maximum Y velocity of the paddle, but once it goes over, the paddle can't cope. Furthermore, the paddle can't just change directions; it must first decelerate. This means a ball with a high Y velocity will whiz past the paddle before it can turn around.
We can definitely do better than that. One issue is that the paddle starts following the ball too early and goes too far up or down to get back. So in the second agent, the paddle returns to the centre after hitting the ball, and only follows the ball once it passes a certain point. This doesn't really help it that much, and the steep Y moves still get the better of it. It looks smarter, though.
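Both reflex agents fit in a few lines each. This is a sketch of the decision logic only (the `trigger` distance is an assumed threshold, not the value from the Scratch project):

```python
CENTRE_Y = 0  # Scratch's stage is centred on (0, 0)

def reflex_v1(paddle_y, ball_y):
    """Agent 1: fire whichever accelerator points at the ball."""
    return "up" if ball_y > paddle_y else "down"

def reflex_v2(paddle_x, paddle_y, ball_x, ball_y, trigger=160):
    """Agent 2: hold the centre until the ball comes within `trigger`
    of our X position, then chase it like agent 1."""
    target = ball_y if abs(ball_x - paddle_x) < trigger else CENTRE_Y
    return "up" if target > paddle_y else "down"
```

The weakness is visible in the code: the decision only ever looks at where the ball is *now*, never where it will be.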
The Model Based Agent
So it seems a bit stupid to have the paddle start moving upwards if the ball ends up below it. The thing is, we can figure out where the ball will end up using data from our "sensors". Now the agent can calculate the position the ball will hit, based on its vector, and move into position. This is almost perfect. I had to change the world around to keep the challenge on the agent: now when the ball hits a paddle, its speed increases. This means that eventually the paddle won't be able to get to the ball in time if it is too far away. I also cleaned up quite a bit of the code, relying more on variables and less on hit boxes to make it more consistent.
See if you can beat the agent. It's interesting that the big issue for the first two agents, balls with high Y velocity, is a walk in the park for this agent. What this agent really struggles with are balls with a high X velocity aimed at its corners.
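The prediction itself is just geometry: project the ball's straight-line path to the paddle's X coordinate, then fold the result back into the field once for every wall bounce. A sketch (the 360-unit field height comes from the world definition above; the folding trick is my own formulation, not necessarily how the Scratch version does it):

```python
FIELD_H = 360  # playing field height; Y ranges over +/-180

def predict_y(ball_x, ball_y, vx, vy, paddle_x):
    """Model-based prediction: where will the ball cross paddle_x,
    accounting for any number of wall bounces along the way?"""
    if vx == 0:
        return ball_y
    t = (paddle_x - ball_x) / vx   # time until the ball reaches us
    raw = ball_y + vy * t          # Y position ignoring the walls
    half = FIELD_H / 2
    # Bouncing makes the motion periodic with period 2 * FIELD_H,
    # so fold the unbounded Y back into the field.
    folded = (raw + half) % (2 * FIELD_H)
    if folded > FIELD_H:
        folded = 2 * FIELD_H - folded
    return folded - half
```

With this, a steep ball is no problem at all: the paddle heads straight for the answer instead of chasing the ball's current position.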
The Utility Based Agent
So now the AI is pretty good, but there is one problem. Right now the agent only plays defense, which makes it rely on the Player making a mistake. As the cheesy movie actors say, the best defense is a good offense. In order to allow a good offense, we want to design a utility based agent.
Utility-based agents have a happiness function, which evaluates how happy a particular action will make the agent. The best offensive moves are those that get the ball the furthest away from the other player with the highest X velocity, hopefully making it impossible for the Player to reach the ball in time. Note that actually hitting the ball isn't part of the happiness function: any plan which involves the AI missing the ball is immediately thrown out. Happiness is based on increasing our chances of winning, and if we miss the ball, the chance of winning is zero.
There are only 21 different velocities a paddle can hit the ball at, so to plan a move, we just generate all the resulting states (assuming the ball can be hit), test to see which will make the agent the happiest, and then make a plan for the agent to follow. If the plan reaches a dead end, it finds the next happiest state. If it can't find a state where it reaches the ball, the ball is quite likely too far away and going too fast. Having the paddle sit there without a plan looks a bit weird to a Player; they want to see it try and fail! So if it doesn't find a state, it falls back to a default: moving as fast as possible toward the point where the ball is predicted to land. The agent would think this is futile, but it must appease the silly humans. And who knows, maybe a rounding error could mean it just makes it.
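The planning loop boils down to "sort the candidate outcomes by happiness, commit to the first one we can actually reach". Here is a sketch with hypothetical `happiness` and `reachable` callables standing in for the real project's logic:

```python
def plan_hit(candidates, happiness, reachable):
    """Utility-based planning: try return velocities from happiest to
    least happy, and commit to the first reachable one."""
    for vy in sorted(candidates, key=happiness, reverse=True):
        if reachable(vy):
            return vy
    return None  # no plan: fall back to chasing the predicted landing spot

# Example: 21 possible return velocities; happiness favours steep
# returns, and (purely for illustration) only even velocities are
# reachable in time.
best = plan_hit(range(-10, 11), happiness=abs, reachable=lambda v: v % 2 == 0)
```

Returning `None` is what triggers the "appease the silly humans" default described above.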
This agent is the most polished and challenging by far. I have also cleaned up the code some more, while simultaneously making it super dirty. Give it a go!
So that's the best AI we have so far. If anyone has any comments on improving it (or if you found any bugs), or if you would like me to explain the algorithms used in more detail, leave a comment! When I have time, I plan on making it semi-modular, so I can pit AIs against each other.
If you want to view the code for these projects, check them out on Scratch [1][2][3][4].
Sigh. If only we had this kind of programming integrated in our Artificial Machine Intelligence unit.
Today was the last day of linux.conf.au :(. Arguably some of my favorite talks were today.
This is the last continuation of this post about my first linux.conf.au.
The keynote today was Deploying software updates to ArduSat in orbit by Jonathan Oxer [ Video ]. Basically, this is a project which got its funding from Kickstarter and allows low-cost experiments to run on Arduinos aboard cube satellites in Low Earth Orbit. This really was a "the future is now" type of talk. Never before would you have thought regular people would have a chance to do anything with space other than look at it.
Raspberry Pi Hacks: Building Great Pi Projects by Ruth Suehle and Tom Callaway [ Video ] was a pretty light talk about different projects the speakers and others had built with the Raspberry Pi. It actually convinced me to give the Pi another shot. I bought one when they first came out and used it for a bit as a torrent server, but I found its stability kind of sucked. I now think part of that was down to my power adapter (apparently they have to be 5V 1A regulated) and the fact that it was the original model (apparently things have been fixed in the newer ones). They have also written a book on the Pi, for sale here (the discount code AUTHD gives you 40% off!).
The Rust language: memory, ownership and lifetimes by Nicholas Matsakis [ Video ] was an introduction to the new Rust language being developed by Mozilla. Now, I'm not an expert on Rust, but from the talk my impression is that it is C++ with some extra checks to avoid memory management issues. Basically, every piece of memory has an owner (like a variable or pointer), and the owner can lend the memory to others or transfer ownership. This prevents aliasing and means there doesn't need to be garbage collection, with its reference counting and other overheads. Obviously there are other additions, but this is the one that was mainly emphasised. Rust sounds like an interesting language; I hate memory issues as much as the next guy. However, one issue I had with this talk was that programming languages were evaluated on a zero-sum balancing act between safety and control. C++ gives you lots of control but sucks at safety. Haskell is very safe (you have basically no side effects), but you don't have much control or freedom, which is why it isn't used much. Python is pitched as being somewhere in the middle. Rust, however, is pitched as simultaneously giving lots of control and lots of safety, which I feel was stretching the truth a bit. Rust gives safety through the ownership system, which forbids dangerous things like aliasing and shared memory. Rust offers C++-like freedom by having unsafe blocks, which do allow these things. Clearly, an unsafe block is not safe. So can a language offer high control and high safety at the same time? I'm unconvinced, but I haven't used Rust, so maybe somebody could comment on how it makes their programs safer. It seems to me like Rust mostly just enforces a commenting policy of marking areas of code where memory management is messy. I do believe Rust has the potential, because of these rules, to improve code readability and debugging, but only compared to poorly documented code.
Pettycoin: Losing Tiny Amounts of Bitcoin At Scale! by Rusty Russell [ Video ] – I was looking forward to this talk. I'm a fan of the Bitcoin protocol, so new ideas generally excite me, plus Rusty Russell is super nice to newcomers, which is always great! Rusty also admitted the idea and implementation are in their infancy. Firstly, let me say I like that he is thinking outside the box, and I am so glad there are people out there making things that aren't more alt-coins. Stop making alt-coins, people! Rusty raised some good ideas: sharding the network is pretty smart, as are sending out double-spend alerts and shrinking transaction size. However, it is fundamentally flawed. The reason is gateways. Bitcoin is supposed to be decentralised, and gateways are centralised. It seems like the whole protocol relies on gateways, and because of this I don't see any advantage this would have over sending your bitcoins to a dedicated, centralised micro-payment service (like a PayPal lite or something). I would even argue a dedicated micro-payment service would be more secure, as there is no risk of double spends, only traditional hacking.
Finally, there were lightning talks at the end. Mostly used as an opportunity for people to spruik their open source project or group, these are usually hit and miss. I'm excited that I discovered DLect after mostly giving up on my Blackboard scraper project; it turns out somebody from the University of Queensland has had the same idea but gotten a lot further. Hopefully I can get it working with Curtin. There were also a couple of talks on Bitcoin. Lightning talks are free as in free speech, so they're sometimes not the greatest. Here is my quick guide if you're interested in spreading the word of Bitcoin:
Don’t talk about it like it’s an investment or stock
Don't talk about it like it's going to make money (saving money from fees is OK)
Don't be misleading by referring to thousands of % growth
Don't talk about how it's a "store of value"
Do talk about the lack of fees for transferring
Do talk about the decentralised nature of the protocol
Do talk about the safeguards against hacking (e.g. Proof of Work)
There tend to be two views on Bitcoin: one is that it is a Ponzi scheme, and one that it is the future of money transfer. I think it's both. There is no doubt that a lot of 'early adopters' will become very rich (or have done so already) if Bitcoin becomes mainstream, and many of them have become mouthpieces for Bitcoin. But I think this does not detract from the theoretical qualities of Bitcoin, and the protocol will still exist even when all the scheming has died down and the Ponzi schemers have lost all their money.
So after that slightly divergent rant, LCA2014 is now at a close. LCA2015 was announced to be held in Auckland, so maybe I'll see everybody there next year! I've had a great time, and I recommend that any other student interested in Linux take advantage of the cheap student entry.
It has now been 4 days since the conference began (I'm writing this in retrospect because I've been lazy). It's only the second day of the "real" conference, where all the long, meat-and-bones talks are held.
This is a continuation of this post about my first linux.conf.au.
First up, we had Matthew Garrett deliver a great keynote speech, as expected. I'd watched his talk the day before and I definitely wasn't going to miss this. Once again, he totally nailed it. In this talk he explored the controversial UEFI Secure Boot. You see, Secure Boot is a good thing, because it stops a computer from booting an image that has been compromised by a virus. The problem the free software community has with it is that there aren't provisions for intentionally changing the boot image (for instance, when installing Linux). A comparable scenario in a different field is Android phones, which (sometimes) allow you to change the boot image, but only if you turn off boot verification completely, thus making a modified OS more vulnerable to attacks. Ideally, there would be a way to self-sign packages, allowing them to boot without turning off Secure Boot. Here's looking at you, hardware manufacturers. Watch the video here.
Programming Diversity by Ashe Dryden [ Video ] was not just about diversity with regards to women, but other underrepresented people in the programming world too. This presentation was really well done: the slides were great and everything was referenced. The following is my favorite quote from the talk. It really resonated with me because I've had this argument with people in the tech field way too many times, which is rather depressing, so whenever I see someone championing diversity in the field I get a glimmer of hope.
Then you might be thinking well "maybe women just aren't biologically predisposed to programming". You know maybe there's just something in their brains that just makes them not good at programming, not good at writing algorithms or figuring this stuff out. There's got to be, there has to be some kind of science to explain why we have this problem. But we actually do have science… in the exact opposite direction. There is no physical or biological difference that shows that anybody, regardless of their race or gender, is better at being a programmer. There is none. And a lot of the things that we talk about when we talk about things like evolutionary biology, which is what this question kind of hinges on, is the fact that you assume that there is some kind of advantage that you get as a person for being a programmer. As if you can outrun a cheetah because you are a good programmer.
From Kookaburra to the Cloud: where to now for copyright in Australia by Ben Powell [ Video ] was a very informative talk on the efforts to introduce US-style fair use to replace the current fair dealing. In summary (although I do recommend you watch the talk), fair dealing lists certain situations in which copyrighted material can be used without getting permission from the copyright holder, while fair use lists certain (usually four) factors which are weighed up against each other to decide whether or not a use of unlicensed copyrighted material was 'fair'. The difference is that fair use is dealt with on a case-by-case basis, which tends to keep up with the times better than fair dealing. For instance, one ludicrous example is that we are allowed to copy video from VHS to other devices, but not from DVDs. This kind of copyright law could allow court proceedings to reflect common sense instead of shoehorning outdated laws into current situations.
Below The Line: Fixing The Voting Process With Technology by Benno Rice [ Video ] followed Benno's story of developing a web app to combat the somewhat dirty tricks that candidates play in Senate elections, by allowing people to pre-plan their below-the-line vote. I commend Benno for keeping his apolitical hat on throughout the entire process, thus strengthening democracy rather than a particular party. These kinds of talks frustrate me because, while Benno has made a difference, I wish he didn't have to. Clearly, if somebody needs to make an unofficial web app in order to make our elections fair, something is wrong and needs to be changed, stat.
Disaster Recovery Lessons I Hoped I'd Never Have to Learn by Bdale Garbee [ Video ] made me, and I think everybody else who saw it, immediately evaluate our current backup procedures and almost certainly conclude they need to be better. Seriously, watch it.
Finally, I ended up at Introduction to Go by Mark Smith [ Video ], an introduction to Google's recently developed programming language Go. Having never used Go, I'm pretty amazed that in the tutorial we made a multi-threaded chat program in about 45 lines. Go is a powerful language, and as someone who hates lock/mutex management, I think I'm going to try to learn some more. Another thing I like about Go is that the executables are completely self-contained. Not needing an interpreter or JVM is a real bonus when you might be distributing to someone who doesn't have them installed.
Wow. I now realise the last two days were just a warmup. The conference really came into its own today. With talks going about twice as long as the previous two days, we really got some in-depth information from some top-notch speakers.
This is a continuation of this post about my first Linux.conf.au.
It started off with lightning talks in lieu of a keynote speaker. I was a bit disappointed that there weren't more of them (they finished well before schedule), but the ones that were there were very well delivered, on topics ranging from Raising Geek Girls to password-storage best practices.
Next up, I attended the Python 3: Making the Leap! tutorial by Tim Leslie. This was a great tutorial where Tim went through the nuances of the differences between the two versions of Python, and how the -3 flag and the 2to3 tool can be used to convert a 2.x script to Python 3. I found this particularly helpful as someone who recently started having a go at Python with my (disclaimer: linking to horrible, horrible code) Blackboard scraper. For some reason, despite all my dependencies working with Python 3, I wrote it in 2.7. Learning to write pretty much deprecated code was probably not the most intelligent thing to do, but at least now I have a good explanation of what is different when writing code for 3 in the future.
Next up was Building Effective Alliances around the Trans-Pacific Partnership Agreement by Sky Croeser [ Video ]. The great thing about this talk is that it went beyond saying why the TPPA is bad, which is all we ever seem to hear from tech news sites, and actually delved into strategies for how to effectively combat it. The strategies were diverse, with no stone left unturned, from writing letters to politicians to 1999 Seattle WTO protest tactics – fun for the whole family! I thought the way she approached the topic was terrific. As a person who is not in the slightest bit interested in radical activism, but who has a strong interest in political issues such as this, it was nice to be acknowledged as somebody who can still contribute. If you're at all worried about the TPPA, I would strongly recommend watching the talk when it's uploaded on Monday.
Bringing More Women to Free and Open Source Software by Karen Sandler [ Video ] was a really eye-opening talk on the strategies used at the GNOME Foundation to bring their proportion of female contributors up to par with the rest of the computing sector (3% compared to 18%, still a woefully small number). The lack of women in computing is quite a serious problem which I think is overlooked (and sometimes even perpetuated) by many computer scientists and programmers, and perhaps society in general. I'm fairly certain (and very hopeful) this is a changing statistic; I've met many people who are committed to the issue.
Opening up government data by Pia Waugh [ Video ] continued from Monday's open government miniconf about making government datasets freely available to the public. As I mentioned on Monday, Pia Waugh is definitely someone who is "on our side" in the government sector when it comes to open data, and is in charge of the great data.gov.au website, where you can request and vote for certain datasets to be released. She talked about issues related to open government data, including ways to automate data collection and uploading (which would always provide up-to-date data) and issues with the format in which data is provided. I love playing with datasets, so this talk was very interesting to me. I would recommend giving it a watch when it's online.
Reverse engineering vendor firmware drivers for little fun and no profit by Matthew Garrett [ Video ]* would have to be the best talk of the day (and possibly the week). In this talk we follow the protagonist, Garrett, as he embarks on a journey to reverse engineer a vendor tool for modifying servers, which doesn't quite go as expected. I won't go any further, because you have to watch it to really appreciate it. The thing I really like about Garrett's talks is not only the sprinkles of humor placed throughout, but the fact that whenever he says anything that isn't basic computer/Linux knowledge, he stops and gives a short explanation of what it is. One mistake I feel many speakers make is assuming every attendee is as knowledgeable as they are in the domain they are speaking about, when in reality there are people of all different skill levels. By not stopping to explain things, you run the risk of having half your audience feel stupid and switch off. As a student, these kinds of explanations are much appreciated and help me achieve my goal of learning more about the kernel.
*For those with a keen eye, you might notice that the Garrett talk and the Open Data talk overlapped. I watched one on video after the end of today's conference.
That was all for today. I'm very much looking forward to tomorrow, with a keynote by Matthew Garrett, who will hopefully be beating his own record of "Best Talk so Far". If you can't make it but want to watch, tune in at http://timvideos.us/octagon at 9AM tomorrow.
Continuing from this post here, I'm going to keep blogging my experiences as a newcomer at linux.conf.au, held this year in Perth at UWA.
I actually managed to make the keynote this day, which was Kate Chapman's talk on OpenStreetMap [ Video ]. The cool thing about linux.conf.au is that everything is being livestreamed at timvideos.us, so when I was a little late, I could still stream the audio in the car. Sweet! The keynote was really warm and fuzzy, about open source doing great things around the world using OpenStreetMap. It was a very accessible talk and has even inspired me to eventually try to contribute to OSM in some way, which is what a talk should do!
I spent most of the day at the Kernel UnConference. I know nothing about the Linux kernel; my general kernel knowledge spans only what was taught in the introductory university course. However, I, like a bunch of other students attending from Curtin, want to know more about what drives our favorite OSes. Most of what was said went over my head, and at times it seemed like the participants were speaking another language, but I still think I managed to get something out of it. I think the idea of UnConferences is really great and definitely improves audience engagement.
During lunch a bunch of us decided to go to Cafe Biscotti for lunch. This is relevant because it’s the first cafe in Perth to accept Bitcoin. Ironically, it’s a tiny cafe situated on the ground floor of Bankwest, which is part of the largest banking group (Commonwealth Bank) in Australia. Props to them for being the trendsetter in Perth (and for giving a 5% discount).
Returning to the conference, I took a break from kernel hackerspeak to see A Pseudo-Random Talk on Entropy by Jim Cheetham. As an introduction to the history of and current techniques for random number generation, this talk was pretty good; however, it was a bit short (at only ten minutes) and I would have loved a more in-depth look. Such is life with the miniconfs, though.
Finally, returning to the Kernel UnConference after afternoon tea, I got to see the highlight of my day as a panel of kernel developers, including Linus Torvalds, answered questions from the audience. It's really cool to have one of your idols talking nonchalantly a few meters away from you, and it's something I'll definitely remember this conference by.
So today was the first day of my first Linux (or any) conference, linux.conf.au. Apparently blogging about these things is a thing (well at least according to Delan and Kye) so here we go.
Registrations were yesterday, as was the newcomer seminar about the conference by Rusty Russell (I hope this is his actual name), who is a pretty cool guy and even bought all the newcomers a beer. We also got a whole bunch of merch, which sadly didn't include deodorant (see: Sysadmin miniconf) but was pretty decent for the $99 student ticket. We were also warned that the miniconfs over the first two days are independently run and could be a bit 'rough', as you'll soon discover.
The first miniconf I attended was the Sysadmin miniconf, where everyone was packed into the Woolnough lecture theatre at UWA, one of those rooms where they try to cram in as many seats as possible. This, combined with the hot Perth weather and a whole bunch of Linux enthusiasts, did not make for a pleasant atmosphere, to the point where it was almost uncomfortable. To be honest, sysadmin-ing bores me to tears, and most of the talks either flew straight over my head or the speaker fumbled through slides, sometimes both. I feel like today's miniconfs were slim pickings, and I'm much more excited for tomorrow's Open Programming miniconf. One gem in the rough for me was 'Quality of Service', a common misconception (10 mins) by Julien Goodwin [ Video ], one of the talks which I felt was accessible to the whole audience, and the speaker actually knew his stuff. As someone who had tried to mess with QoS before studying CS, this ten-minute talk made me understand it in a way that I previously hadn't.
After getting sick of the Sysadmin miniconf, I went to see the Open Government lightning talks, and I wish I had gone there sooner. It was much less congested, which really made it feel more like a dialogue than the other talks. It was headed by Pia Waugh, who seemed really passionate and knowledgeable about the subject (which you would hope she would be, as the person in charge of data.gov.au, a site I had never heard of before today and which is amazing).
Lastly, I went to a Crypto BoF (Birds of a Feather, a concept which is new to me and which I gather is a sort of informal discussion group). I've been a bit of a crypto enthusiast (oh hey ASIO, I didn't see you there) since doing some study on it at Curtin. One of the interesting topics which came up was that traditional security is at odds with usability, and that increasing usability may be a good thing regardless, because if enough people use PGP, it may stop the people who use PGP from going on a list.
So that was linux.conf.au so far. It's going to be an interesting week, and I'm pumped for the proper talks to begin Wednesday to Friday. Hopefully I'll be up in time for tomorrow's keynote.
I got robbed recently in the middle of the night; they stole a phone and my keys. Obviously it was difficult to sleep that night knowing someone out there had access to my house, so, remembering the phone automatically connected to my router when it was in range, I wrote this bash script to play The Final Countdown when it reconnected. It assumes you have VLC media player installed.
#!/bin/bash

PINGVAL=""
# PINGVAL will be null except when the phone pings back
while [ -z "$PINGVAL" ]; do
    echo "pinging"
    # change 10.1.1.5 to the local IP address of the phone;
    # ping only outputs "ttl" when the ping is successful
    PINGVAL=$(ping 10.1.1.5 -c 1 | grep ttl)
done

# dah dah DAH DAH, dah dah DAH DAH DAH, dah dah DAH DAH...
cvlc Europe.mp3 --start-time=13 --volume 1024
I was procrastinating studying for my exams about a month ago, so I started playing Machinarium on the PlayStation. It's a cool little point-and-click adventure where you play a robot who needs to beat some other robots and get his robot girlfriend back. For those who don't play adventure games, they tend to have some really, really frustrating puzzles. Machinarium was no exception. One I got particularly stuck on was in the greenhouse. It involves a 5×5 grid which you have to completely light up. The catch is that you pick a starting square, which lights up, and after that you can only pick a direction. The squares then light up in that direction until they hit a lit square or the edge of the grid. If you are surrounded by lit squares/edges and the grid isn't fully lit, you have to start again. Here is a video of someone playing it:
Well, I would rather have died playing this game than look up a walk-through (the worst possible thing one can do while playing adventure games; such a slippery slope), so I decided screw it, it looks like it could fairly easily be implemented as a recursive algorithm. So I spent the next while doing exactly that. At least it was good exam revision, I guess; I did it in C because that is what my exam was in. If you have thought of a better, more efficient way to solve this puzzle (which I'm sure there is, considering I've only been studying CS for a year), please tell me! I haven't studied path-solving algorithms yet, but I plan to. The program can also solve grids bigger than 5×5, but I haven't really tested that; I would be interested to see how far this shoddy brute-force attempt will actually go, since it gets exponentially more difficult. The code outputs the instructions back to front, and at the bottom prints the square you need to start at.
/**
* mach.h
*/
#define GRIDSIZE 5
void solveStart(int **array);
int gridFull(int **array);
int** solve(int **array, int position);
void freeA(int **a);
void aCopy(int **a1, int **a2);
int** iterate(int** array, int xDir, int yDir, int pos, char* direction);
/**
* mach.c
**/
#include <stdlib.h>
#include <stdio.h>
#include "mach.h"
int main()
{
    /*declarations*/
    int **array;
    int ii, jj, input;
    /*allocate starting array*/
    array = (int**)calloc(GRIDSIZE, sizeof(int*));
    for (ii = 0; ii < GRIDSIZE; ii++)
    {
        array[ii] = (int*)calloc(GRIDSIZE, sizeof(int));
    }
    /*input the dead squares (grid is labelled starting at 0, counting across horizontally)*/
    input = 0;
    while (input != 1234)
    {
        /*input the coordinate*/
        printf("\n input dead squares or enter 1234 to start solving: ");
        scanf("%4d", &input);
        /*if already dead, revive it, otherwise make it dead*/
        if ((input < GRIDSIZE*GRIDSIZE) && (input >= 0))
        {
            if (array[input/GRIDSIZE][input%GRIDSIZE])
                array[input/GRIDSIZE][input%GRIDSIZE] = 0;
            else
                array[input/GRIDSIZE][input%GRIDSIZE] = 1;
        }
        /*if out of bounds and not the exit key, display an error*/
        else if (input != 1234)
        {
            printf("\n value entered is out of bounds, check your value or change the #define GRIDSIZE value\n");
        }
        /*print the grid as it stands*/
        printf("\nCurrent grid (X for dead squares, O for live)\n");
        for (ii = 0; ii < GRIDSIZE; ii++)
        {
            for (jj = 0; jj < GRIDSIZE; jj++)
            {
                if (array[ii][jj] == 0)
                    printf("O");
                else
                    printf("X");
            }
            printf("\n");
        }
    }
    /*start the algorithm*/
    solveStart(array);
    /*clean up*/
    freeA(array);
    return 0;
}
void solveStart(int **array)
{
    /*declarations*/
    int ii, exit;
    int **successArray, **tempArr;
    /*allocate array*/
    tempArr = (int**)calloc(GRIDSIZE, sizeof(int*));
    for (ii = 0; ii < GRIDSIZE; ii++)
    {
        tempArr[ii] = (int*)calloc(GRIDSIZE, sizeof(int));
    }
    /*initialisations*/
    exit = 0;
    ii = 0;
    successArray = NULL;
    /*main loop*/
    while (!exit)
    {
        /*can't start on dead squares*/
        while ((ii < GRIDSIZE*GRIDSIZE) && array[ii/GRIDSIZE][ii%GRIDSIZE] == 1)
            ii++;
        if (ii < GRIDSIZE*GRIDSIZE)
        {
            /*make a copy of the starting array to attempt to solve starting at location ii*/
            aCopy(array, tempArr);
            /*turn the start square dead*/
            tempArr[ii/GRIDSIZE][ii%GRIDSIZE] = 1;
            /*start the algorithm*/
            successArray = solve(tempArr, ii);
            /*print the starting square if the puzzle can be solved by starting on it*/
            if ((exit = gridFull(successArray)))
                printf("%d\n", ii);
            /*go to the next square*/
            ii++;
        }
        else
        {
            /*exit if you've run out of spots to try*/
            exit = 1;
            printf("\n Grid not solvable\n");
        }
    }
    /*clean up (successArray is NULL if solve was never called)*/
    if (successArray != NULL)
        freeA(successArray);
}
int** iterate(int** array, int xDir, int yDir, int pos, char* direction)
{
    /*declarations*/
    int newPos, ii;
    int **returnVal, **tempArr;
    /*allocate a temporary array to try the next iterations*/
    tempArr = (int**)calloc(GRIDSIZE, sizeof(int*));
    for (ii = 0; ii < GRIDSIZE; ii++)
    {
        tempArr[ii] = (int*)calloc(GRIDSIZE, sizeof(int));
    }
    returnVal = NULL;
    /*copy the imported array into the temp array*/
    aCopy(array, tempArr);
    /*next square we're hoping to conquer*/
    newPos = pos + yDir*GRIDSIZE + xDir;
    /*check that it is in bounds, that it is alive, and that horizontal moves don't change column by more than one*/
    if ( (newPos < GRIDSIZE*GRIDSIZE) && (newPos >= 0)
        && (tempArr[newPos/GRIDSIZE][newPos%GRIDSIZE] == 0)
        && ( (xDir == 0) || (pos%GRIDSIZE - newPos%GRIDSIZE == 1) || (pos%GRIDSIZE - newPos%GRIDSIZE == -1) ) )
    {
        do
        {
            /*make the square dead and set the next square we hope to conquer*/
            tempArr[newPos/GRIDSIZE][newPos%GRIDSIZE] = 1;
            pos = newPos;
            newPos = newPos + yDir*GRIDSIZE + xDir;
        }
        /*keep going while the next square is in bounds and alive; otherwise exit*/
        while ( (newPos < GRIDSIZE*GRIDSIZE) && (newPos >= 0)
            && (tempArr[newPos/GRIDSIZE][newPos%GRIDSIZE] == 0)
            && ( (xDir == 0) || (pos%GRIDSIZE - newPos%GRIDSIZE == 1) || (pos%GRIDSIZE - newPos%GRIDSIZE == -1) ) );
        /*find the next direction*/
        tempArr = solve(tempArr, pos);
        /*you should only get here if the grid is filled or if you've exhausted all possibilities for this node (and its children)*/
        if (gridFull(tempArr))
        {
            /*if you're here, the problem is solved and the recursion is unwinding*/
            printf("%s\n", direction);
            freeA(array);
            returnVal = tempArr;
        }
        else
            freeA(tempArr);
    }
    else
        freeA(tempArr);
    /*NULL if unsolved, otherwise the solved array*/
    return returnVal;
}
int** solve(int **array, int position)
{
    int **tempArr;
    /*base case: the grid is full*/
    if (gridFull(array))
        tempArr = array;
    else
    {
        /*try going in each possible direction until one leads to a solution*/
        tempArr = iterate(array, 0, -1, position, "up");
        if (tempArr == NULL)
            tempArr = iterate(array, 0, 1, position, "down");
        if (tempArr == NULL)
            tempArr = iterate(array, 1, 0, position, "right");
        if (tempArr == NULL)
            tempArr = iterate(array, -1, 0, position, "left");
        /*no direction worked: hand the (unsolved) array back*/
        if (tempArr == NULL)
            tempArr = array;
    }
    return tempArr;
}
/*helper function: set the values in a2 to the values in a1*/
void aCopy(int **a1, int **a2)
{
    int ii, jj;
    for (ii = 0; ii < GRIDSIZE; ii++)
    {
        for (jj = 0; jj < GRIDSIZE; jj++)
            a2[ii][jj] = a1[ii][jj];
    }
}
/*helper function to check if the grid is full (therefore complete); evil multi-return to speed up false cases*/
int gridFull(int **array)
{
    int ii, jj;
    for (ii = 0; ii < GRIDSIZE; ii++)
    {
        for (jj = 0; jj < GRIDSIZE; jj++)
        {
            if (array[ii][jj] == 0)
                return 0;
        }
    }
    return 1;
}
/*helper function to free a 2D malloc'd array of size [GRIDSIZE][GRIDSIZE]*/
void freeA(int **a)
{
    int ii;
    for (ii = 0; ii < GRIDSIZE; ii++)
        free(a[ii]);
    free(a);
}
While making this little program, I wondered how many different solvable grids there were, so I decided to find out. The number of possible grids (though not necessarily solvable ones) is N = 2^(G²), where G is the height (and width) of the grid. I used the little piece of code below, together with a slightly modified version of the solver above, shown here (I fixed up some memory errors too), piping the test program into the mach program to feed it every possible grid for the specified grid size; the program then tells me how many are solvable and saves those to a file.
/**
* test.c
**/
#include <stdlib.h>
#include <stdio.h>
#include "mach.h"
/*treat the array as a binary number and add one; returns 0 on overflow (all grids done)*/
int plusOne(int *prev)
{
    int ii;
    ii = GRIDSIZE*GRIDSIZE - 1;
    while (prev[ii] == 1)
    {
        if (ii == 0)
            return 0;
        prev[ii] = 0;
        ii--;
    }
    prev[ii] = 1;
    return 1;
}
int main()
{
    int ii;
    int* array;
    array = (int*)calloc(GRIDSIZE*GRIDSIZE, sizeof(int));
    do
    {
        /*print the dead squares of this grid, then the 1234 start key*/
        for (ii = 0; ii < GRIDSIZE*GRIDSIZE; ii++)
        {
            if (array[ii] == 1)
                printf("%d\r", ii);
        }
        printf("1234");
    }
    while (plusOne(array));
    free(array);
    return 0;
}
It turns out that for a 2×2 grid there are 16 possible grids, but only 13 are solvable (I originally counted 14; see the update below).
For 3×3, there are 512 possible but only 171 are solvable.
For 4×4, there are 65,536 possible but only 3,503 are solvable.
For 5×5, there are 33,554,432 possible; after a couple of hours of computing, 118,481 turned out to be solvable.
6×6, with 68,719,476,736 possibilities, would probably take more than a year on my PC, so that's where this algorithm's usefulness wears out. I wonder how far it could be optimized.
Here are links to the outputted logs of every possible solvable grid:
Update: Looking through the 2×2 log, I noticed that a grid that looks like this was in there:
XO
OX
when it is clearly not solvable. I think the problem is with how I detected the edge of the grid: I checked whether the difference between the next and the current column (position mod grid size) was one, which is meant to reject moves that wrap from the end of one row onto the next. On a 2×2 grid a wrapping move also gives a column difference of one, so the check lets it through; the algorithm is definitely not suited to 2×2 grids.