The Mystery Box Solver

Uther Party is a custom mini-game map for Warcraft 3. Our local university LAN club has a weird obsession with it. Basically, there is a box with a number on it, and the game is to not be holding the box when it lands on 0. You pass the box to the person next to you by pressing either Q, which decrements it by 1, or W, which decrements it by 2. If the box is on 1 and you decrement it by 2, it explodes before you pass it.


The game is pretty frustrating, so I decided to make a script to tell me the optimal choice to make. The script works by generating a minimax tree of all possible choices. Each leaf has an array the size of the number of players, with a 1 in a player's field if that player doesn't die in that playthrough, or a 0 if they do. The script then works its way up the tree, with each level corresponding to a different player's turn, and that player picking the choice which maximises their own index in the array. If the two choices give the current player the same chance of death, it takes the average of both children's chances.
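
For the curious, here's a minimal Python sketch of the idea (not the actual script – the player count, function names and tie-breaking details are just for illustration):

    from functools import lru_cache

    N_PLAYERS = 4  # illustrative; the real script takes this as input


    @lru_cache(maxsize=None)
    def solve(count, holder):
        """Survival chance per player when `holder` has a box showing `count`,
        assuming everyone plays to maximise their own chance of surviving."""
        def outcome(press):
            remaining = count - press
            if remaining <= 0:
                # The box lands on 0 (or explodes) while we're still holding it.
                return tuple(0.0 if i == holder else 1.0 for i in range(N_PLAYERS))
            return solve(remaining, (holder + 1) % N_PLAYERS)

        q, w = outcome(1), outcome(2)      # Q decrements by 1, W by 2
        if q[holder] > w[holder]:
            return q
        if w[holder] > q[holder]:
            return w
        # Both choices are equally (un)safe for us: average the children.
        return tuple((a + b) / 2 for a, b in zip(q, w))


    def best_press(count, holder=0):
        """Which key the current holder should press."""
        scores = {}
        for key, press in (("Q", 1), ("W", 2)):
            remaining = count - press
            if remaining <= 0:
                scores[key] = 0.0
            else:
                scores[key] = solve(remaining, (holder + 1) % N_PLAYERS)[holder]
        return max(scores, key=scores.get)


    print(best_press(13))  # e.g. tells player 0 which key to press on a box of 13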

The solver works pretty well; however, it assumes that the players aren't colluding and are only looking to maximise their own score, which is a fair assumption but may not be the case 100% of the time.


You can download the solver here (it has prompt, command and GUI modes). It only requires Python, but it can optionally use ete2 to print out a graph of the tree if you are really sadistic.

If you want to try it out, I’ve also made a module for the ComSSA IRC bot to facilitate a text based game of mystery box.

Scanning barcodes and QR Codes on Android with a broken back camera

So for reasons I won’t get into, I have a faulty main camera on my Samsung Galaxy S III.

It turns out Snapchat still allows me to take pictures and save them without crashing, because it has the front camera on by default. But I found myself needing a barcode scanner or QR code scanner for 2-factor authentication.


Most barcode scanner apps, which automatically boot into scanning mode, will just crash straight away. I spent quite a while trying poorly written barcode scanners (even the one Google Authenticator suggests) which just crash if the back camera is broken.

Eventually I found a solution. QuickMark Barcode Scanner doesn't crash right away and allows you to switch to your front camera. It has a pretty decent UI, and they also have a pretty good Windows app for scanning barcodes on your screen. Good job by the developers for handling erroneous situations gracefully.

Exploring Different Styles of Pong Agent AI


Pong is a classic game; you could even say it is the classic game. I have recently been looking for things to help me learn Scratch, which is used to teach kids to code, as well as things to help me procrastinate exam study. So I decided to see if I could make the best damn Pong AI in Scratch that I could. This isn't exactly simple: Scratch doesn't really have "functions" like a regular programming language, so things get messy really quickly. As Wikipedia says:

There is a strong contrast between the powerful multimedia functions and multi-threaded programming style and the rather limited scope of the Scratch programming language.

The Definition

So before we start, let’s define the world and its parameters. I have just come fresh out of an AI unit which did not have nearly enough coding, so I am itching to actually apply my knowledge. We will start off with a PEAS definition:

  • Performance: We want to minimize the number of balls which we miss, and maximize the number of balls which the opponent misses. AKA we want Max(Computer Score – Player Score).
  • Environment: We are playing in a 480×360 playing field. The top and bottom are walls which the ball bounces off, and when the ball hits one of the sides, that side's player loses. When hitting a wall, the ball's Y vector is reflected across the X axis. When hitting a player's paddle, the paddle's velocity is added to the ball's Y velocity (capped at the ball's maximum Y velocity), and the remaining speed becomes the X velocity (going in the reverse direction). The ball always travels at a constant overall speed (a sketch of this bounce logic appears after the list). Let's have a look at the environment characteristics. The world is:
    • Fully Observable: All variables are available to the AI (Scratch hasn't really caught on to information hiding principles)
    • Deterministic: The only thing the AI cannot predict is the movement of the Player's paddle; everything else it can figure out (there is no randomness)
    • Episodic: After each paddle hit the AI can practically forget everything that just happened.
    • Dynamic: The Paddles and the Ball are all constantly moving.
    • Continuous: Because positions and velocities can go down to decimal values, the data is continuous. This isn't a big deal though, because it doesn't actually affect the decisions much (you can just round).
    • Multiple Agent: There are two agents, the Player (a person) and the Computer (the AI). They are competitive agents, one agent succeeding means the other agent fails.
  • Actuators: We have two actuators in this game: the up accelerator, which, when on, increases velocity until it reaches the maximum, and the down accelerator, which decreases velocity until it reaches the negative maximum.
  • Sensors: We can sense the X and Y coordinates of both paddles and the ball. We can also sense the Y velocity of both paddles and the X and Y velocity of the ball.
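
To make the world definition concrete, here is a rough Python sketch of the two bounce rules described above (the speed and velocity cap values are made up for illustration):

    import math

    SPEED = 10.0      # the ball's constant overall speed (illustrative value)
    MAX_Y_VEL = 8.0   # cap on the ball's Y velocity (illustrative value)


    def bounce_off_wall(vx, vy):
        """Top/bottom walls reflect the Y vector across the X axis."""
        return vx, -vy


    def bounce_off_paddle(vx, vy, paddle_vy):
        """Add the paddle's velocity to the ball's Y velocity (capped at the
        maximum), then give whatever speed is left over to X, reversed."""
        new_vy = max(-MAX_Y_VEL, min(MAX_Y_VEL, vy + paddle_vy))
        leftover = math.sqrt(max(SPEED ** 2 - new_vy ** 2, 0.0))
        new_vx = -math.copysign(leftover, vx)
        return new_vx, new_vy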

The Agents

I developed four different pong agents, each building on the lessons from the last one. I also updated and refined the world in which they interact. The first agent uses very hacky code such as bounce “when touching color red”. For a more complex AI to work well, we will need the game to be as deterministic as possible.

The Reflex Agents

Reflex agents are pretty simple. They read in sensor data and react to it. In our case, my first agent reads in the position of the ball: if it is above the paddle, it fires the up accelerator, and if it is below, it fires the down accelerator. This is a very naive approach. It works well when the Y velocity of the ball is less than the maximum Y velocity of the paddle, but once it goes over, the paddle can't cope. Furthermore, the paddle can't just change direction; it must first decelerate. This means a ball with a high Y velocity will whiz past the paddle before it can turn around.

We can definitely do better than that. One issue is that the paddle starts following the ball too early and goes too far up or down to get back. So in the second agent, the paddle returns to the centre after hitting the ball and only starts following the ball once it passes a certain point. This doesn't really help it that much, and the steep Y moves still get the better of it. It looks smarter though.
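
In Python, the two reflex agents boil down to something like this (the centre position and trigger line are illustrative values, not the ones in the Scratch project):

    def reflex_agent_1(paddle_y, ball_y):
        """First reflex agent: always chase the ball's Y position."""
        return "up" if ball_y > paddle_y else "down"


    def reflex_agent_2(paddle_y, ball_x, ball_y, trigger_x=60):
        """Second reflex agent: hold the centre until the ball crosses a
        trigger line on our side, then chase it like the first agent."""
        if ball_x < trigger_x:                        # ball still far away
            return "up" if paddle_y < 0 else "down"   # drift back to the centre
        return "up" if ball_y > paddle_y else "down"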

The Model Based Agent

So it seems a bit stupid to have the paddle start moving upwards if the ball ends up below it. The thing is, we can figure out where the ball ends up using data gained from our "sensors". Now the agent can just calculate the position where the ball will hit, based on its vector, and move into position. This is almost perfect. I had to change the world around to keep the challenge on the agent: now, when the ball hits a paddle, its speed increases. This means that eventually the paddle won't be able to get to the ball in time if it is too far away. I also cleaned up quite a bit of the code, relying more on variables and less on hit boxes to make it more consistent.
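
The core of the model-based agent is projecting the ball forward and folding its path back at the walls. Here's a sketch of that calculation (the field is 480×360 centred on (0, 0), as in Scratch; the paddle X position is an assumed value):

    def predict_impact_y(ball_x, ball_y, vx, vy,
                         paddle_x=220.0, top=180.0, bottom=-180.0):
        """Where will the ball cross the paddle's X position?"""
        if vx == 0:
            return ball_y
        frames = (paddle_x - ball_x) / vx      # time until the ball arrives
        raw_y = ball_y + vy * frames           # landing spot ignoring the walls
        # Fold the path back at the walls: the motion repeats every 2 * height.
        height = top - bottom
        y = (raw_y - bottom) % (2 * height)
        if y > height:
            y = 2 * height - y
        return y + bottom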

See if you can beat the agent. It's interesting that the big issue for the first two agents, balls with a high Y velocity, is a walk in the park for this agent. What this agent really struggles with is balls with a high X velocity aimed at its corners.

The Utility Based Agent

So now the AI is pretty good, but there is one problem. Right now the agent only plays defense, which makes it rely on the Player making a mistake. As the cheesy movie actors say, the best defense is a good offense. In order to allow a good offense, we want to design a utility based agent.

Utility based agents have a happiness function, which evaluates how happy a particular action will make the agent. The best offensive moves are those that send the ball as far away from the other player as possible, with the highest X velocity, hopefully making it impossible for the Player to reach the ball in time. Note that actually hitting the ball isn't part of the happiness function – any plan which involves the AI missing the ball is immediately thrown out. Happiness is based on increasing our chances of winning, and if we miss the ball the chance of winning is zero.

There are only 21 different velocities a paddle can hit the ball at, so to plan a move, we just generate all the states (assuming the ball can be hit), test to see which will make the agent the happiest, and then make a plan for the agent to follow. If the plan reaches a dead end, it finds the next happiest state. If it can't find a state where it reaches the ball, it is quite likely the ball is too far away and going too fast. Having the paddle sit there without a plan looks a bit weird to a Player; they want to see it try and fail! So if it doesn't find a state, it goes to the default state, which is moving as fast as possible toward the point where the ball is predicted to land. The agent would think this is futile, but it must appease the silly humans. And who knows, maybe a rounding error could mean it actually just makes it.
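
Roughly, the planning step looks like this, reusing the bounce_off_paddle and predict_impact_y helpers sketched earlier (the happiness weighting and the -10..10 paddle velocity range, which gives the 21 options, are illustrative assumptions):

    def happiness(out_vx, land_y, opponent_y):
        """Happier the further the return lands from the opponent and the
        faster it travels in X."""
        return abs(land_y - opponent_y) * abs(out_vx)


    def plan_return(impact_y, opponent_y):
        """Rank all 21 possible hit velocities, best plan first. States the
        paddle couldn't actually reach in time would be filtered out before
        this step; if none are left, fall back to the 'look busy' default."""
        plans = []
        for paddle_vy in range(-10, 11):                    # the 21 options
            out_vx, out_vy = bounce_off_paddle(SPEED, 0.0, paddle_vy)
            land_y = predict_impact_y(220.0, impact_y, out_vx, out_vy,
                                      paddle_x=-220.0)
            plans.append((happiness(out_vx, land_y, opponent_y), paddle_vy))
        plans.sort(reverse=True)
        return plans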

This agent is the most polished and challenging by far. I have also cleaned up the code some more, while simultaneously making it super dirty. Give it a go!

So that’s the best AI we have so far. If anyone has any comments on improving it (or if you found any bugs), or if you would like me to explain in more detail the algorithms used, leave a comment! When I have time, I plan on making it semi modular, so I can pit AIs against each other.

If you want to view the code for these projects, check them out on Scratch [1] [2] [3] [4].

Sigh. If only we had this kind of programming integrated in our Artificial Machine Intelligence unit.

We Are Doing Computing Assessment Wrong.

Understanding is not cued knowledge: performance is never the sum of drills; problems are not exercises; mastery is not achieved by the unthinking application of algorithms. – Wiggins, 1993

Now when I say We, I really mean Curtin University. I'm going to assume other universities teach in a similar way. To start off, let us have a look at the breakdown of some second and third year Computer Science (did I say Computer Science? Surely I mean Bachelor of Science (Science) majoring in Computing!) units at Curtin.

In this table, Tests and Exams are things where you have to sit for a certain amount of time with nothing but pen and paper and complete questions. Programming and Written Assessments include things like projects, programming tasks and reports; usually these things are mixed together as an 'Assignment'.

Unit      No. Tests   % Tests   No. Prog/Writ   % Prog/Writ
DAA300    3           90%       1               10%
AMI300    3           85%       1               15%
OS200     2           85%       1               15%
SE200     3           80%       1               20%
CG200     2           65%       1               35%
FCC220    1           50%       3               50%


Now if I was asked to rate these units on a combination of how effectively I learned the unit outcomes, overall enjoyment and how much I remember the concepts to this day, I would probably produce this list.

No.   Unit      % Tests
1     FCC220    50%
2     CG200     65%
3     SE200     80%
4     AMI300    85%
5     OS200     85%
6     DAA300    90%

Now I know correlation doesn’t mean causation, so I’ll state it clearly.
I hate tests.
There are so many reasons why tests suck in computing that I just have to make a list.

1. It Is a Bad Test of Understanding

When designing computer programs, we usually judge the requirements on inputs and outputs. So if I input X into your program or function, it should output Y. We write test cases around this principle in order to see if the program does what it is supposed to. Unfortunately, people are not programs. When given a programming assignment, if I am handed the test case up front, I can calculate the answer by hand and produce a one-line program that simply prints it. Unfortunately, I would get zero marks after my tutor inspected the code.

So if simple input/output isn't enough to test a student's code, why should it be enough to test their knowledge of a concept? A graduate programmer will seldom be far from a computer able to produce any of the information that has been rote-learned at university, so why do we focus so much on this technique?

2. It Does Not Help Students Learn

I can now hear the voices of all my lecturers at once:

“Tests aren’t for learning Jason! They’re for assessing!”

They sure are, Unnamed Lecturer. That’s why every unit ever has somewhere around half of the assessment marks as a formal, summative exam at the end of the semester, a time when everyone stays up late memorising every useless fact in their book in the hopes that they will recall an obscure fact which gives them a higher mark than their peers. Maybe, just maybe this will get them a higher grade so that Company Inc. will think they will be a better employee. This is so disconnected from reality I cannot fathom the reasons why we would want more of this type of assessment.

When studying for a test, what do students do? Normally, we would go through lecture slides, tutorials and past tests/examinations to see what topics are generally tested. We then repeatedly do practice questions (in the case of algorithms or calculations) or try to commit things to recall memory by writing notes and repetition. This is a good strategy to pass tests but this is not a good way to learn.

To contrast, a well structured assignment task will encourage students to go and research concepts and implement them (in the case of programming tasks) or write about them (in the case of reports/essays). While you can easily learn via rote in order to pass tests, implementing an algorithm requires a deeper understanding. The marker can then confirm this understanding by reading your implementation and the comments around it. There is also a bonus: programming, and to a lesser degree essay writing, has the rote learning of concepts built in. With both programming and writing, you inherently read and reread concepts over and over as you debug and edit your work.

3. There Is No Portfolio

When you complete a programming task, your result automatically goes into a portfolio of code you can publish as an example of your skills when joining the workforce. This is a bonus for people who are vocation oriented (read: at university just to get a job). While high marks always look attractive on a resume, employers will also be looking for experience in certain languages and fields of development. Saying “I did good in a test” is much less impressive than “I implemented an encryption protocol in C in two weeks”.

4. There Are No Goals

Goals are important. There are entire industries based around teaching people to set goals and achieve them. The current buzz around education and marketing (both of which are weirdly linked) is gamification: the use of things like scores, points and achievements to help motivate people. Gamification, when used correctly, can increase user engagement and retention. A programming assessment, probably unknowingly, borrows many concepts from gamification. Usually it will consist of a few different features worth a certain number of marks (or points, if you see where I'm headed here). When a student completes a feature, they can test it using test input, and once they get the correct output, they have "achieved" that feature.

The difference in these programming assessments which makes them almost automatically gamified, compared to assignments in other degrees, is the ability for instant gratification. A programmer knows when they have reached a milestone, because they can test each part compartmentally. Many Computer Science students tend to enjoy playing video games, so much so that a large chunk of the events our local CS club runs are LANs. I don't think this is a coincidence. Our teachers should acknowledge this connection and start designing assessment tasks with it in mind.

You Just Need to Study Harder, You’re an Adult! Formulate Your Own Learning Goals!

This is the response I expect to get from educators reading this, and what I expect they think to themselves when they read my unit feedback (almost every one has been "less tests, more assignments"). I reject this assumption completely.

University is not an accreditation factory; we don't attend for the tests, we attend to learn. In this day and age of Khan Academy, Code Academy, Coursera and all the fancy universities offering free MOOCs (Massive Open Online Courses), the difference between learning online and attending a university has narrowed. Students expect more than a university that merely makes learning content available and tests their summative knowledge in a 60-minute session a few times per semester. Students expect that their learning institution will assist them in setting achievable goals which are formative across the unit.

A good way to do that is more practical programming assessments paired with theoretical reports and analysis. Giving students feedback both through automatic methods, such as providing sample test input/output, and through feedback from instructors that goes beyond "this did not fit the marking guide, this was the answer", will improve student engagement, retention and results.

Lastly, the one issue with encouraging more practical assignments is the issue of cheating and ghost writing. This is the one thing exams are good for, and why they will always have their place at the end of a unit as the summative assessment for the entire semester.

Linux.conf.au: Day 5 – The end

Today was the last day of linux.conf.au :(. Arguably some of my favorite talks were today.

This is the last continuation of this post here about my first Linux.conf.au.

The keynote today was Deploying software updates to ArduSat in orbit by Jonathan Oxer [ Video ]. Basically this was a project, funded through Kickstarter, which allows low-cost experiments to run on Arduinos on cube satellites in Low Earth Orbit. This really was a "the future is now" type talk. Never before would you have thought regular people would have a chance to do anything with space other than look at it.

Raspberry Pi Hacks: Building Great Pi Projects by Ruth Suehle and Tom Callaway [ Video ] was a pretty light talk about some of the different projects the speakers and others had achieved with the Raspberry Pi. It actually convinced me to give the Pi another shot. I bought one when they first came out and used it for a bit as a torrent server, but I found it kind of sucked stability-wise. I now think part of that was down to my power adapter (apparently they have to be 5V 1A regulated) and the fact that it was the original model (apparently things have been fixed in the newer ones). They have also written a book on the Pi, for sale here (the discount code AUTHD gives you 40% off!).

The Rust language: memory, ownership and lifetimes by Nicholas Matsakis [ Video ] was an introduction to the new Rust language being developed by Mozilla. Now I'm not an expert on Rust, but from the talk my impression is that it is C++ with some extra checks to avoid memory management issues. Basically each piece of memory has an owner (like a variable or pointer), and the owner can lend it to others or transfer ownership entirely. This prevents aliasing and means there is no need for garbage collection, reference counting and the like. There are obviously other additions, but this is the one that was mainly emphasised. Rust sounds like an interesting language; I hate memory issues as much as the next guy.

However, one issue I had with this talk was that programming languages were evaluated on a zero-sum balancing act between safety and control. C++ gives you lots of control but sucks at safety. Haskell is very safe, with basically no side effects, but you don't have much control or freedom, which is why it isn't used much. Python is pitched as being somewhere in the middle. Rust, however, is pitched as giving lots of control and lots of safety simultaneously, which I feel was stretching the truth a bit. Rust gives safety through the ownership system, which forbids dangerous things like aliasing and shared memory. Rust offers C++-like freedom by having unsafe blocks, which do allow these things. Clearly an unsafe block is not safe. So can a language offer high control and high safety at the same time? I'm unconvinced, but I haven't used Rust, so maybe somebody could comment on how it makes their programs safer. It seems to me like Rust mostly just enforces a commenting policy of marking areas of code where memory management is messy. I do believe Rust has the potential, because of these rules, to improve code readability and debugging, but only compared to poorly documented code.

Pettycoin: Losing Tiny Amounts of Bitcoin At Scale! by Rusty Russell [ Video ] – I was looking forward to this talk. I'm a fan of the Bitcoin protocol, so new ideas generally excite me, plus Rusty Russell is super nice to newcomers, which is always great! Rusty also admitted the idea and implementation are in their infancy. Firstly, let me say I like that he is thinking outside the box, and I am so glad there are people out there making things that aren't more alt-coins. Stop making alt-coins, people! Rusty talks about some good ideas. Sharding the network is pretty smart, as is sending out double-spend alerts and shrinking transaction size. However, it is fundamentally flawed. The reason is gateways. Bitcoin is supposed to be decentralised and gateways are centralised. It seems like the whole protocol relies on gateways, and because of this I don't see any advantage this would have over sending your bitcoins to a dedicated, centralised micro-payment service (like a PayPal lite or something). I would even argue a dedicated micro-payment service would be more secure, as there is no risk of double spends, only traditional hacking.

Finally there were lightning talks at the end. Mostly used as an opportunity for people to spruik their open source project or group, these are usually hit and miss. I'm excited I discovered DLect after mostly giving up on my Blackboard scraper project; it turns out somebody from the University of Queensland has had the same idea and gotten a lot further. Hopefully I can get it working with Curtin. There were also a couple of talks on Bitcoin. Lightning talks are free as in free speech, so they're sometimes not the greatest. Here is my quick guide if you're interested in spreading the word of Bitcoin.

  • Don't talk about it like it's an investment or stock
  • Don't talk about it like it's going to make money (saving money from fees is OK)
  • Don't be misleading by referring to thousands of % growth
  • Don't talk about how it's a "store of value"
  • Do talk about the lack of fees for transferring
  • Do talk about the decentralised nature of the protocol
  • Do talk about the safeguards against hacking (e.g. Proof of Work)

There tend to be two views on bitcoin: one is that it is a Ponzi scheme, the other that it is the future of money transfer. I think it's both. There is no doubt that a lot of 'early adopters' will become very rich (or have done already) if bitcoin becomes mainstream, and many of them have become mouthpieces for bitcoin, but I think this does not detract from the theoretical qualities of bitcoin, and the protocol will still exist even when all the scheming has died down and the Ponzi schemers have lost all their money.

So after that slightly divergent rant, LCA2014 is now at a close. LCA2015 was announced to be held in Auckland, so maybe I'll see everybody there next year! I've had a great time and I recommend any other students interested in Linux take advantage of the cheap student entry.

Linux.conf.au: Day 4

It has now been 4 days since the conference began (I'm writing this in the past tense because I've been lazy). It's only the second day of the "real" conference, where all the long, meat-and-bones talks were held.

This is a continuation of this post here about my first linux.conf.au.

First up we had Matthew Garrett deliver a great keynote speech, which was expected. I'd watched his talk the day before and I definitely wasn't going to miss this. Once again he totally nailed it. In this talk he explored the controversial UEFI Secure Boot. You see, Secure Boot is a good thing, because it stops a computer from booting if its boot chain has been compromised by a virus. The problem the free software community has with it is that there aren't provisions for intentionally changing the boot image (for instance when installing Linux). Other scenarios in different but comparable fields include Android phones, which (sometimes) allow you to change the boot image, but only if you turn off boot verification completely, thus making a modified OS more vulnerable to attacks. Ideally there would be a way to self-sign packages, allowing them to boot without turning off Secure Boot. Here's looking at you, hardware manufacturers. Watch the video here.

Programming Diversity by Ashe Dryden [ Video ] was not just about diversity with regard to women but other underrepresented people in the programming world too. This presentation was really well done; the slides were great and everything was referenced. This is my favorite quote from the talk. It really resonated with me because I've had this argument with people in the tech field way too many times, which is rather depressing, so whenever I see someone championing diversity in the field I get a glimmer of hope.

Then you might be thinking well “maybe women just aren’t biologically predisposed to programming”. You know maybe there’s just something in their brains that just makes them not good at programming, not good at writing algorithms or figuring this stuff out. There’s got to be, there has to be some kind of science to explain why we have this problem. But we actually do have science… in the exact opposite direction. There is no physical or biological difference that shows that anybody, regardless of their race or gender, is better at being a programmer. There is none. And a lot of the things that we talk about when we talk about things like evolutionary biology, which is what this question kind of hinges on, is the fact that you assume that there is some kind of addition that you get to being person for being a programmer. As if you can outrun a cheetah because you are a good programmer.

When I told people I was attending the talk, a surprising number of people went down the whole evolutionary biology track almost as a knee-jerk reaction. I'd like to point any of my peers who feel like they agree with this line of thought to this post by a certain Gentoo-using lecturer who navigates his desktop solely via keyboard and writes all unit material in LaTeX.

From Kookaburra to the Cloud: where to now for copyright in Australia by Ben Powell [ Video ] was a very informative talk on the efforts to introduce a US-style fair use to replace the current fair dealing. In summary (although I do recommend you watch the talk), fair dealing lists certain situations in which copyrighted material can be used without getting permission from the copyright holder, while fair use lists certain (usually four) factors which are weighed up against each other to decide whether or not the use of unlicensed copyrighted material was 'fair'. The difference is that fair use is dealt with on a case-by-case basis, which tends to keep up with the times better than fair dealing. For instance, one ludicrous example is that we are allowed to copy video from VHS to other devices but not from DVDs. This kind of copyright law could allow court proceedings to reflect common sense instead of shoehorning outdated laws into current situations.

Below The Line: Fixing The Voting Process With Technology by Benno Rice [ Video ] followed Benno's story of developing a web app to try and combat the somewhat dirty tricks that candidates play to get into the Senate, by allowing people to pre-plan their below-the-line vote. I commend Benno for keeping his apolitical hat on throughout the entire process, thus strengthening democracy rather than a particular party. These kinds of talks frustrate me because, while Benno has made a difference, I wish he didn't have to. Clearly, if somebody needs to make an unofficial web app in order to make our elections fair, something is wrong and needs to be changed stat.

Disaster Recovery Lessons I Hoped I’d Never Have to Learn by Bdale Garbee [ Video ] made me and I think everybody else who saw it immediately evaluate our current backup procedure and almost certainly conclude it needs to be better. Seriously, watch it.

Finally I ended up at Introduction to Go by Mark Smith [ Video ], an introduction to Google's recently developed programming language Go. Having never used Go, I'm pretty amazed at the fact that in the tutorial we made a multi-threaded chat program in about 45 lines. Go is a powerful language, and as someone who hates lock/mutex management, I think I'm going to try to learn some more. Another thing I like about Go is that the executables are completely self-contained. Not needing an interpreter or JVM is a real bonus when you might be distributing to someone who might not have them installed.

My linux.conf.au adventure is concluded here.

Linux.conf.au: Day 3

Wow. I now realise the last two days were just a warmup. The conference really came into its own today. With talks going about twice as long as the previous two days, we really got some in-depth information from some top-notch speakers.

This is a continuation of this post here about my first Linux.conf.au.

It started off with lightning talks in lieu of a keynote speaker. I was a bit disappointed that there weren't more of them (they finished well before schedule), but the ones that were there were very well delivered, covering things ranging from Raising Geek Girls to password storage best practices.

Next up I attended the Python 3: Making the Leap! tutorial by Tim Leslie. This was a great tutorial where Tim went through the nuances of the differences between the two versions of Python and how the -3 flag and the 2to3 tool can be used to convert a 2.x script to Python 3. I found this particularly helpful as someone who recently started having a go at Python with the (disclaimer: linking to horrible, horrible code) Blackboard scraper I recently wrote. For some reason, despite all my dependencies working with Python 3, I wrote it in 2.7. Learning to write pretty much deprecated code was probably not the most intelligent thing to do, but at least now I have a good idea of what is different when writing code for 3 in the future.
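
As a toy example of the kind of differences involved (my own, not taken from the tutorial): the print statement becomes a function, dict.itervalues() disappears, and / becomes true division. 2to3 rewrites the first two mechanically, but the division change is something you have to decide on yourself:

    # Python 2 version, the sort of thing you'd feed to 2to3:
    #
    #     grades = {"alice": 7, "bob": 8}
    #     total = sum(grades.itervalues())
    #     print "average:", total / len(grades)   # truncates to 7 in Python 2
    #
    # Python 3 version:
    grades = {"alice": 7, "bob": 8}
    total = sum(grades.values())                  # itervalues() no longer exists
    print("average:", total / len(grades))        # prints 7.5: / is true division now
    print("truncated:", total // len(grades))     # use // to keep the 2.x behaviour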

Next up was Building Effective Alliances around the Trans-Pacific Partnership Agreement by Sky Croeser [ Video ]. The great thing about this talk is that it went beyond saying why the TPPA is bad, which is all we ever seem to hear from tech news sites, and actually delved into strategies for how to effectively combat it. The strategies were diverse, with no possibility left unexplored, from writing letters to politicians to 1999 Seattle WTO Protest tactics – fun for the whole family! I thought the way she approached the topic was terrific; as a person who is not in the slightest bit interested in radical activism but has a strong interest in political issues such as this, it was nice to be acknowledged as somebody who can still contribute. If you're at all worried about the TPPA, I would strongly recommend watching the talk when it's uploaded on Monday.

Bringing More Women to Free and Open Source Software by Karen Sandler [ Video ] was a really eye-opening talk on the strategies used at the GNOME Foundation to bring their proportion of female contributors up to par with the rest of the computing sector (3% compared to 18%, still a woefully small number). The lack of women in computing is quite a serious problem which I think is overlooked (and sometimes even perpetuated) by many computer scientists and programmers, and perhaps society in general. I'm fairly certain (and very hopeful) this is a changing statistic; I've met many people who are committed to the issue.

Opening up government data by Pia Waugh [ Video ] continued from Monday's open government miniconf about making government datasets freely available to the public. As I mentioned on Monday, Pia Waugh is definitely someone who is "on our side" in the government sector when it comes to open data, and is in charge of the great data.gov.au website where you can request and vote for certain datasets to be released. She talked about issues related to open government data, including ways to automate data collection and uploading (which would always provide up-to-date data) and issues with the format in which data is provided. I love playing with datasets, so this talk was very interesting to me; I would recommend giving it a watch when it's online.

Reverse engineering vendor firmware drivers for little fun and no profit by Matthew Garrett [ Video ]* would have to be the best talk of the day (and possibly the week). In this talk we follow the protagonist Garrett as he embarks on a journey to reverse engineer a vendor tool for modifying servers, which doesn't quite go as expected. I won't go any further because you have to watch it to really appreciate it. The thing I really like about Garrett's talks is not only the sprinkles of humor placed throughout, but the fact that whenever he says anything that isn't basic computer/Linux knowledge, he stops and gives a short explanation of what it is. One mistake I feel many speakers make is assuming every attendee is as knowledgeable as they are in the domain they are speaking about, when in reality there are people with all different kinds of skillsets. By not stopping to explain things, you run the risk of having half your audience feel stupid and switch off. As a student, these kinds of explanations are much appreciated and help me achieve my goal of learning more about the kernel.

*For those with a keen eye, you might notice that the Garrett and Open Data talks overlapped. I watched one on video after the end of today's conference.

That was all for today; I'm very much looking forward to tomorrow, with a keynote by Matthew Garrett, who will hopefully be beating his own record of "Best Talk so Far". If you can't make it but want to watch, tune in at http://timvideos.us/octagon at 9AM tomorrow.

My linux.conf.au adventure is continued here.

Linux.conf.au: Day 2

Continuing from this post here, I'm going to keep blogging my experiences as a newcomer at linux.conf.au, being held this year in Perth at UWA.

I actually managed to make the keynote this day, which was Kate Chapman's talk on OpenStreetMap [ Video ]. The cool thing about linux.conf.au is that everything is being livestreamed at timvideos.us, so when I was a little late, I could still stream the audio in the car. Sweet! The keynote was really warm and fuzzy, about open source doing great things around the world using OpenStreetMap. It was a very accessible talk and has even inspired me to eventually try to contribute to OSM in some way, which is what a talk should do!

I spent most of the day at the Kernel UnConference. I know nothing about the Linux kernel; my general kernel knowledge spans only what was taught in the introductory university course. However I, like a bunch of other students attending from Curtin, want to know more about what drives our favorite OSes. Most of what was being said went over my head, and at times it seemed like the participants were speaking another language, but I still think I managed to get something out of it. I think the idea of UnConferences is really great and definitely improves audience engagement though.

During lunch a bunch of us decided to go to Cafe Biscotti. This is relevant because it's the first cafe in Perth to accept Bitcoin. Ironically, it's a tiny cafe situated on the ground floor of Bankwest, which is part of the largest banking group in Australia (Commonwealth Bank). Props to them for being the trendsetter in Perth (and for giving a 5% discount).

Taken at Cafe Biscotti, 300 Murray St, Perth

Returning to the conference, I took a break from kernel hackerspeak to see A Pseudo-Random Talk on Entropy by Jim Cheetham. As an introduction to the history of and current techniques for random number generation, this talk was pretty good; however, it was a bit short (at only ten minutes) and I would have loved a more in-depth look. Such is life with the miniconfs though.

Finally, returning to the Kernel UnConference after afternoon tea, I got to see the highlight of my day as a panel of kernel developers, including Linus Torvalds, answered questions from the audience. It's really cool to have one of your idols talking nonchalantly a few meters away from you, and it's something I'll definitely remember this conference by.

Credit – Andrew Cooks

My linux.conf.au adventure is continued here.

Linux.conf.au: Day 1

So today was the first day of my first Linux (or any) conference, linux.conf.au. Apparently blogging about these things is a thing (well at least according to Delan and Kye) so here we go.

Registrations were yesterday, as was the newcomer seminar about the conference by Rusty Russell (I hope that's his actual name), who is a pretty cool guy and even bought all the newcomers a beer. We also got a whole bunch of merch, which sadly didn't include deodorant (see: Sysadmin miniconf) but was pretty decent for the $99 student ticket. We were also warned that the miniconfs over the first two days are independently run and could be a bit 'rough', as you'll soon discover.

The first miniconf I attended was the Sysadmin miniconf, where everyone was packed into the Woolnough lecture theater at UWA, which is one of those ones where they try to cram as many seats into the room as possible. This, combined with the hot Perth weather and a whole bunch of Linux enthusiasts, did not make for a pleasant atmosphere, to the point where it was almost uncomfortable. To be honest, sysadmin-ing bores me to tears, and most of the talks either flew straight over my head or the speaker fumbled through slides, sometimes both. I feel like today's miniconfs were slim pickings and I'm much more excited for tomorrow's Open Programming miniconf. One gem in the rough for me was 'Quality of Service', a common misconception (10 mins) – Julien Goodwin [ Video ], one of the talks which I felt would be accessible to the whole audience, and the speaker actually knew his stuff. As someone who had tried to mess with QoS before studying CS, this ten minute talk made me understand it in a way that I previously hadn't.

After getting sick of the Sysadmin miniconf I went to see the Open Government lightning talks, and I wish I had gone there sooner. It was much less congested, which really made it feel like more of a dialogue than the other talks. It was headed by Pia Waugh, who seemed really passionate and knowledgeable about the subject (which you would hope she would be, as the person in charge of data.gov.au, something I found out about today, had never heard of before, and is amazing).

Lastly I went to a Crypto BoF (Birds of a Feather, a concept which is new to me and which I gather is a sort of informal discussion group). I've been a bit of a crypto enthusiast (oh hey ASIO, I didn't see you there) since doing some study on it at Curtin. One of the interesting topics which came up was that security is traditionally at odds with usability, and that increasing usability may be a good thing regardless, because if enough people use PGP it may stop the people who do use PGP from ending up on a list.

So that was linux.conf.au so far, it’s going to be an interesting week and I’m pumped for the proper talks to begin Wednesday-Friday. Hopefully I’ll be up in time for tomorrow’s keynote.

My linux.conf.au adventure is continued here.