Visual Studio… Why?

This post is just a rant venting frustration with Visual Studio. Feel free to ignore.

Last year I switched employment from a somewhat .NET shop to a predominantly Python shop with a bit of Node sprinkled in where it won’t be noticed. As such, the only times I need to use Visual Studio now are when I’m doing work for my brother, or whenever I dust off old projects. That is probably the absolute worst way to use Visual Studio.

So, I’m not entirely sure how I got into this situation, but my main ‘pack up and go’ laptop has a single version of Visual Studio on it- 2015 Community Edition (I’m really unclear on this, as it should be Professional Edition). And I use it once every few months.

So. I go to a job site. In the middle of nowhere. I open up a project in Visual Studio, what do I get?

A prompt.

A prompt asking me to sign in to my gods damned useless Microsoft Account because it wants me to sign in every 90 days.

No internet access? Don’t want to sign in? No development for you. Visual Studio shuts down. All for what? So Microsoft can advertise their hosted TFS service that I wouldn’t touch with a 10 foot pole? So they can gather anonymous usage data that will help them… do something?

I will grant, they’re providing a (very large) tool for free and are more than welcome to impose any terms they’d like. I just wish they’d chosen terms that weren’t making me reconsider my off-hours tech stack.

Unrelated but equally annoying: I had ReSharper on this VS2015 install. Somehow the two got into a really bad conflict. I’d get IntelliSense from both tools, neither would work, and various keystrokes would be ignored in favor of closing the useless IntelliSense pop-ups. Coding when keystrokes are randomly swallowed is painful.

Further annoyance: package management. I come back to my Tachyon Model Viewer project after (too long) a hiatus, and I can’t build the damn thing. The packages the project depends on are listed as installed, but for whatever reason they’re not referenced. And unless my solution of browsing to the installation location and manually choosing each assembly reference is the actual intended solution, there’s no obvious way of referencing them.


How much of this is my fault? Almost certainly all of it. I’m sure I screwed up my installation two years ago by, say, removing an older version of VS or something. Don’t know why I would have done that, but I could see it happening. And the package management could just be I’m using it in the exact wrong way. But, I don’t really care. Same way I’m sure there’s a VS2017 or a VSCode that I should be using if I want to be up to date, which probably does improve package management. But, as I said, I don’t really care- I’m on the internet, mildly annoyed, and basically talking to myself. I hate being stuck in the Javascript ecosystem using 30 different tools to build a web page, but I hate the Microsoft ecosystem more, where you have 1 tool that only usually works.

Oh well. To the Rust Rabbit Hole!

Aside. You may have noticed I don’t keep this blog active. Realistically that’s not going to change. This is a ‘when I feel like it’ thing, and sadly that doesn’t happen as often as I’d like. But, I do have a few things on my To Do list. One odd thing I’d like to figure out is getting this site and a few others into a docker container. I’d like to get to the point where I’m not spinning up new EC2 instances to run new side websites. Nothing major in the scheme of things, but it is some house keeping I’d like to do. Especially if it means I can continue ignoring actual house keeping.

Nicking assets from Tachyon: The Fringe, Part 2: The PFF Archive

The first file we’ll look at on our foray into pixel piracy is the PFF Archive. This is an archive file containing all of the game assets for Tachyon. There’s just the one, Tachyon.pff, and it’s nearly 400MB in size- no compression, but when extracted the contents usually take ~2GB on disk, due to a lot of small files (this was back in the day when 2GB was a lot). This file format has been used by several NovaLogic games, most prominently the Delta Force series. Later versions of the format seem to support compression, but the format we’re working with is basically straight storage.

Technically the PFF format isn’t vital for our model extraction project- extractors have been around for the past 20 years, and we could just as easily work off an already extracted archive. However, that makes moving between computers a bit annoying, as you then need to find the extractor, extract the archive, and point the app at the directory. Too much work; easier to point the app at a single file (especially as we aren’t looking to modify the underlying files). This also serves as a starting point for the blog series, to play with formatting and such before the content is important.

Thankfully at least one of the PFF extractors (from Devil’s Claw) also came with source code, so no reverse engineering required, just a minor port. Thank you, Devil’s Claw.

On with the show.



General format of the file is a Header, a set of File Entries, a Footer, and then a massive blob of data.



Header

Size  Type     Name               Value       Comments
4     int      Header Size        20
4     char[4]  Version String     PFF3        PFF3, PFF4, etc
4     int      File Count         8134        Number of File Entries
4     int      File Segment Size  36          Size of individual record, 32 to 40 bytes
4     int      File List Offset   0x16C6A7F1  Start of File Entry array, near end of file in this case

One interesting bit about the header is that the version string lies (according to the old code). File Segment Size varies by PFF version: 32 bytes for “V2”, 36 for V3, 40 for V4. But “V2” and V3 have the same version string: PFF3.
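The tooling for this series is C#, but the header layout above is simple enough to sketch in a few lines of Python with the struct module. This assumes little-endian byte order (a reasonable guess given the format’s x86/Windows origins), and the field names are just labels from the table, not anything from the original code:

```python
import struct

# 20-byte header: size, 4-char version, file count, segment size, list offset
PFF_HEADER = struct.Struct("<I4sIII")  # little-endian assumed

def parse_header(data):
    """Parse the PFF header from the start of the archive bytes."""
    size, version, file_count, segment_size, list_offset = PFF_HEADER.unpack_from(data)
    return {
        "header_size": size,
        "version": version.decode("ascii"),
        "file_count": file_count,
        "segment_size": segment_size,      # 32, 36, or 40 depending on version
        "file_list_offset": list_offset,   # where the File Entry array starts
    }
```

Feeding it the example values from the table gets you the version string, the 8134-entry count, and the offset of the File Entry array, which is all you need to walk the rest of the file.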

File Segment Entry

Size  Type      Name               Comments
4     int       Deleted            0 for “not deleted”
4     int       File Position      Index into PFF where file content starts
4     int       File Length
4     int       Creation Date      Likely a Unix-style timestamp; all around 2000 if true
16    char[16]  File Name          Null-terminated string
4     int       Modified Date      Optional based on Segment Size; present in our data set, with some weird timestamp values (2020s+)
4     int       Compression Level  Optional based on Segment Size
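A matching Python sketch for the 36-byte “V3” entry layout (again assuming little-endian; the 32- and 40-byte variants would just drop the Modified Date or add the Compression Level at the end):

```python
import struct

# 36-byte V3 entry: deleted, position, length, created, name[16], modified
SEGMENT_V3 = struct.Struct("<IIII16sI")  # little-endian assumed

def parse_entry(data, offset=0):
    """Parse one File Segment Entry starting at the given offset."""
    deleted, pos, length, created, raw_name, modified = \
        SEGMENT_V3.unpack_from(data, offset)
    name = raw_name.split(b"\x00", 1)[0].decode("ascii")  # null-terminated
    return {"deleted": deleted != 0, "position": pos, "length": length,
            "created": created, "name": name, "modified": modified}
```

With the header’s File List Offset and File Count in hand, you’d call this in a loop, stepping by File Segment Size each time.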


Footer

Size  Type     Name              Comments
4     int      System IP         Who packed the file; a 192.168.*.* address
4     int      Reserved/Unknown
4     char[4]  KING Tag          Value “KING”

The footer appears after all of the File Entries. At a guess, the KING tag is intended as a magic value to check that the file is otherwise valid.
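A quick footer check in the same Python vein (the IP-to-string decoding order is a guess, as is the little-endian layout):

```python
import struct

# 12-byte footer: raw IP bytes, reserved word, "KING" tag
FOOTER = struct.Struct("<4sI4s")  # little-endian assumed

def check_footer(data):
    """Validate the KING tag and return the packer's IP as a string."""
    ip_bytes, _reserved, tag = FOOTER.unpack_from(data, len(data) - FOOTER.size)
    if tag != b"KING":
        raise ValueError("not a PFF archive (missing KING tag)")
    return ".".join(str(b) for b in ip_bytes)
```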


It’s a straightforward archive format, with some support for extensions built in. It’s not bad, and kind of elegant for the goal of ‘single file archive’. It’s a bit wasteful though- what I usually see, when size matters, is that entries have a single bit field, with some bits defined and others reserved for future use; i.e., Deleted and Compression Level don’t need 8 bytes all to themselves. Though I’ll grant, for this use you’re talking about 8 bytes across 8000 entries- 64KB of semi-wasted space in a 400MB file.
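For illustration, that bit-field approach would look something like this. This is a hypothetical layout of my own, not anything the PFF format actually does:

```python
# Hypothetical packed flags: bit 0 = deleted, bits 1-3 = compression level,
# remaining bits reserved for future use.
DELETED_BIT = 1 << 0

def pack_flags(deleted, compression_level):
    """Pack both per-entry fields into a single 32-bit int."""
    return (DELETED_BIT if deleted else 0) | ((compression_level & 0x7) << 1)

def unpack_flags(flags):
    """Recover (deleted, compression_level) from the packed int."""
    return bool(flags & DELETED_BIT), (flags >> 1) & 0x7
```

Same information, one 4-byte field instead of two, which is the usual trade when archive-entry overhead starts to matter.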

Source code for the C# reader can be found here, and the source for the Devil’s Claw extractor is here.

Other Notes: I wrote this a long time ago and never got around to publishing it because I wanted to get formatting for the tables right. Giving up on that for now, and just publishing to get the process started again. This’ll be a bigger problem in some of the next file formats, unfortunately, but we’ll see what happens. I also may use this as an opportunity to learn Rust, as Visual Studio is getting on my nerves.

Migration to AWS

Took me long enough. In 2009 this blog was originally hosted with a shared hosting provider. In 2014 (maybe) I moved it to a Linux server on Azure for the purposes of play (and to remove reliance on the hosting provider). After a slight employment mixup I decided to move from Azure (registered with my FS credentials) to AWS (registered with personal creds).

In the process of doing that I’ve turned on HTTPS via letsencrypt, switched from a “magic apache config copied from the internets” to an nginx based server with php-fpm doing the hosting, updated WordPress (though not yet the styling), and made a few minor DNS improvements. This took months to do. MONTHS! Months of “I should muck about with the server” on the back of my mind. But, it’s finished now, and that’s what’s important.

In celebration of new found listlessness, I’m re-publishing some of my older stuff. It’s a mix of contest related problem solving, very early electronics projects, and some misc rambling that at one point I thought didn’t really belong here, but, ehh.

Nicking assets from Tachyon: The Fringe, Part 1: Why and What


Many of the projects I’d like to do in my non-existent off time involve game development. I want to create an RTS, or a Fighter sim, or similar, but I inevitably run into a problem that’s always been a stumbling block: All these projects at some point call for shiny 3D assets. And I’m a programmer, not an artist. How can I get 3D assets to start development with?

I could look for free assets online, but I always find that kind of hit-and-miss. You have to spend hours upon hours to find a few good models, all in disparate formats that then need markup and corrections. It’s a mess that saps my desire to live and my desire to do development.

I could pay someone money to generate models and textures for these games that likely won’t get past the weekend project phase. This still has the problem that I’m not an artist, so the requests would come down to “make me a cool looking spaceship”. That’s destined for disappointment.

OR. I could nick assets I already like from a game.  And not just any game, but the best game ever: Tachyon: The Fringe.

Disclaimer: In case you can’t tell, these are solely for personal non-commercial projects. It’s me trying to ease the path to getting something interesting on screen so that I can keep momentum up.

Starting Info

Tachyon, the best game ever, is a Space Combat Simulator released in early 2000. It’s actually still available today via Steam, though I’m sure no one at NovaLogic has touched the source code in the past 12 years. There’s been one notable project to try and modernize it, the fan-made FringeSpace total conversion for FreeSpace 2. As of this writing that’s still ongoing (and I think I read they’re in their 6th or 7th year). One of their first projects was extraction of model data, but for whatever reason they had a lot of issues with that, and instead went the route of re-creating the assets by hand.

The original attempt at reverse engineering the model formats is semi-documented here (see Development Log, midway through, apparently a copy of a LiveJournal of a fellow named Stuart Stanfield from 2002). Without this as a starting point I would not have gotten as far as I have. This is my first attempt at reverse engineering a binary file format, and that dev log is a good introduction for getting into the right mindset.

Tachyon stores assets in a single ~400MB PFF archive. The PFF format is something common across NovaLogic games, shared by TTF and the early Delta Force titles. It’s effectively a tar file: a set of records and the unencoded/uncompressed file data in a single archive. (Apparently later versions of PFF support compression of data, but that’s not something we need to worry about.)

The models in the PFF archive are stored as PAKs (for small things / single ships) and OCFs (for stations, capital ships, other). The PAKs contain model data, image data, and minor rigging information. The OCFs combine a set of PAKs in a tree format, basically like a standard hierarchical skeleton.

What to Expect

The next part in the series will be looking at the individual file formats. Really most of what I’m looking to do with this series is document the format and process so anyone else can do the same. I hate it when information is lost because it went unshared, especially in this day and age- there are quite a few tools out there, people have clearly spent time on this, but very little source code makes it out into the wild.

My end goal is to build a model viewer/exporter, to look at all the shiny PAK and OCF assets and export them in a more friendly format. This’ll be in C# as I prefer it for building tooling that calls for a GUI. I’ll post the end result on Github for all to consume, and hopefully not be C&D’d.

A lot of the work is already done (I’ve already built individual PAK and OCF viewers, but they’re a bit rough around the edges, so I’m redoing bits of the UI) (and it’s not yet in GitHub/public), so this is mostly a retrospective, though there are still a lot of unknowns I’m hoping to solve.


Code is available on GitHub at: . The code includes everything I’ve done up until now, so, it’s semi-complete, but not that clean.

My Calling- To Grok

For Rick Schonfeld of the American Red Cross (and other random readers):

We were having a discussion about how we, the students in CRASAR, got into computer science and robotics. I said my reason was I had found my calling in CS in high school, and you asked what my calling was, specifically.

I said that it was system design- starting with a blank slate and a desired functionality and producing a working system. That’s a bit of a lie.

Designing systems is one of the things I enjoy, but developing a working system (something from nothing) isn’t quite why I like it. The actual reason is the understanding that comes with being a systems developer. I have a friend (and former coworker) who, when asked how well he understood computers (part of a “what should be a student’s first language” debate), replied “I understand it down to the level of electrons moving… I stopped at why the electrons are moving, but above that I understand.”

Being a computer scientist teaches you how to think; it teaches you how to analyze a problem and how to come to a solution. Once you have a solution in mind, you learn to implement it. When you implement a solution, if you are worth your salt, you don’t just know how it works in vague and abstract terms, but rather you understand it. You grok it. You can watch it run and envision each individual component operating, running along at 2 billion operations per second (or merely 1 million depending on your platform). You can envision it flowing as one fluid entity, existing within its own reality.

Grokking how something works is an amazing experience (similar to an ‘Aha!’ moment, but more awesome and less fleeting). Learning, understanding, and grokking new and complex systems is why I am glad to be in Computer Science; and being exposed to new physical and electronic systems is why I am glad I am a Systems Developer.

It can also be quite terrifying. I designed and developed the software for a frac pump control system from scratch. I raised it from a wee tyke that could barely blink LEDs to a system now deployed on million dollar industrial vehicles. I know that it works, and I grok it. But, every once in a while a problem comes up that doesn’t leave you wondering what happened, but rather exclaiming “That’s impossible… that’s not how this works”. So far when I’ve run into these situations, it seems I’ve been right (odd mechanical or deployment issues), but each time your heart skips a few beats as you’re forced to question yourself, your understanding, and the basis of the reality your system is built around.

But, it is worth it.

Put a different way, a common joke is that real programmers use a magnetic pen to write data onto a hard disk. Somewhere out there are people who, given sufficiently steady hands and sufficiently small points to their pens, know enough about how a hard drive works that they could write data that way. There are people out there who, given the raw electrical signals off your Ethernet cable pairs, can tell you what website you are surfing. These are those that grok.

Thanks to Rick for asking a question I probably should have known the answer to already.

Communication Issues in “The Challenge of the Multicores”

The Talk

A few weeks ago I listened in on a lecture given by Fran Allen, a Turing award winner and a language/compiler/optimization researcher at IBM.

The talk was on “The Challenge of the Multicores”. It meandered a bit into the history of 60’s era IBM machines, a few reflections, and a basic explanation of the problem being encountered with an emphasis on scientific computing. I think- the lecture was rather unfocused, and a few weeks ago, so memory=dead. Part of the lecture was a list of solutions to the multicore challenge:

  1. Cache needs to die a bloody and well deserved death
  2. We should move away from low level general purpose languages (C/C++/Java/Python/etc)

(Humorous tilt added by me, but otherwise this is roughly what she said)

I’m not sure if there were more, but these two stuck in my head because of what ensued. With the elimination of cache, you could hear almost every engineer in the audience groan in horror, and something similar occurred with the death of low level languages. The questions at the end of the seminar were roughly 90% “How do we handle the discrepancy between CPU speed and memory speed without cache?”, “How could any low level general purpose language be completely displaced by a high level language?”, etc. This was a classic issue of miscommunication.

The Death of Cache

With the first point, Fran isn’t saying “Get rid of the cache and have the CPU talk directly to memory” (or she might be, I’m not sure how well she understands hardware concerns). What she is saying is that how cache is implemented today is a large source of performance problems. Cache is operated largely without instruction from the program- it makes educated guesses about what to load and what to keep. When it guesses wrong, a cache miss occurs and a massive overhead of hundreds of cycles is incurred while memory is fetched.

Here is the problem: cache guessing is based largely on assembly code and current use. A program can’t tell the CPU ‘I’m going to use this data soon’; a compiler can’t provide metadata to the CPU of the form ‘by the time this method finishes, we’re going to need data from one of these three locations’. Lack of certainty about what the cache will do impedes automated optimization. That is what she was getting at. If instead we were able to treat the cache as 4MB worth of really slow registers, with its own lesser instruction set and controller, we could see massive performance gains, as we can get much closer to a perfect cache with an intelligent cache controller than we could with a controller that only guesses based on current and previous state.

When phrased that way, cache death doesn’t seem like such an incredibly terrible idea. The cache is still there in some form, it’s just not the cache of today. There are still all manner of issues that would need to be worked out, but it’s a lot clearer that “yes, you may have a point there”.

The Death of C

Again, she isn’t saying “C should be eradicated from everywhere and everything”, she’s saying that using a low level language removes the semantics of what you’re trying to do. The best example I can think of would be the difference between using SQL, and hand coding a query over a data structure. (Ignore small issues like ethics, responsibility, pain, etc).

If you encode a query yourself, you are responsible for parallelizing it, and doing that correctly. That means you need to keep in mind data dependencies, lock acquisition orders, etc. You may need to take into account niggly little details about the machine architecture you’re working on if you want massive performance. If you have a sufficiently complex query, it might be worthwhile to pre-compute certain parts of the query and reuse them. And so on.

With SQL, instead of specifying the steps to get to the state you want, you specify “I want everything that looks like x, cross referenced with everything that feels like y, ordered by z”. Your database server is then responsible for building a query execution plan. It asks “what can I precompute to make this faster”, “Do I need to do a linear scan through this entire data set, or do I have meta data somewhere such that I can do a binary search”, etc.

SQL constrains what you can ask for, but in return it exploits all manner of optimizations and parallelizations, without your lifting a finger. Fran’s focus is on high performance scientific computing. If the people doing a weather simulation say “I need the solution to Ax=b” rather than coding a matrix inverse or an LU decomposition by hand, the compiler can provide a very efficient implementation, which may be able to exploit optimizations that the programmers could not see. She isn’t saying “Throw out C#, write your web applications in MATLAB”; but she is saying that current practice makes it hard for optimization researchers to help your code run faster.
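The SQL-versus-hand-coded contrast can be made concrete with a tiny sketch using Python’s built-in sqlite3. The table and column names here are invented for the example; the point is just that the first version states *what* is wanted and lets the engine plan, while the second hand-picks the data structures and iteration order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (station TEXT, temp REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("north", 12.5), ("north", 13.1), ("south", 21.0)])

# Declarative: describe the result, let the engine build the execution plan.
rows = conn.execute(
    "SELECT station, AVG(temp) FROM readings GROUP BY station ORDER BY station"
).fetchall()

# Imperative equivalent: we choose the accumulators and the iteration order,
# and we'd also own any parallelization, locking, and precomputation.
sums = {}
for station, temp in conn.execute("SELECT station, temp FROM readings"):
    total, count = sums.get(station, (0.0, 0))
    sums[station] = (total + temp, count + 1)
manual = sorted((s, total / count) for s, (total, count) in sums.items())
```

Scale the second version up to joins across several large tables and the appeal of handing planning to the engine becomes obvious.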


Communicate better. It’s better for you, your message, and us.

Seven Segment Thermometer

Hey look, a project!

Overview of Thermometer + 7 segment display


While I’m waiting for everything to align perfectly for my descent into robotics, I’ve built a thermometer for the dining room. Three times. I’ve now gotten it to some form of a stopping point.

Pieces and Parts

This uses:

  • 4x Common Anode 7 segment displays (got from Allied Electronics)
  • 2x A6276, a 16-bit constant-current (sink) LED driver (from somewhere)
  • 2x 10k resistors
  • 1x DC Boarduino (from Adafruit)- Arduino clone intended for breadboarding use
  • 1x Prototyping PCB (from RadioShack)
  • 1x TMP36 (also from Adafruit)
  • More 22ga solid core hookup wire than I care to remember

The A6276 is pin-compatible with the STP16C596, the apparent Arduino LED driver chip of choice. It was acquired in efforts to fix the legendary Proton LED sign- sadly (or fortunately) the chips on the sign were a different form factor, and the A6276s were donated to my evil causes. The chips work as a constant current sink; when a pin is set high, it effectively opens a current limited channel to ground; when set low the pin is at high impedance.

The Boarduino was originally intended for use in a cable tester due to its awesome size (the cable tester may be a different post… or just a set of images). Strange things happened with that, and it is now serving as an overpowered thermometer.

Displays of Seven Segments

Technically the displays are eight segments including the decimal. Which works out nicely for being driven from a 16-bit driver. This 4-digit display is the main reason that the thermometer is getting posted, and here’s why: I’m using a protoboard and four 7-segment displays. There’s no direct/simple way to connect the LED drivers to the pins of the displays. 32 pins need to be connected for the displays. 32 small wires. 64 solder joints. In a small space. I present: pain.

Pain Defined


There’re probably also a great many samples of ‘bad soldering’ in that photo, but I’m ignoring that for now. Why? Because of this:

It Works!!!


I’m 90% certain there is a much better way to do this (Possibly a 4 digit seven segment display?). But it works. And it is awesome.

The wiring for this part, other than being tedious and made of pain, isn’t complex. To prevent excess pain, I have the hookup wires always run parallel. What this functionally means is the pin-out on the right side of an LED driver chip is reversed on the left side. This requires some special handling in the software, and is not necessarily ideal, but we tossed out ideal with the 40 hookup wires. Ignoring the reversing bit, each display has its pins hooked up in the same order to a given 8-pin side of the driver chip. The rest is basic stuff- serial out on chip one to serial in on chip two; latch and clock are shared between both chips, chip output current (the R-EXT pin on the datasheets) is set by a 10k resistor to ground, etc.
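The “special handling in the software” for the reversed side boils down to mirroring the bit pattern before shifting it out. The firmware is Arduino C, but the idea fits in a few lines of Python (the 8-bit width is my assumption, matching one display per 8-pin side):

```python
def reverse_bits(pattern, width=8):
    """Mirror an n-bit segment pattern for a driver side wired in reverse."""
    out = 0
    for i in range(width):
        if pattern & (1 << i):          # bit i set in the original...
            out |= 1 << (width - 1 - i)  # ...lands at the mirrored position
    return out
```

So a digit destined for the reversed side gets run through this before being clocked into the driver; the other side takes the pattern as-is.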

I would actually like to do this again as an intro to PCB design project. When I can find time to figure out how to use Eagle.

Sensing Temperature

Temperature sensing is straightforward- the TMP36 sensor puts out a 0V-to-some-V signal based on its current temperature, following a simple formula: 500mV at 0°C, increasing at 10mV per degree C. Just a simple analog read and conversion. Roughly.

Close up on the Breadboard


You can see the TMP36 at the bottom of the pic, snuggled next to a 0.1uF capacitor as the spec sheet requests. I tried this without the capacitor and tended to get strange readings quickly (within a few minutes). With the capacitor, they tend to be less frequent (that said, no matter how bad the insulation in my house, it was not 4F last night. I refuse to believe). I started taking a median of several readings in order to solve the issue of strange outliers, and it appears to have worked. No ghosts causing 4F readings yet.

One other quick comment though: the TMP36 outputs roughly 10mV/°C. Because the Arduino is using a 10-bit ADC over a 5V range, it effectively reads in ~5mV increments. Therefore, we can only read the temperature to an accuracy of about 1 degree F. Which is a bit sad, as it makes those last decimals of accuracy meaningless. Alas.
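The actual firmware is Arduino C, but the conversion and the median filter described above amount to this (a Python sketch; the function names are mine):

```python
def tmp36_celsius(adc_reading, vref_mv=5000, adc_steps=1024):
    """Convert a raw 10-bit ADC reading to degrees C.

    TMP36 formula: 500 mV at 0 C, +10 mV per degree C.
    Each ADC step is ~4.9 mV at a 5 V reference.
    """
    millivolts = adc_reading * vref_mv / adc_steps
    return (millivolts - 500) / 10.0

def median(samples):
    """Median of several readings, to reject the occasional wild outlier."""
    s = sorted(samples)
    return s[len(s) // 2]
```

Taking the median of, say, five readings means a single 4F ghost reading gets discarded instead of displayed, which a plain average wouldn’t manage nearly as well.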


I’ve built a thermometer! With (reusable) seven segment display! For more cash than I care to think about! I’ve had two other variations of this before: outputting the current temperature to a set of normal LEDs in binary (fun, simple, slightly confuses relatives, need the right LED size to make it nice), and to an LCD (can display high/low/current readings simultaneously, and be read by non-techies). A few other variations could be done relatively easily: an indoor/outdoor meter, monitoring multiple rooms, temperature loss in ducts from straight out of the AC to the last outlet, etc.

This and a few other projects I’ve done fall under the ‘can be done with 3 output pins and maybe an analog read’ category of projects. I’m plotting to start playing around with the ATtiny series of uCs to get the project down to component parts / decrease the requisite bulk. We’ll see how that goes.


Small update. I changed the analog temperature sensor to a digital sensor, the DS18B20 (from Sparkfun), using a set of libraries for the sensor (from Miles Burton). The nicest thing about the new sensor is that it has an onboard 12-bit ADC, which lets it sense temperature accurately to the last tenth of a degree that I was wanting. It also doesn’t appear to suffer from the same noisy readings I was getting with the TMP36. The only downside is that it takes three-quarters of a second to get the 12-bit value and send it back to the Arduino. Which is fine for what I am doing, but it doesn’t seem ideal for other applications.

One interesting other bit about the new sensor: it can operate on only two lines; a ground line, and a data line. The data line is supposed to be connected (via a 4.7k resistor) to a voltage source. When the sensor wants to send a 0 to the uC, it connects the data line to ground, and the uC reads 0. When it wants a 1, the data pin becomes high impedance, and the uC reads a 1 coming from the pull up resistor. I assume it has some kind of in-built capacitor to keep it powered through the 0-bits. Maybe? Probably.

AI Project- Sudoku Through Constraint Satisfaction

One of the final projects in the AI class is to teach middle schoolers about an aspect of AI. I chose to teach about the fun and joy of constraint satisfaction through Sudoku. Why constraint satisfaction? Because that is one of the first AI topics I came across in the class and thought ‘Wow, this is awesome… and I haven’t seen it twenty times before’. It is a solid method for solving a specific class of problems.

To see it in action, look here. Source code for everything (from the simple (and terrible) PHP webpage to the Constraint Satisfaction Solvers) is available here.

I’ll warn though that the code isn’t completely clean. A fair amount of the code in the various solvers could have been implemented… differently, such that it wouldn’t be copy-and-paste between the ~5 solver implementations.

Also, the way possible values are determined is a horrible kludge that I should have known better than to do. Part of the goal of the Solver implementations is to make the intermediate states visible. To make that simpler, no actual recursion is used in the course of searching. Since no recursion is being used, it’s harder to make changes to intermediate state, such as ‘we need to remove values 5 and 6 from the possible values for cell 15’, and then undo those changes. Being lazy, I opted to just generate the possible values for all cells on each step of the algorithm. Terribly inefficient, but, honestly, that’s not a problem for this. Drawing out the entire board on screen probably takes much more time. What was unfortunate was when I went to try and encode Forward Checking / Arc Consistency, I needed the Variables to store their possible values in order for the code to be not terrible. As such, Arc Consistency turned out to be rather messy, and surprisingly useless on the small Sudoku grid (or I did it wrong; but surely that is impossible), so I scrapped it and moved on. But, overall, the implementation would be more robust if the variables were responsible for storing their individual values. And if I didn’t need those intermediate states.
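The “regenerate the possible values for every cell on each step” kludge boils down to something like this (a Python sketch of the idea, not the actual solver code, which is C#):

```python
def candidates(grid, row, col):
    """Possible values for an empty cell in a 9x9 grid (0 = empty).

    Recomputed from scratch: gather every value already used in the
    cell's row, column, and 3x3 box, then take the complement.
    """
    if grid[row][col] != 0:
        return set()                             # cell already assigned
    used = set(grid[row])                        # row constraint
    used |= {grid[r][col] for r in range(9)}     # column constraint
    br, bc = 3 * (row // 3), 3 * (col // 3)      # top-left of the 3x3 box
    used |= {grid[r][c]
             for r in range(br, br + 3)
             for c in range(bc, bc + 3)}         # box constraint
    return set(range(1, 10)) - used
```

Running this over all 81 cells at every step is the inefficiency described above; the cleaner design has each variable keep (and incrementally update) its own candidate set.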

In conclusion, it was fun, I learned things, hopefully middle schoolers will also learn things, or at least lie and say they did, and I now have other finals I need to worry about.

Special thanks, again, to Peter Norvig’s Solving Every Sudoku Puzzle. It provided for entertaining quotes and thoughts, and the inclusion of a 62 digit number in the final website.

ACM 2008 SCUSA Wormholes- 2nd Edition

Last time we left our time-travelling heroes, they were using repeated applications of the Bellman-Ford algorithm (at time complexity O(|V| * |E|), where |E| is on par with |V|, and the number of requests is probably also on the order of |V|, giving a total of O(|V|^3)). Our valiant programmer desperately seeks to apply the Floyd-Warshall algorithm to the problem, just so that the time spent reading the analysis of algorithms book isn’t wasted.

The secret to successfully applying the Floyd-Warshall algorithm to this problem is the same secret to getting reasonable performance out of the previous solution. You can’t look at each individual year, you instead must look at time spans between meaningful events. That brings the number of vertices down from many thousand to at most 301 (starting year + 200 distinct wormhole years + 100 mission years). It has the downside of making the process of finding weights a fair bit more complex:

public int[,] BuildInitialWeights(List<int> distinctVerticies)
{
    int[,] weights = new int[distinctVerticies.Count, distinctVerticies.Count];
    for (int i = 0; i < distinctVerticies.Count; i++)
        for (int j = 0; j < distinctVerticies.Count; j++)
            if (i == j) weights[i, j] = 0;
            else weights[i, j] = Int32.MaxValue;

    //assumes vertices are in order, from first year to last
    for (int i = 1; i < distinctVerticies.Count; i++)
    {
        int start = distinctVerticies[i - 1];
        int end = distinctVerticies[i];
        weights[i - 1, i] = end - start;
    }

    foreach (int[] years in wormholes)
    {
        int i = distinctVerticies.IndexOf(years[0]);
        int j = distinctVerticies.IndexOf(years[1]);
        int weight;
        if (years[1] > years[0]) weight = (years[1] - years[0]) / 2;
        else weight = (years[1] - years[0]) / 4;
        weights[i, j] = weight;
    }
    return weights;
}

Here we’re building a 2D matrix of weights. Informally we’re mapping each year in the distinct vertices list to its index, then using that index as coordinates in the weight matrix. Weights are constructed by first setting all weights to their default values (0 to get to the current node, max value for other nodes). Then default weights for moving forward with time are assigned. Finally, wormholes are accounted for. Wormhole accounting is done last as it will be, by definition, the shortest path between two points. This has the advantage that only one path is ever considered between two points, whereas the previous solution allowed for multiple paths.

public List<int> BuildDistinctVerticies(List<int> addedYears)
{
    List<int> years = new List<int>(addedYears);
    foreach (int[] wormhole in wormholes)
    {
        //add both the entry and exit year of each wormhole
        years.Add(wormhole[0]);
        years.Add(wormhole[1]);
    }
    years.Sort();
    //remove duplicates from the now-sorted list
    for (int i = 1; i < years.Count; i++)
        if (years[i - 1] == years[i])
            years.RemoveAt(i--);
    return years;
}

This is the simple method for building all distinct vertices. It takes all mission and starting years, adds each endpoint of every wormhole to the list, then sorts it and removes duplicates. Again we get into ‘it would be nice to have a Sorted Set’ territory (one should appear in .NET 4; or, alternatively, has always been in Java… accursed Java with its nicer Collections libraries…).

public int[,] AllPairs(List<int> missionYears, out List<int> distinctVerticies)
{
    distinctVerticies = BuildDistinctVerticies(missionYears);
    int[,] currentWeights = BuildInitialWeights(distinctVerticies);
    int[,] nextWeights = new int[distinctVerticies.Count, distinctVerticies.Count];
    for (int i = 0; i < distinctVerticies.Count; i++)
    {
        Extend(currentWeights, nextWeights, i);
        //swap buffers so the next pass reads the results of this one
        int[,] temp = currentWeights;
        currentWeights = nextWeights;
        nextWeights = temp;
    }
    return currentWeights;
}

public void Extend(int[,] previosLevel, int[,] thisLevel, int k)
{
    for (int i = 0; i < previosLevel.GetLength(0); i++)
        for (int j = 0; j < previosLevel.GetLength(1); j++)
        {
            int iToJ = previosLevel[i, j];
            int iToK = previosLevel[i, k];
            int kToJ = previosLevel[k, j];
            //only consider routing through k if both halves of that
            //path exist; adding to Int32.MaxValue would overflow
            if (iToK != Int32.MaxValue && kToJ != Int32.MaxValue)
                thisLevel[i, j] = Math.Min(iToJ, iToK + kToJ);
            else thisLevel[i, j] = iToJ;
        }
}

I’m about 90-100% certain I misspelled previous. Ignoring that, here we can see the Floyd-Warshall algorithm implemented. I’m still wrapping my head around how it works, but my best understanding is that the ‘extension’ method tests whether the shortest path from i to j can be improved by routing through k. There’s a bit more to it, but it’s too early in the morning to worry about that.
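To convince myself the extension step really does what I just described, here’s a minimal standalone sketch of the same idea (in Java, on a small hypothetical four-vertex graph). It uses the common in-place formulation, which Floyd-Warshall permits, rather than the two-buffer version above; the outer k loop plays the same role as the repeated Extend calls:

```java
public class FloydWarshall {
    static final int INF = Integer.MAX_VALUE;

    // After pass k, dist[i][j] holds the shortest path from i to j
    // using only vertices 0..k as intermediate stops.
    static void allPairs(int[][] dist) {
        int n = dist.length;
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    // skip the path through k if either half is "infinite";
                    // adding to Integer.MAX_VALUE would overflow
                    if (dist[i][k] != INF && dist[k][j] != INF)
                        dist[i][j] = Math.min(dist[i][j], dist[i][k] + dist[k][j]);
    }

    public static void main(String[] args) {
        // hypothetical graph: 0->1 (5), 1->2 (3), 0->2 (100), 2->3 (1)
        int[][] dist = {
            { 0,   5,   100, INF },
            { INF, 0,   3,   INF },
            { INF, INF, 0,   1   },
            { INF, INF, INF, 0   },
        };
        allPairs(dist);
        System.out.println(dist[0][2]); // 8: going 0->1->2 beats the direct 100 edge
        System.out.println(dist[0][3]); // 9: the improved 0->2 path extends to 3
    }
}
```

The overflow guard is the same trick the C# code needs, since both use the maximum integer value as ‘no path’.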

The distinct vertices are passed as an out parameter to the caller, so the caller has the map from index to year available.

List<int> years = new List<int>(missions);
int[,] distances = timeline.AllPairs(years, out years);
foreach (int destination in missions)
{
    int i = years.IndexOf(startingYear);
    int j = years.IndexOf(destination);
    int toDistance = distances[i, j];
    int fromDistance = distances[j, i];
    if (toDistance == Int32.MaxValue || fromDistance == Int32.MaxValue)
    {
        //no valid round trip for this mission; output handling not shown
    }
}

Finally, the relevant changes to the problem solving method. The downside to using this approach is that there isn’t a direct mapping from year to index, so a linear search is performed to get that mapping. It’s not really an issue, considering what else is being done, but it is a bit unfortunate.
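If those linear IndexOf searches ever did matter, one obvious fix would be to build a year-to-index dictionary once up front, making each lookup O(1). A quick sketch (in Java, with hypothetical years):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class YearIndex {
    // Build the year -> index map once, instead of doing a
    // linear IndexOf search per lookup.
    static Map<Integer, Integer> buildIndex(List<Integer> distinctYears) {
        Map<Integer, Integer> indexOfYear = new HashMap<>();
        for (int i = 0; i < distinctYears.size(); i++)
            indexOfYear.put(distinctYears.get(i), i);
        return indexOfYear;
    }

    public static void main(String[] args) {
        // hypothetical sorted list of distinct meaningful years
        List<Integer> years = List.of(1950, 1975, 2000, 2012, 2050);
        System.out.println(buildIndex(years).get(2012)); // prints 3
    }
}
```

With at most a few hundred distinct years per data set it genuinely doesn’t matter here, but the pattern scales if it ever does.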

Pseudo-analysis might be more accurate. Comparing this to the previous solution, I like the fact that the algorithm seems more straightforward, and that in the end we get a massive table of possible paths, should our time travelling agents ever need to do other strange things. It also has a slight performance advantage over the straight version from last time. (It’s about on par with the version of Bellman-Ford that stops early when no relaxations occur, though that might be in part due to my poor implementation of that algorithm.)

That said, in its current form it adds about 30 lines of code, and the indirect mapping from index to representative year feels ungood. In the end, the question for ACM problems is ‘which could you code fastest with the lowest probability of error’, and looking at the two, I think Bellman-Ford would be a bit easier, specifically because it uses a list of edges rather than a weight table. I could be wrong though, and the performance does seem to favor Floyd-Warshall.

Side Rant
I seem to be rather poor at a very specific bit of contestifying: seeing the points where you can discard unimportant information. In this problem, it was my attempt to treat every year as a vertex. In a problem about a disk defragmenter, I was tempted to treat the disk as a series of sectors rather than a series of filled and unfilled ranges (which would have massively increased simulation time, and probably overcomplicated the code; I didn’t get to find out due to the contest ending). This seems like a common theme for me: I constantly attempt the simplest solution that could possibly work, and these problems are more often than not tuned to make that ‘the wrong way’. I suspect this comes from my many years of employment, where you’re not necessarily seeking the best solution, but rather just something that works well enough to get by. That is, of course, part of why I started this crazy blog: to get better at recognizing when I’m going down an incredibly wrong path.

Full Source Code for this solution.

ACM 2008 SCUSA Wormholes

South Central USA has to be one of the odder acronyms in common use… a single character away from SCUBA no less.

The last time I went to ACM, this problem was delegated to one of my teammates, and for one reason or another we never got it solved. I was reading through an Analysis of Algorithms book, came across a section on All Pairs Shortest Path, and thought ‘this would apply perfectly and elegantly to this situation’. (I was wrong, but we’ll get to that soon.) (Also, judge data can be had at this location, and I have been cheating miserably.)

You work at Time Travel Inc. in the payroll department. Sick of keeping up with all the math related to time travel, you contracted yourself (while in college) to develop an application to determine how much a person has aged during their time travel, and whether or not their last journey was even physically possible. More specifically, time travel is achieved by portals which appear at specific moments in time and exit at other specific moments, either aging or youthifying the human in the process according to a specific formula; and ignoring the concepts that you could die, or be negative years old.

This is a graph traversal problem with negative-weight directed edges. The goal is to find the minimum weight path that takes you from a start to a destination and back to the start; or, equivalently, two graph traversals.

As I said, I had read a bit on All Pairs Shortest Paths and was interested to see how it would handle this. Specifically, I was thinking: if you treat each year as a vertex, then each vertex has a weight-one edge to the next year and, if a wormhole is present, an edge to that destination year. At the time, I thought the input was constrained to a few hundred years near the 2000s (where else would you time travel to?). The actual range of years is 1-10,000. 10,000 vertices, two 10,000 × 10,000 matrices for storing path weights… no. Not gonna work. (As we’ll see in a bit, this is actually a bit of a lie.)

Still curious to further my learning, I read up on the Bellman-Ford algorithm, which is a single-source shortest paths algorithm. It rests on the observation that a shortest path passes through each vertex at most once, and so contains at most |V| − 1 edges; you can therefore determine the minimum cost to get anywhere by going through each edge, relaxing the edge’s destination weight if appropriate, and repeating that |V| times. Complexity is O(|V|·|E|). That actually sort of worked, in the sense that it produced an answer for most problems in under a minute. But under a minute isn’t quite ideal.
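The relaxation loop at the heart of Bellman-Ford is short. Here’s a minimal sketch (in Java, on a hypothetical three-vertex graph with one negative edge, which is exactly the kind of edge the backward wormholes produce):

```java
import java.util.Arrays;
import java.util.List;

public class BellmanFord {
    record Edge(int from, int to, int weight) {}
    static final int INF = Integer.MAX_VALUE;

    // Single-source shortest paths; unlike Dijkstra, this
    // tolerates negative edge weights (but not negative cycles).
    static int[] shortestPaths(int vertexCount, List<Edge> edges, int source) {
        int[] dist = new int[vertexCount];
        Arrays.fill(dist, INF);
        dist[source] = 0;
        // a shortest path uses at most vertexCount - 1 edges,
        // so that many rounds of relaxing every edge suffice
        for (int round = 0; round < vertexCount - 1; round++) {
            boolean relaxed = false;
            for (Edge e : edges)
                if (dist[e.from()] != INF && dist[e.from()] + e.weight() < dist[e.to()]) {
                    dist[e.to()] = dist[e.from()] + e.weight();
                    relaxed = true;
                }
            if (!relaxed) break; // nothing changed this round: already converged
        }
        return dist;
    }

    public static void main(String[] args) {
        // hypothetical edges: 0->1 (4), 1->2 (-2), 0->2 (5)
        List<Edge> edges = List.of(
            new Edge(0, 1, 4), new Edge(1, 2, -2), new Edge(0, 2, 5));
        int[] dist = shortestPaths(3, edges, 0);
        System.out.println(dist[2]); // 2: the path 0->1->2 beats the direct 5 edge
    }
}
```

The early-exit flag is the same ‘stop when no relaxations occur’ optimization discussed later in this post.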

To get reasonable response times, you can’t simulate every single meaningless year. You have to focus on only the meaningful ones and declare edges between them. If you’re in 1950 and want to get to 2050, yes, you can look at it as one hundred 1-year edges, but it would be less pain to look at it as one 100-year edge… or, if a wormhole happens to open up in 2012, a 62-year edge followed by a 38-year edge. The downside to ignoring meaningless years is that it adds a fair bit of complex, non-intuitive code, just for the purpose of filling in the ‘time moves forward at a constant rate’ bits.
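The 1950-to-2050 example can be sketched directly (in Java, with those same hypothetical years): collect the meaningful years, sort and deduplicate them, and emit one forward edge per consecutive pair.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

public class CompressYears {
    // Collapse runs of meaningless years into a single edge per
    // consecutive pair of distinct meaningful years.
    // TreeSet handles both the sorting and the deduplication.
    static List<int[]> forwardEdges(TreeSet<Integer> years) {
        List<int[]> edges = new ArrayList<>();
        Integer prev = null;
        for (int year : years) {
            if (prev != null)
                edges.add(new int[] { prev, year, year - prev }); // {from, to, weight}
            prev = year;
        }
        return edges;
    }

    public static void main(String[] args) {
        // hypothetical years: start 1950, destination 2050, wormhole opens in 2012
        TreeSet<Integer> years = new TreeSet<>(List.of(1950, 2050, 2012));
        for (int[] e : forwardEdges(years))
            System.out.println(e[0] + " -> " + e[1] + " (weight " + e[2] + ")");
        // prints the 62-year edge 1950 -> 2012, then the 38-year edge 2012 -> 2050
    }
}
```

Wormhole edges would then be layered on top of these forward edges, exactly as the solutions below do.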

With that said, you could apply the same reasoning to all pairs shortest path, so long as you only kept track of the vertices you actually cared about.

Source Code
As this one is a tad long, and far from being the greatest, I’m only going to post bits and pieces of it.

public static void ProcessProblem(int id, TextReader input)
{
	Console.WriteLine("DATA SET #" + id);
	Timeline timeline = new Timeline();
	int wormHoleCount = ReadInt(input);
	for (int i = 0; i < wormHoleCount; i++)
		//read each wormhole's entry and exit year and register it
		timeline.AddWormhole(new int[] { ReadInt(input), ReadInt(input) });
	int startingYear = ReadInt(input);
	int missionCount = ReadInt(input);
	for (int m = 0; m < missionCount; m++)
	{
		int destination = ReadInt(input);
		int distance = timeline.ShortestPath(startingYear, destination);
		int returnDistance = timeline.ShortestPath(destination, startingYear);
		if (distance == Int32.MaxValue || returnDistance == Int32.MaxValue)
		{
			//no valid round trip for this mission; output handling not shown
		}
	}
}

Above is the primary problem solving bit. We create a timeline, register wormholes, then go through each mission and determine if the destination can be reached from the starting point, and if the starting point can be reached from the destination. Note that we’re using the max value of an int as a ‘no path’ return code.

class Timeline
{
	private List<int[]> wormholes = new List<int[]>();

	public void AddWormhole(int[] wormhole)
	{
		wormholes.Add(wormhole);
	}

	public int ShortestPath(int sourceYear, int destYear)
	{
		List<Edge> edges = new List<Edge>();
		List<int> vertexes = new List<int>();
		vertexes.Add(sourceYear); vertexes.Add(destYear);
		foreach (int[] wormhole in wormholes)
		{
			vertexes.Add(wormhole[0]); vertexes.Add(wormhole[1]);
			edges.Add(new Edge(wormhole[0], wormhole[1], true));
		}
		vertexes.Sort();
		//remove duplicate years from the sorted vertex list
		for (int i = 1; i < vertexes.Count; i++)
			if (vertexes[i] == vertexes[i - 1])
				vertexes.RemoveAt(i--);
		//'time moves forward' edges between consecutive years
		for (int i = 1; i < vertexes.Count; i++)
			edges.Add(new Edge(vertexes[i - 1], vertexes[i], false));
		Dictionary<int, int> distances = new Dictionary<int, int>();
		distances[sourceYear] = 0;
		//Bellman-Ford: relax every edge, |V| times over
		for (int i = 0; i < vertexes.Count; i++)
			foreach (Edge edge in edges)
			{
				if (!distances.ContainsKey(edge.start)) continue;
				int distance = distances[edge.start] + edge.weight;
				if (!distances.ContainsKey(edge.end) || distances[edge.end] > distance)
					distances[edge.end] = distance;
			}
		if (distances.ContainsKey(destYear))
			return distances[destYear];
		return Int32.MaxValue;
	}
}

And here we have the meat of the problem: the graph traversal algorithm/Timeline class. The first portion of the ShortestPath method is solely responsible for building a list of edges (note: the third parameter to the Edge constructor is ‘isWormhole’). Once that is constructed, the Bellman-Ford relaxation is repeatedly applied to get the distance from the start point to every other point. Bellman-Ford performance can actually be improved slightly by not continuing to iterate if no relaxations were needed on a given iteration. On my machine, execution time went from about 3s to 0.5s. (Also, as a fun aside, note the sad lack of a .NET Set class.)

That’s all for now. I think I’ll reimplement this with an all pairs shortest path just to see if it would meet time constraints on execution; and make a post about that later.

Full Source Code for First Solution.
