Posts in the Game Development category

Announcing the Microsoft @ GDC App


So this is nice :) My Windows Phone app just hit the Windows Phone Marketplace (you can find it in the marketplace under Microsoft @ GDC – or just click this link to install it directly on your phone). It's a little app that gives you details on all of the conference sessions and talks that Microsoft are sponsoring, putting on, or otherwise giving at GDC this year.

$10,000 Is The Magic Number

Hi there, #AltDevBlogADay denizens! I'm Simon Cooke, and this post is all about how to budget your game in preparation for pitching it to a studio. How do I know anything about this? Well… back in 2008 I was incredibly lucky to join X-Ray Kid Studios (a group of very talented people involved in the creation of Google Lively, with a history in comic books, animation and video games going back decades) as their Director of Engineering. As time went on, I also shifted sideways into the role of Business Development Manager, where I helped take us from concept to pitching a number of game titles to several publishers. Unfortunately, we ran out of steam before we could sign a deal, but I learned a lot along the way. (X-Ray Kid is still around, though – it just morphed.)

The BipBuffer Rides Again

So a while back, I invented a data structure I call the Bip-Buffer. It's a rather neat little thing: if you pump data through it fast enough, it works pretty much as fast as a true circular buffer. It has a nice data-access pattern too, based around reserve/commit transactions – you push data in one side, and pull it out of the other.
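For flavour, here's a stripped-down, single-threaded sketch of the idea. The method names approximate the real API, but treat this as a teaching toy – the real implementation handles the thread-safety and edge cases this one ignores.

```cpp
// A toy, single-threaded sketch of the Bip-Buffer idea: two regions (A
// and B) inside one flat allocation, so the writer and the reader each
// always see a single contiguous span. Not the shipping implementation.
#include <cstddef>
#include <vector>

class BipBufferSketch
{
public:
    explicit BipBufferSketch(size_t size)
        : mBuffer(size), mAStart(0), mAEnd(0), mBEnd(0), mUsingB(false) {}

    // Writer side: reserve a contiguous span of up to 'wanted' bytes.
    // 'granted' says how much you actually got; returns null when full.
    char* Reserve(size_t wanted, size_t& granted)
    {
        if (mUsingB || FreeAfterA() < FreeBeforeA())
        {
            mUsingB = true;                      // fill the gap before A
            granted = Min(wanted, mAStart - mBEnd);
            return granted ? &mBuffer[mBEnd] : 0;
        }
        granted = Min(wanted, FreeAfterA());     // extend after A
        return granted ? &mBuffer[mAEnd] : 0;
    }

    // Writer side: make 'written' bytes of the reservation readable.
    void Commit(size_t written)
    {
        if (mUsingB) mBEnd += written; else mAEnd += written;
    }

    // Reader side: the front of region A is always one contiguous block.
    const char* GetContiguousBlock(size_t& available)
    {
        available = mAEnd - mAStart;
        return available ? &mBuffer[mAStart] : 0;
    }

    // Reader side: release bytes; when A drains, B is promoted to A.
    void DecommitBlock(size_t read)
    {
        mAStart += read;
        if (mAStart == mAEnd)
        {
            mAStart = 0; mAEnd = mBEnd;
            mBEnd = 0; mUsingB = false;
        }
    }

private:
    size_t FreeAfterA() const  { return mBuffer.size() - mAEnd; }
    size_t FreeBeforeA() const { return mAStart - mBEnd; }
    static size_t Min(size_t a, size_t b) { return a < b ? a : b; }

    std::vector<char> mBuffer;
    size_t mAStart, mAEnd, mBEnd;
    bool mUsingB;
};
```

The payoff over a plain circular buffer is that a write never has to be split across the wrap point: you either get one contiguous span back from Reserve, or you don't.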

It works. Lots of people use it. Some people claim they did it first back in the 90s on the Amiga, but they didn't publish. (Which means, sadly for them, they don't count. No point inventing something if you don't share the knowledge). The price you pay for fame and fortune is letting other people know what you did ;-)

The moral of the story? Give your algorithms unique names. You'll be able to see where they proliferate to. As a result, I can stake a claim on my own tiny little corner of computer science.

Anyway, this is all beside the point. Why am I posting this?

Because Age of Empires Online uses my BipBuffer code & algorithm. A friend of mine is working on the source right now, and he sent me a quick email to let me know it was in there. It's great to know that it's being used :)

(And because he sent me an email, he's actually meeting the terms of the licensing agreement :-) )


New Blog from Jason Weiler (Airtight Games)

My good pal Jason Weiler has posted up his first article on his new blog - Myopic Topics (http://myopictopics.com).

This one's the first part in a series about writing an add-in for Visual Studio's Evaluation Engine (the bit that tells the Locals/Autos/Watch windows how to format and display variables).

He's also the author of the FName-Addin for Unreal Engine 3 – an awesome, donation-ware tool that lets you view the actual string values of FNames in the watch window. It's an absolute must-have for anyone doing Unreal development, so download it right now!

And if you do download it, please, make a donation. I pay $150 or so (plus renewal fees every year) for Visual Assist – if I'm working on Unreal Engine, I'm pretty sure I can throw $50 his way for something that massively takes the pain out of Unreal development.

Anyway, check it out and subscribe to his blog! He's one of the good ones :)


Quick rules for Cache-Friendly Algorithms

Games programming on current architectures requires that you pay a certain amount of attention to your algorithms to make sure they're cache-friendly.

It takes a long time – easily hundreds of cycles on current hardware – for a memory access to come back when the data isn't in the cache. And cache lines are getting longer and longer.

Interestingly, software engineers solved this problem a long time ago, with algorithms meant for databases with really really slow IO pipes, and disk sector sizes of anywhere from 128 to 1024 bytes.

Those sizes should be a big hint to any seasoned programmer; they’re similar to the size of cache lines in modern systems. Which means that we can take algorithms that were designed to work really well for those old systems and later discarded, dig them up, dust them off and use them to create really blazingly fast routines for handling in-memory data.

Everything that’s old is new again.

So without further ado, here’s a few recommendations:

Prefer M-ary trees to Binary or Red/Black trees.

M-ary trees store several keys per node; binary trees store only one. Every node you touch is a potential cache miss, so searching N keys costs you roughly log2(N) misses with one key per node, but only logM(N) with M. If you store as many keys per node as you can fit into a single cache line, the extra comparisons are essentially free – the line is already loaded.
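For example (the layout below is an assumption – 32-bit keys and a 64-byte cache line; pick M to fit your target):

```cpp
// Sketch: pack an M-ary search tree node into one 64-byte cache line.
// Assumes 32-bit keys and 64-byte lines; the child indexing scheme is
// illustrative.
#include <cstdint>

struct alignas(64) MaryNode
{
    static const int M = 14;      // 14 keys * 4 bytes = 56 bytes
    uint32_t key[M];              // sorted keys: one line fetch
    uint16_t count;               // keys actually in use
    uint16_t firstChild;          // index of this node's first child
    uint32_t pad;                 // keep the node at exactly 64 bytes
};

// One node = one cache miss; within it, the search is a linear scan
// over data that's already hot.
int FindChildSlot(const MaryNode& node, uint32_t key)
{
    int i = 0;
    while (i < node.count && node.key[i] <= key)
        ++i;
    return i;                     // descend into child firstChild + i
}
```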

If data you need to search changes rarely, use Indexed Sequential Access

This is a bi-level lookup scheme, related to M-ary trees: a small top-level index records the first key of each block of the sorted data, so a lookup touches the (hot, compact) index once and then exactly one block.
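Something like this sketch (all names and the block size are illustrative):

```cpp
// Sketch of a two-level indexed-sequential lookup over sorted data.
// Blocks are sized so that each one is a single cache-line fetch.
#include <algorithm>
#include <cstdint>
#include <vector>

static const size_t kBlockKeys = 16;  // 16 x 4-byte keys = one 64B line

struct IsamTable
{
    std::vector<uint32_t> index;  // first key of each block; small & hot
    std::vector<uint32_t> keys;   // the sorted data, laid out in blocks
};

// Returns the position of 'key' in table.keys, or -1 if it's absent.
int IsamFind(const IsamTable& table, uint32_t key)
{
    // Level 1: pick a block from the compact index. The whole index
    // usually fits in a handful of cache lines.
    std::vector<uint32_t>::const_iterator it =
        std::upper_bound(table.index.begin(), table.index.end(), key);
    if (it == table.index.begin())
        return -1;                // smaller than everything we hold
    size_t block = (it - table.index.begin()) - 1;

    // Level 2: one line fetch, then a linear scan within the block.
    size_t begin = block * kBlockKeys;
    size_t end   = std::min(begin + kBlockKeys, table.keys.size());
    for (size_t i = begin; i < end; ++i)
        if (table.keys[i] == key)
            return (int)i;
    return -1;
}
```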

If your data needs to be flexible and searchable, use B-Trees

B-trees are M-ary trees that stay balanced without requiring every node to hold M items – each node is allowed to hold anywhere between M/2 and M keys (the root is exempt), which is what keeps inserts and deletes cheap.
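The node shape barely changes from the M-ary sketch above (again, illustrative):

```cpp
// Sketch of a B-tree node. The only real change from the M-ary node is
// that 'count' is allowed to float between M/2 and M, which is what
// lets inserts and deletes stay cheap.
#include <cstdint>

struct BTreeNode
{
    static const int M = 14;
    uint32_t   key[M];        // only key[0..count) are live
    uint16_t   count;         // M/2 <= count <= M (the root is exempt)
    uint16_t   isLeaf;
    BTreeNode* child[M + 1];  // child[i] covers the keys below key[i]
};
```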

If you really must use a hashtable, consider using Extendible Hashing

Extendible hashing combines the properties of B-Trees, Indexed Sequential Access and hashtables: keys live in bucket pages you can size to a cache line, and a compact directory grows by doubling, so when a bucket overflows only that bucket splits and everything else stays put – pages stay coherent.
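Here's a deliberately minimal sketch of the mechanism (single-threaded, illustrative names, no deletion or duplicate handling):

```cpp
// Minimal extendible hashing sketch. A directory of 2^globalDepth
// pointers selects cache-line-sized buckets by the low bits of the
// hash. Start with globalDepth 0 and one empty bucket.
#include <cstdint>
#include <vector>

static const int kBucketCap = 7;  // keys per bucket; size to a line

struct Bucket
{
    int      localDepth;          // hash bits this bucket "owns"
    int      count;
    uint32_t key[kBucketCap];
};

struct ExtendibleHash
{
    int globalDepth;              // dir.size() == 1 << globalDepth
    std::vector<Bucket*> dir;
};

void Insert(ExtendibleHash& h, uint32_t hash)
{
    for (;;)
    {
        Bucket* b = h.dir[hash & ((1u << h.globalDepth) - 1)];
        if (b->count < kBucketCap)
        {
            b->key[b->count++] = hash;
            return;
        }

        if (b->localDepth == h.globalDepth)
        {
            // Double the directory: entry i and i + oldSize alias the
            // same buckets until one of them splits.
            size_t oldSize = h.dir.size();
            h.dir.resize(oldSize * 2);
            for (size_t i = 0; i < oldSize; ++i)
                h.dir[i + oldSize] = h.dir[i];
            ++h.globalDepth;
        }

        // Split only the full bucket, on its next hash bit.
        Bucket* fresh = new Bucket();
        fresh->localDepth = ++b->localDepth;
        uint32_t splitBit = 1u << (b->localDepth - 1);
        int kept = 0;
        for (int i = 0; i < b->count; ++i)
        {
            if (b->key[i] & splitBit)
                fresh->key[fresh->count++] = b->key[i];
            else
                b->key[kept++] = b->key[i];
        }
        b->count = kept;

        // Repoint the directory entries that now belong to 'fresh'.
        for (size_t i = 0; i < h.dir.size(); ++i)
            if (h.dir[i] == b && (i & splitBit))
                h.dir[i] = fresh;
        // Retry: the key might still land in a bucket that's full.
    }
}
```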

If you need to do lots of data filtering, and then perform the same operation on each object, use a Gather/Scatter approach

  1. Filter your dataset using a given set of criteria
  2. Generate a sequential list of matching objects in scratch space.
  3. Prime the cache as you start processing the list, and keep pumping it every few objects.
  4. Generate a sequential list of results if further processing is necessary, or scatter the results back out to the objects (prefetching the (N+W)th object as you process the Nth; you'll need to tune W based on the size of your cache – see the sketch below).
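Put together, the whole thing looks something like this (Npc, the filter and the update are invented for illustration; W – kPrefetchAhead here – is something you tune on real hardware):

```cpp
// Sketch of gather/process/scatter with software prefetch.
#include <vector>
#include <xmmintrin.h>                // _mm_prefetch / _MM_HINT_T0

struct Npc { float x, y, z; float health; /* ...plenty more... */ };

static const size_t kPrefetchAhead = 4;   // "W"

void HealWounded(std::vector<Npc*>& world, std::vector<Npc*>& scratch)
{
    // Steps 1 & 2: filter, gathering matches into a dense scratch list.
    scratch.clear();
    for (size_t i = 0; i < world.size(); ++i)
        if (world[i]->health < 50.0f)
            scratch.push_back(world[i]);

    // Steps 3 & 4: walk the dense list, priming the cache W objects
    // ahead, and scatter results straight back into the objects.
    for (size_t i = 0; i < scratch.size(); ++i)
    {
        if (i + kPrefetchAhead < scratch.size())
            _mm_prefetch((const char*)scratch[i + kPrefetchAhead],
                         _MM_HINT_T0);
        scratch[i]->health += 1.0f;   // the same operation on each object
    }
}
```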

Consider Batch Sorting Operations instead of Quicksort

Don’t use Quicksort. On a large dataset you’ll quickly run into cache issues as its partitioning bounces all over the place. Use Merge Sort instead (look it up in Knuth if you don’t remember that one) – every pass streams through memory linearly, and it has the nice property that you can run it in parallel on multiple cores.
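A bottom-up version over a flat array shows why it's cache-friendly (a sketch, sorting plain ints for simplicity):

```cpp
// Bottom-up merge sort over a flat array. Every pass streams its two
// inputs and its output linearly - exactly the access pattern caches
// and prefetchers love. The merges within one pass are independent, so
// they can be handed out to separate cores.
#include <algorithm>
#include <vector>

void MergeSortFlat(std::vector<int>& data, std::vector<int>& scratch)
{
    size_t n = data.size();
    if (n < 2)
        return;
    scratch.resize(n);
    int* src = &data[0];
    int* dst = &scratch[0];

    // Run length doubles each pass: 1, 2, 4, ... all linear streams.
    for (size_t width = 1; width < n; width *= 2)
    {
        for (size_t lo = 0; lo < n; lo += 2 * width)
        {
            size_t mid = std::min(lo + width, n);
            size_t hi  = std::min(lo + 2 * width, n);
            std::merge(src + lo, src + mid, src + mid, src + hi, dst + lo);
        }
        std::swap(src, dst);
    }
    if (src != &data[0])              // sorted run ended up in scratch
        std::copy(src, src + n, &data[0]);
}
```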

Shadow Your Most Accessed Data Inside Your Keys When Necessary

Store data you need for your most common comparisons alongside the pointers to the objects – make them part of your key. You can ignore that part of the key if you’re searching the database for anything else, and your most common operation will be optimized because you won’t have to hit a random chunk of memory to pull out the value for comparison. (Good candidates for this might include an NPC’s location and bounding-sphere radius, which could be stored alongside the pointer to the NPC itself).
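As a sketch (the Npc type and its fields are invented for illustration):

```cpp
// A "fat key" that shadows the hot data next to the pointer. A query
// runs entirely over the key array; the cold object is only touched
// after the cheap test passes.
#include <cstdint>

struct Npc;               // the full object, somewhere cold in memory

struct NpcKey
{
    float    x, y, z;     // shadowed: position...
    float    radius;      // ...and bounding-sphere radius
    uint32_t sortKey;     // whatever the container is actually keyed on
    Npc*     npc;         // only dereferenced on a confirmed hit
};

// Sphere-overlap test with no pointer chasing at all.
bool Overlaps(const NpcKey& k, float cx, float cy, float cz, float r)
{
    float dx = k.x - cx, dy = k.y - cy, dz = k.z - cz;
    float reach = k.radius + r;
    return dx * dx + dy * dy + dz * dz <= reach * reach;
}
```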

Defrag / Sort your Data On the Fly

As you operate on your data, if you keep a secondary buffer around, you can generate a new data set into it; at the end of the operation, you just swap the pointer to the old dataset for the new one. This massively reduces cache thrashing. Better yet, if your data is temporally coherent and you sort it each frame, most of it stays sorted naturally for minimal cost – it just kind of drops out in the mix. You can check the ordering as you go, and only re-sort when necessary. (If you’re clever, you can even figure out whether you just need to re-sort the current cache line, or whether you need to be more exhaustive.)

Another advantage here: your secondary buffer? You don’t need to synchronize on it, because only the update pass writes to it. Your first buffer? You don’t need to synchronize on that either, because during the pass everything only reads from it. So you can go MASSIVELY parallel – you’re performing something akin to finite element analysis on your dataset. The only time you need to worry is when one calculation affects the output of another; but if you make a list of those cases, you can special-case them.
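In sketch form (the types are illustrative; the swap at the end of the frame is the single synchronization point):

```cpp
// Double-buffered dataset with an opportunistic re-sort. Readers only
// touch 'front' during the frame; the update pass only writes 'back'.
#include <algorithm>
#include <vector>

struct Entity { float sortKey; /* payload... */ };

static bool KeyLess(const Entity& a, const Entity& b)
{
    return a.sortKey < b.sortKey;
}

struct EntityList
{
    std::vector<Entity> front;  // what everyone reads this frame
    std::vector<Entity> back;   // what the update pass writes into

    // Called once per frame, after the update pass has filled 'back'.
    void EndOfFrameSwap()
    {
        // Temporally coherent data comes out almost sorted; only pay
        // for a sort when a cheap linear check says order was lost.
        bool sorted = true;
        for (size_t i = 1; i < back.size() && sorted; ++i)
            sorted = !KeyLess(back[i], back[i - 1]);
        if (!sorted)
            std::sort(back.begin(), back.end(), KeyLess);

        front.swap(back);       // O(1): the buffers trade places
        back.clear();           // ready for next frame's update pass
    }
};
```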

If you’re dealing with small chunks of data, go Linear

With small amounts of data, the cost of pulling it into the cache from main memory dwarfs the cost of any calculation you then perform on it. Once it’s resident in one or two cache lines, even a “dumb” quadratic algorithm finishes before a cleverer, indirection-heavy one would have serviced its first extra miss.

So don’t be afraid to use bubble-sorts, or linear searches on data that’s tiny and doesn’t use indirection. Of course, use this in moderation, and test first.
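For instance (crossover points vary wildly by platform, so measure before you commit):

```cpp
// Brute force on small, flat data: one or two cache lines, no pointer
// chasing, trivially prefetched.
#include <cstddef>

// Linear search over a tiny array.
int FindSmall(const int* values, size_t count, int wanted)
{
    for (size_t i = 0; i < count; ++i)
        if (values[i] == wanted)
            return (int)i;
    return -1;
}

// Bubble sort: quadratic, but on ~16 hot elements the whole thing runs
// out of L1, and the simplicity beats cleverer constant factors.
void SortSmall(int* values, size_t count)
{
    for (size_t pass = 1; pass < count; ++pass)
        for (size_t i = 0; i + pass < count; ++i)
            if (values[i + 1] < values[i])
            {
                int tmp = values[i];
                values[i] = values[i + 1];
                values[i + 1] = tmp;
            }
}
```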

Further Reading

Look up “Batch Processing Algorithms” and “External Searching” on Google.

Read database books. I can personally recommend the “Inside SQL Server” books for ideas on how to handle slow memory access (just take anything that refers to disk IO and mentally replace it with cache lines).
