About two weeks ago I mentioned I was looking into integrating the open source library gameswf into our little 2D engine for iOS. I was at the point where I needed a UI solution for Outwitters, and I had a few options:

  1. Keep doing what I was doing with Tilt to Live and create the UI by hand-coding everything and procedurally animating it.
  2. Create a UI editor to help automate some of the UI features I had created in Tilt to Live.
  3. Look into gameswf and see if I could go for the whole shebang and just use Flash for the UI.

I’m happy to report that gameswf seems to be doing just dandy on iOS. While researching, I heard a lot of complaints about performance and compatibility (it only supports ActionScript 2.0), and if you use a lot of complicated vector shapes you can bog down the FPS. But reading the post dates on a lot of those articles, I found that some pre-dated the 3GS! So in the end, I set aside a week to mess around with gameswf on an iPad 2 and try to convert some of the hand-coded UI in our prototype to Flash.

Exhibit A

Just a disclaimer: everything you see in this video is a work in progress and extremely early. It was mostly a mock-up to play around with the UI.

So in the video you see the main menu with the Outwitters logo, some buttons, a blue blade, and a background illustration being animated. This is entirely made in Flash with PNGs and some static text as buttons (just a full-screen swf). As I press a map to load, a pop-up blinks into existence asking if I want to resume a previous game. This pop-up doesn’t use any external bitmaps or PNGs; it was made as a vector-shape rectangle with curved corners and some static text (which actually becomes bitmaps, but that is handled internally by gameswf). And finally, on the mock-up game board screen you see a flashy banner come across the screen announcing the player’s turn, shimmering, and then pulling away when the player taps the screen.

The animations weren’t anything too special, as my Flash skills are pretty horrible these days, but hopefully you can see that in the right hands this can be an incredibly useful tool for making games look that much prettier and more engaging.

The best thing to come out of this is that the artist, Adam, now has a large amount of creative control over how the UI will look and feel. Before, it fell heavily on the programmer to make things work, and the fidelity suffered for it.

The Downside

Now for the things that might be problematic going forward:


Power-of-2 Textures

Nothing has changed on this front as far as I know: images are required to have power-of-2 dimensions. When you’re working with the lower-level APIs this isn’t a difficult limitation to overcome, because you have so much control over what’s being loaded and how it’s organized in memory (for example, texture atlases). With gameswf, images are handled internally, and it does one of two things:

  1. If the image is a non-power-of-2 size, it is resampled into a power-of-2 size using a software resizing algorithm. This can be a rather expensive performance hit if you don’t take it into account. A side effect is that images going down this path also end up rather blurry.
  2. If the image’s height and width are each a power of 2, it uses the image directly.

My current solution? Just making image sizes a power of 2 before importing them into the Flash library. Yeah, it’s a lot of wasted space, but given how the UI functions currently, these aren’t long-lived objects, so the memory is freed shortly after the user dismisses the screen or moves on. Of course, any Flash-based UI elements on the actual gameplay screen have to be optimized. Sprite sheets aren’t exactly feasible in the traditional sense, but you can emulate sprite-sheet functionality in Flash by creating a movie clip that is masked differently on each of its frames. It’s tedious, but if you need the extra memory savings this could be the way to go. There are a few other memory-saving techniques I haven’t explored, like shared libraries, that might provide some benefit as well.
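If you pad your images before importing them, the padding target is just the next power of two for each dimension. A small helper like the following (my own sketch, not part of gameswf) computes it with the usual bit-fill trick:

```cpp
#include <cassert>

// Round a dimension up to the next power of two, so an image can be
// padded before import and skip gameswf's software resampling path.
static unsigned nextPowerOfTwo(unsigned v) {
    if (v == 0) return 1;
    --v;                       // handle exact powers of two
    v |= v >> 1;  v |= v >> 2; // smear the highest set bit downward
    v |= v >> 4;  v |= v >> 8;
    v |= v >> 16;
    return v + 1;
}
```

So a 100×37 source image would be padded out to a 128×64 canvas before import.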

In any case, if you’re targeting really low-end hardware (3G and below), you’ll most likely run into memory issues. The iPad 2 handled it just fine, but I imagine once I get down to the 3GS and iPad 1 I’ll have to tighten my belt on memory usage a bit. It certainly seems doable, though.


Multiple Resolutions

I had spent about a week building a small body of Photoshop scripts, in conjunction with TexturePacker, to streamline our process for creating assets at different native resolutions. The upshot is we have the assets. The downside is they can’t be used as-is: I don’t have a ‘swf generator’ to produce iPhone, retina, and iPad resolution swfs. So right now it’s looking like a completely manual process of creating separate swfs for each platform/resolution. Of course, if our swfs were 100% vector graphics, this wouldn’t be an issue :).

I’ve pondered some possible solutions, but haven’t experimented with any yet. One idea I had after browsing the source was to use the ‘scaling’ feature of gameswf to scale the rendering of swfs, and inside the actual Flash files do some ActionScript magic to display the correctly sized graphics in the UI. I’m not sure how much tedium this would save in the long run without some sort of automated way to do it. On one hand, Adam or I would be laying out three individual Flash files. On the other, we’d be laying out a single Flash file, but then I’d be scripting the resolution changes in the file itself. I’d have to come up with some sort of ‘utility’ movie clips in the Flash library to help with this, but maybe it could work better than managing three files for every UI screen?


Performance

Given the tests I’ve done so far, nothing has led me to believe that a commercial-quality game on the iPhone/iPad with gameswf isn’t possible. In fact, I’m sure it’s probably been done by now and just hasn’t been talked about, which is kind of strange considering how awesome it is. But maybe that’s just me geeking out. This isn’t so much a concrete downside as an ‘unknown’. My concern is, once we’ve got a lot of in-engine animations/particles running, how will it fare with a Flash player running the UI? The other caveat is that Outwitters lends itself really well to this type of setup, as it’s not a very fast-paced game. So this solution might only be feasible for particular types of games (in the same way that using UIKit and Core Graphics instead of OpenGL is feasible for some types of games). I guess only time will tell.

The Upside

Some of the positives of using gameswf in particular:

Lazy loading of images

When a bitmap is rendered inside gameswf, the texture isn’t sent to OpenGL until the first time it is drawn. That’s a nice thing to know: we can ‘stuff’ our Flash files with image assets and not worry about them taking up tons of memory at runtime, as long as we don’t render them all at once. This is useful for HUDs that are themed, in our case, by team colors, shapes, and logos. A point of caution: the image *isn’t* unloaded from memory afterwards (even if you call unloadMovie() from ActionScript on an externally loaded Flash file).
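The lazy-upload pattern is easy to mimic in your own engine code too. A minimal sketch (illustrative names, not gameswf’s actual internals):

```cpp
#include <vector>

// Keep the decoded pixels around, and only hand them to the GPU the
// first time the bitmap is actually drawn. Until then it costs only
// CPU-side memory, not texture memory.
struct LazyTexture {
    std::vector<unsigned char> pixels; // decoded image data
    bool uploaded = false;

    void draw() {
        if (!uploaded) {
            // Real code would call glGenTextures/glTexImage2D here.
            uploaded = true;
        }
        // ... issue the actual draw call ...
    }
};
```

The trade-off is a one-time hitch on first draw, which is why stuffing a swf with assets works best for UI screens rather than mid-gameplay.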

Auto-rotation support seems feasible

Prior to this development I hadn’t really considered doing anything too extravagant with auto-rotate support. If we did support it, the UI would simply switch to the correct orientation and be done with it. But now it seems pretty trivial to have the UI do a fun but subtle animation as it stretches, squashes, or moves over to accommodate the different screen real estate. It’ll be a ‘nice touch’ if we get that working.

It isn’t an all-or-nothing setup

Giving the gameswf iOS sample project a quick look, it seemed at first that running Flash was an all-or-nothing deal: it was initialized in the main method and pretty much took over how things were rendered to the screen. This part was time-consuming, but after experimenting with it for a few days I got gameswf to play nice with my already-established framework for game screens and UI. Each screen in our game is just a subclass of a ‘Screen’ class that houses UI widgets and any logic particular to that screen. I was able to refactor a few things, and now I have a subclass called ‘FlashUIScreen’ that takes a Flash file name and a ‘controller’ class as constructor parameters. It functions and renders identically to anything I did previously. The point being, if the need arises for something to render in-engine (perhaps using game assets already in memory for some sort of HUD), I can still fall back to my previous method of hand-coding UI. Hopefully that will be the rare exception rather than the norm.
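In rough outline, the setup described above looks something like this (class and member names are my guesses at the shape of it, not the actual Outwitters code):

```cpp
#include <string>

// Every game screen derives from Screen; FlashUIScreen is just one
// more subclass that plays a swf instead of hand-coded widgets.
class Screen {
public:
    virtual ~Screen() {}
    virtual void update(float dt) {}
    virtual void render() {}
};

class FlashUIScreen : public Screen {
public:
    FlashUIScreen(const std::string& swfFile, void* controller)
        : m_swfFile(swfFile), m_controller(controller) {}

    void render() override {
        // Would drive the gameswf movie's display() here.
    }

    const std::string& swfFile() const { return m_swfFile; }

private:
    std::string m_swfFile;
    void* m_controller; // screen-specific logic lives behind this
};
```

The screen manager never needs to know whether a given screen is Flash-backed or hand-coded; both render through the same virtual interface.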

Some Things To Know

It’s been a quick two weeks since I decided to investigate this, and there are still a few things left to answer, but I’m currently moving forward with the assumption that we’ll be using gameswf for this project, and that is a fun thought to entertain :). For anyone else curious about getting gameswf up and running, here are some things I ran into while working with it:

If you’re loading separate swf movie files, you’ll need a separate gameswf::player instance for each. I first went with the approach of having a single player and calling load_file() on anything I needed to render, because I didn’t completely understand the relationship between a player and a gameswf::movie instance. But the player houses the ActionScript VM for a “root movie”, so loading two movies from a single player gave me some wonky ActionScript behavior.

If you delete all the gameswf::player instances at any point, the library goes through a ‘complete cleanup’ phase. This gave me some issues when I transitioned from the main menu to an all-OpenGL view (no Flash UI), so the swf players were deleted; when I went back to a Flash-based menu, the app would crash with an EXC_BAD_ACCESS error. I got around this by instantiating a single gameswf::player instance that never loads any files and is held in my singleton service class. It’s only deleted when the app shuts down.

Getting the GL state set up correctly was probably the most time-consuming part. The library worked great in isolation, but when combined with the framework I was working with, there were all sorts of states I had to track down in order to render a Flash file and still render OpenGL correctly right after. Below is a snippet of the code I wrapped around a Flash movie’s display() method.

Some things to note: it likes a blank projection matrix (I had to reset it to an identity matrix), and you have to disable a few client states to render Flash properly. Once you leave the display() method, you need to reset your GL color, blend function, and possibly any other states you may have used with the assumption that they are still enabled:

const int w = GetDisplayWidth();
const int h = GetDisplayHeight();

// gameswf wants an identity projection and some client states disabled
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glDisableClientState(GL_COLOR_ARRAY);

_movie->set_display_viewport(0, 0, w, h);
_movie->display();

// afterwards, reset the color/blend state our own rendering assumes
SetGLColor(1, 1, 1, 1);


And with that, I’m off to do some more work on Outwitters. Adam and I just played a few games over lunch and I actually WON. Mark this day, for this is the first day I didn’t get completely rolled in my own game.

Wow, time flies when you’re busy. As seen on the OML site, we’ve announced we’re working on our next game. With that out of the bag, I’ll probably be blogging about the progress of it, obstacles I run into, and things that are helping us along with the development that might prove useful to other iOS devs. With that, the first major issue we’ve come across is trying to develop 2D assets for a game that is planned to run in 3 different resolutions.

The first step to working around this problem is having your art assets in vector form. Adam uses Illustrator and Flash for pretty much all of our assets. This is great because we don’t lose any fidelity when resizing them. The alternative for those doing raster-only work is to develop assets at the highest resolution and shrink them.

When working with a ton of assets, a major pain point came up when we wanted to export a set of images for the different devices (iphone, iphone retina, ipad). The process was very manual:

  1. Create an action to resize an asset to the needed size. Save it.
  2. Re-use the action on each individual asset to get the three different sizes.
  3. Rinse and repeat any time an asset changes.

The above process worked for us more or less for Tilt to Live because the assets were less numerous and didn’t change much. But it doesn’t scale. Any programmer will see that this should be automated. But then the problem is audience: the user of this script will most likely be the artist… who happens to be on Windows. So what to do?

Are You Using Smart Objects?

During GDC we met up with the awesome Retro Dreamer guys, Gavin Bowman and Craig Sharpe, and had a discussion about asset generation while riding back to the airport. The idea of Adobe CS’s new-ish “smart object” feature came up in conversation, and it was something we looked into recently. Another thing I discovered was the whole world of scripting in Photoshop: not action recordings, but full-on JavaScript/VBScript/AppleScript support. So I took those two ideas and ran with them, and we came up with a pretty basic asset pipeline for getting our assets ready to be used in the game.

Using smart objects has some pretty cool benefits, which you can read about here. The biggest benefit to us is that the assets aren’t rasterized until they’re saved as PNGs. So whether we blow our assets up or shrink them, it makes no difference in quality. This tiny aspect helps a bit later in our script. The other neat benefit of smart objects is being able to link them, which lets you update all your instances with a click. We don’t use this feature ourselves, but it’s definitely worth knowing about. John Nack has a brief post about this feature if you want to know more (as well as a mention of an obscure PS feature called ‘PS variables’, which also seems rather useful). So how do we use smart objects? Adam does his magic in Illustrator, then pastes an illustration into a PSD file in Photoshop as a smart object. This becomes a ‘master’ file of sorts.

Yummy Scripting in Photoshop

Now onto the scripting part. We could have used actions, but they didn’t give us much flexibility over where assets go and what they’re named, all in one pass, instead of needing an action for each resolution output. I decided to use JavaScript, mainly because it works on both Mac and Windows, and I was a bit more comfortable with it. The script is rather straightforward: it prompts for a directory selection, creates a ‘pngs’ subfolder in it, and then scans it for PSD files. For each file it finds, it takes the original PSD with the smart object and saves it out as ‘pngs/whatever_iPad.png’, shrinks it to 60% and saves ‘pngs/whatever.png’, and finally doubles that 60% size and saves ‘pngs/whatever@2x.png’. Nothing too mind-blowing here. You can find the script in full at the bottom of this post if you want to try it yourself.

So why not use something like ImageMagick, since it’s a simple resize? A few reasons:

  1. It excels as a command-line tool, for the most part. The command line sucks on Windows, and if your artist isn’t comfortable using one, it’ll just be another pain point.
  2. Scripts that are native to Photoshop show up under File->Scripts. Click one, and it runs. No prior setup, no ‘cd’-ing into a directory, no typing in parameters, no trying to code up a custom UI. Very quick to knock together scripts.
  3. The biggest reason is the power these scripts grant you. You have access to a lot of internal data about the current file in a very manageable way. We’ve brainstormed some ways to automate animation exports from any hand-drawn animations Adam has done, by organizing the layers into groups and having the script run through the groups to get the necessary frames out (again, in all three resolutions).

So now, with the click of a button and a folder selection, I can generate the different asset sizes I need for the game. Still a bit tedious if you have to do it for each folder, yes? It’s trivial to add recursive folder searching, and that’s something I actually did. Now I can simply pick our root project folder and regenerate all of the game’s assets in one go, without human error.

Just to show how simple the resizing is, here is the ‘meat’ of the script. ‘saveImage’ is just a five-line function that outputs the image to a PNG:

var fileList = GetFiles(docFolder, myPSDFilter);
for (var i = 0; i < fileList.length; ++i) {
    var file = fileList[i];
    var docRef = open(file);
    var docName = docRef.name;
    var oldWidth = docRef.width;
    var oldHeight = docRef.height;

    var fileNameNoExt = docName.substr(0, docName.lastIndexOf('.')) || docName;
    // save out ipad 1x size
    saveImage(pngFolder, fileNameNoExt + "_iPad.png");

    // save out 60% size for iphone
    var iphoneW = Math.ceil(oldWidth * .6);
    var iphoneH = Math.ceil(oldHeight * .6);
    docRef.resizeImage(iphoneW, iphoneH);
    saveImage(pngFolder, fileNameNoExt + ".png");

    // save out double that size for retina
    var retinaW = iphoneW * 2;
    var retinaH = iphoneH * 2;
    docRef.resizeImage(retinaW, retinaH);
    saveImage(pngFolder, fileNameNoExt + "@2x.png");

    docRef.close(SaveOptions.DONOTSAVECHANGES);
}

I made a point earlier about how using smart objects gives the script more flexibility, and the above shows why. It was more straightforward to go from iPad to iPhone (an arbitrary percentage) and then double whatever the iPhone size is for retina, instead of calculating each from scratch. You couldn’t do that with rasterized images without losing quality. Another nuance is the use of Math.ceil(). When shrinking by a fraction you can end up with non-whole-number dimensions, and Photoshop truncates the number. This can lead to weird retina resolutions that don’t have even-numbered dimensions (one pixel off).
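The sizing rule boils down to a couple of lines of arithmetic. As a standalone illustration (not the Photoshop script itself): iPad is the master size, iPhone is 60% of it rounded up to a whole pixel, and retina is exactly double the iPhone size, which guarantees even retina dimensions.

```cpp
#include <cmath>

// iPhone size is 60% of the iPad master, rounded UP to a whole pixel.
int iphoneSize(int ipadPx) { return (int)std::ceil(ipadPx * 0.6); }

// Retina is exactly double the rounded iPhone size, so it's always even.
int retinaSize(int ipadPx) { return iphoneSize(ipadPx) * 2; }
```

For a 101-pixel iPad dimension this gives 61 for iPhone and 122 for retina; truncation instead of ceil would give 60 and 120, one pixel off from the intended 60.6 and 121.2.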

No Vector Graphics? No Problem!

Even people using raster graphics aren’t completely left out in the woods if they want to shrink and then grow their assets. For our animation script I had to deal with raster-only graphics, but I wanted to keep the same “shrink to iPhone, then double” procedure. The problem is that going from iPhone to retina would produce blurry graphics. Not so, if you look into using history states in your scripts! Here’s the same basic procedure as above, but for raster-only graphics:

var docRef = open(file);
var docName = docRef.name;
var oldWidth = docRef.width;
var oldHeight = docRef.height;

// remember the full-res state so we can 'undo' back to it later
var origState = docRef.activeHistoryState;

// save out ipad 1x size
var ipadW = Math.ceil(oldWidth * .5);
var ipadH = Math.ceil(oldHeight * .5);
docRef.resizeImage(ipadW, ipadH);
saveImage(ipadFolder, docName);

// save out 60% size for iphone
var iphoneW = Math.ceil(ipadW * .6);
var iphoneH = Math.ceil(ipadH * .6);
docRef.resizeImage(iphoneW, iphoneH);
saveImage(iphoneFolder, docName);

// jump back to the full-res original before scaling up again
docRef.activeHistoryState = origState;

// save out double the iphone size for retina
// (iphoneFolder/retinaFolder are sibling output folders set up earlier)
var retinaW = iphoneW * 2;
var retinaH = iphoneH * 2;
docRef.resizeImage(retinaW, retinaH);
saveImage(retinaFolder, docName);


Notice that in the last resizing batch I set the docRef’s activeHistoryState back to the original state. This is effectively an automated ‘undo’: I reset the graphic to the super high-res original (twice the iPad size) and then resize down from there again. No blurry graphics! Hurray!

I’m planning to look into scripting possibilities in Flash as well, as automating our exporting there would be a huge help too. For the time being, those are exported as a PNG sequence, and then another Photoshop script organizes the three different sizes into three different sub-folders. Nifty, eh?

The last step in this pipeline is creating texture atlases, which I’ll be automating as well in a couple of weeks, possibly using TexturePacker’s command-line interface.

Here is a download link to the script in full in case you want to try it out, change it, tailor it to your needs, go wild:


I had an issue where several international users were reporting that the game was crashing. As it turns out, NSNumberFormatter was returning an unexpected character as the grouping separator. This would be all good and dandy, but the problem lies in the fact that I’m using bitmap fonts to render, and the glyph look-up failed, causing a hard crash.

In Tilt to Live’s case, this concerned how number-grouping symbols are displayed in different locales. I use the comma to separate most of the number displays. Out of ignorance, I just assumed NSNumberFormatter would work with what I thought was a universal concept. Not so in some Arabic locales, apparently. The fix? If you aren’t supporting localization, pin the formatter to the particular locale you are coding for; otherwise the formatter will use the iPhone’s language settings to pick a locale for you:

numberFormatter = [[NSNumberFormatter alloc] init];
NSLocale *locale = [[NSLocale alloc] initWithLocaleIdentifier:@"en_US"];
[numberFormatter setLocale:locale];
[numberFormatter setNumberStyle:NSNumberFormatterDecimalStyle];
[numberFormatter setGroupingSeparator:@","];
[numberFormatter setUsesGroupingSeparator:YES];
[locale release];
locale = nil;

I spent this past weekend getting retina-display graphics implemented in Tilt to Live for iPhone 4. The cynic in me hates the marketing term ‘retina graphics’ because, on the production side, it’s simply a bigger image. But I guess you do need a name for it, and since ‘HD’ has already been bastardized on the iPad, I guess this will do. I decided to write this post to bring some awareness to the process of updating your OpenGL game for retina displays, if you choose to do so. I thought it’d be riddled with coding work-arounds, since our game is a 2D game with sprites sized specifically for the original resolution, but it turned out to be pretty quick and easy.

Anyway, Adam had re-exported all of Tilt to Live’s graphics at twice the size, so the hard part was pretty much done. It was made much easier by the fact that he was using Illustrator and vector graphics, so there was zero need to redraw or redo any assets. On the coding side, I feared I’d spend hours re-tweaking game code, because a lot of constraints and boundaries were hardwired to image widths and heights (bad, I know). But in the end I luckily avoided all of that. Most of my sprite rendering code is encapsulated in a SpriteImage class with width/height properties. I made a simple modification to those properties so that the returned width or height is multiplied by a ‘renderScale’ variable that represents the scale of the display. In effect, the width/height still return whatever the game was designed for at the 480×320 resolution, but under the hood a larger texture is used.
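A minimal sketch of that property trick (names illustrative, not the actual Tilt to Live code): the sprite stores the real texture size but reports dimensions in the original 480×320 design space, with renderScale being the inverse of the display scale (0.5 on a retina device).

```cpp
// Reports design-space dimensions while holding a double-size texture.
class SpriteImage {
public:
    SpriteImage(float texW, float texH, float renderScale)
        : m_texW(texW), m_texH(texH), m_renderScale(renderScale) {}

    // renderScale = 1 / displayScale, so a 2x texture reads back
    // at its original design-space size.
    float width() const  { return m_texW * m_renderScale; }
    float height() const { return m_texH * m_renderScale; }

private:
    float m_texW, m_texH, m_renderScale;
};
```

Because all game logic already consumed these properties, no collision bounds or layout code had to change when the textures doubled in size.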

Non-OpenGL apps get a few things for free on the retina display, such as font rendering and UI scaling, but how do you set up your graphics window to support retina graphics? A quick look at the Apple docs shows how to set up your OpenGL context to scale by a factor of 2. When it came down to implementing it, just doing:

self.contentScaleFactor = DisplayScale();
glGenFramebuffersOES(1, &viewFramebuffer);
glGenRenderbuffersOES(1, &viewRenderbuffer);

//..continue setting up framebuffer

…got me rendering four times as many pixels. The game still looked exactly the same, since I hadn’t replaced the graphics yet. What is DisplayScale()? It’s simply a wrapper I wrote:

CGFloat DisplayScale()
{
    return [UIScreen mainScreen].scale;
}

Now, if you’re using UIImage to load your textures, you’re in luck when updating graphics: a file with the ‘@2x’ suffix will automatically be loaded on an iPhone 4 instead of the normal-resolution one. For example, myImage.png and myImage@2x.png live side by side in your bundle, but the iPhone 4 will load the @2x one. I had been using libpng directly to load textures a while ago, but opted to revert back to UIImage for performance reasons.

This is all well and good if you’re using separate PNGs for your images. But to improve rendering speed and memory consumption, a lot of the time you’ll use sprite sheets. Having a @2x sprite sheet is nice if the sprite-sheet generator you’re using puts the sprites in the same relative positions. In my case, there were a lot of unused sprites from previous versions, so I took the time to clean things up. This was a bit time-consuming, as I had to verify which images were still being used and which weren’t. The ‘@2x’ trick only works with UIImage, so if you have any supporting files (font descriptions in particular) that are needed in conjunction with the graphics file, you’ll probably have to write your own ‘@2x’ searching and loading for those assets, which is no biggie either:

NSString *path = nil;
NSString *resource2x = [fontName stringByAppendingString:@"@2x"];
path = [[NSBundle mainBundle] pathForResource:resource2x ofType:@"fnt"];

if (path == nil)
    path = [[NSBundle mainBundle] pathForResource:fontName ofType:@"fnt"];

As for other caveats? Fonts were a big one, depending on how you export and render them. I’ve been using BMFont to do all my font exporting. It’s a great tool, but a little ambiguous when it comes to font sizes: there’s a ‘size’ field, but it can be interpreted as character height or line height, and doubling the character height doesn’t obviously guarantee exactly twice the rendered width/height. I ended up using line height, since that’s the option I used initially when making the fonts, and it worked pretty well. I tweaked my bitmap font rendering a bit, and it supported retina graphics shortly after, without changes to game or UI code. There was a slight issue with some lines of text being rendered 3-4 pixels lower, but I didn’t spend much time investigating, as it isn’t noticeable in normal use.

Another caveat has to do with rendering GL points: be sure to scale the value you pass to glPointSize() if you use that functionality at all. Tilt to Live uses it extensively to render enemies faster than with quads.

So, I’m using OpenAL for the audio in Tilt to Live. Over the course of development, audio became the bottleneck of the game, both in size and in performance. The following could help if you’re experiencing audio performance issues and are somewhat new to OpenAL. Let me preface this with: don’t blindly optimize. Measure, measure, MEASURE! Know what your bottleneck is before trying to tune code!

  1. Don’t create more than 32 source objects in OpenAL. The exact number may vary from device to device and generation to generation. Creating more sources than the device supports makes OpenAL fail silently, and at that point you’re wasting cycles.
  2. Load your audio buffers and create your source objects ahead of time, and re-use them. This is a big one. Don’t generate and delete source objects in the middle of your game update. Instead, it’s much faster to grab a ‘free’ source that is no longer playing a sound and attach it to another buffer. I was pre-loading my audio buffers, but creating/deleting sources on the fly in Tilt to Live. Then I started ‘pooling’, and I got a decent gain out of re-using source objects.
  3. Keep OpenAL calls to a minimum. Especially in tight update loops, don’t repeatedly call OpenAL functions whose arguments don’t change frame-to-frame. I found that a good portion of my time was spent on something as simple as calling alSourcei() on each source ID every frame, which caused a significant slowdown.
  4. Don’t query OpenAL state if you don’t have to. In my case, I wrapped OpenAL sources in objects with properties to get volume, pitch, etc. Initially those properties simply called the OpenAL equivalent and returned it instantly. This was hurting my frame rate, due to some innocent-looking “backgroundMusic.volume += someVal” happening each frame, along with other audio sources doing the same thing. Cache any state you can in instance variables, and only hit OpenAL as a last resort.
  5. As for size, consider compressing your sound FX and music to a reasonable size. If you’re like me, you hate giving up quality, especially if you listen to the full-quality file and then the compressed one; it can seem like night and day. But in reality, your users won’t have the full-quality audio to compare against, and they won’t notice the difference.
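Tips 2 and 4 can be sketched together as a small pool with cached state. The al* calls are stubbed out here (an illustrative sketch, not my actual wrapper); real code would call alSourcef()/alSourcePlay() where the comments indicate:

```cpp
#include <array>

// One pooled source: caches volume so the underlying API is only hit
// when the value actually changes.
struct Source {
    unsigned id = 0;
    bool playing = false;
    float volume = 1.0f;
    int apiCalls = 0; // counts how often we'd actually hit OpenAL

    void setVolume(float v) {
        if (v == volume) return; // skip redundant alSourcef() calls
        volume = v;
        ++apiCalls;              // alSourcef(id, AL_GAIN, v) would go here
    }
};

// Fixed pool created up front; sources are reused, never created
// or deleted mid-update.
struct SourcePool {
    std::array<Source, 32> sources; // stay under the ~32-source limit

    Source* acquire() {
        for (auto& s : sources)
            if (!s.playing) { s.playing = true; return &s; }
        return nullptr; // all busy; caller can steal one or drop the sound
    }
    void release(Source* s) { s->playing = false; }
};
```

The “volume += someVal each frame” pattern still works against this wrapper; it just stops translating into an API call when the value hasn’t moved.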

As a sidenote, you can look at my first dev tip for batch compressing audio files.

We’re extremely close to submitting our first title, Tilt to Live. We plan on submitting it this upcoming Friday. After a long and grueling (but awesome fun!) development cycle, we’ve learned a ton. I figured an interesting way to distill our new-found knowledge would be in very short “dev tips” for developing an iPhone game. Today I start with audio:

Compressing Audio

By the end of development our game weighed in at 16 MB. We wanted to get below the 10 MB limit so users could download the game over the air. Quickest solution? Compress the audio. Now, we were already using some compression, but we have over 60 audio files! The crux of the problem is that we want to quickly compress all our audio to a certain format, build, look at the final build size, and test for quality. For this I wrote a quick script:

rm -rf caff
mkdir caff
for i in *.wav
do
    afconvert -f caff -d ima4@22000 -c 1 "$i" "caff/${i%%.*}.caf"
done

Now for a little explanation on my setup:

  • I have a single folder (let’s call it ‘finalAudio’) with all the original, uncompressed WAV files used in our game.
  • The script sits inside this ‘finalAudio’ folder as well, and I ran ‘chmod +x build_audio’ (where ‘build_audio’ is the name of the script) so I can just click on it in Finder when I want to re-export the audio.

What the script does:

  1. It removes any folder/file named ‘caff’ in the current directory.
  2. It makes a new directory inside ‘finalAudio’ called ‘caff’. This is where the compressed audio will go.
  3. It loops through all the WAV files inside ‘finalAudio’, runs the ‘afconvert’ utility on each one, and puts the resulting compressed file in ‘caff’.

I’m not going to go into the details of all the options you have with afconvert and what audio formats the iPhone supports. There’s plenty of documentation online covering that already.

Just an interesting side note: we took our 13-16 MB game and shrunk it to 6.5 MB using ima4@22000 with one channel.

I finally got rid of the stupid…

"ld: warning in <foo>.so, file is not of required architecture."

…warning when using iSimulate. If you’re not using iSimulate or a similar technology, you’re losing valuable time, even if you don’t use the accelerometer or GPS functions. Then again, I might not be the best spokesperson on time, since I spent the weekend re-inventing the wheel.


I have this need to make sure my projects compile without warnings. I let this one sit for a while; I’m no Xcode expert, so I knew I wanted to link the library in conditionally, but I couldn’t find the UI for it. The process by which they suggest you integrate iSimulate into your Xcode project truly is easy, but I felt icky when it produced warnings while compiling for the actual device. There are better ways of doing it, namely conditional build settings. Of course, this requires a few more steps and a bit more knowledge of the Xcode environment, so I suspect that wouldn’t help their marketing. Regardless, having the iSimulate linker flags and library search paths only be added when you’re compiling for the simulator is relatively easy to set up.
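For reference, here is a sketch of what that conditional setup can look like in an .xcconfig file. The flag and path values are hypothetical placeholders; use whatever iSimulate’s own instructions give you. The [sdk=iphonesimulator*] condition limits the settings to simulator builds, so device builds never see the library:

```
// Hypothetical .xcconfig entries - only applied when building
// against an iPhone Simulator SDK.
OTHER_LDFLAGS[sdk=iphonesimulator*] = -ObjC -liSimulate
LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*] = "$(SRCROOT)/Libraries/iSimulate"
```

The same per-SDK conditions can also be added directly in the target’s build settings editor by adding a conditional value for the simulator SDK.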

It’s getting around that time when I’m spending more time working on the game as I realize how many little things need to be fixed, added, tweaked. I’ve recently given up most of my weekends now in an effort to get more hours in. I’m currently working 6 out of the 7 days a week on the game in some way. It’s mostly game programming, sometimes graphics, web development, emailing, phone calls, or getting business forms and other legal stuff squared away.

This week was a rather productive week on several fronts:

  • Got a lot of new art assets from Adam and implemented them
  • Feedback from testers started trickling in and the reaction has been positive along with tons of great suggestions/ideas
  • Fixed a lot of old bugs that were sitting in the queue
  • Improved some memory usage of textures

The week wasn’t without its headaches though. This completion date graph illustrates just how badly I estimated this week’s tasks:

Bad Estimates Ahoy!

The number of tasks was unusually large this past week, but that wasn’t as big an issue as some tasks taking 10 times longer than I wanted.

The biggest one was getting multi-threaded texture loading working on the iPhone. The hours I logged getting it working are my own personal testament to how a seemingly simple task can blow up to ridiculous proportions when you mention the word “multi-threading”. Had I known it’d take this long I would’ve opted for a more basic solution and left the rest to a side project. The bright side is that I can now load audio and graphics in parallel while the main thread continues any animations and such. It took about 20 hours to get it working and debugged, which is somewhat embarrassing haha!

Hair-pulling Threading Issues

It started with me wanting to offload the texture loading to another thread. Simple enough. I’ve done this before on other projects. It took a couple of hours to learn how to set up OpenGL correctly to allow this. Then I discovered OpenGL wasn’t being the meanie; a function call from Core Graphics wasn’t allowing me to call it from a background thread (sometimes…ugh). I researched it for a couple of days while finishing up other things, but wasn’t able to find anything conclusive. I then decided to take the leap and implement my own PNG loader using libpng as a base. I mean really…how hard could it be? Hah…hah…ha…

Saying it was frustrating would be the understatement of the year, and last year. I usually try to avoid copy/pasting code, as I like to read and understand the API I’m using so that when it DOES break I’m not utterly clueless. Libpng’s interface is rather low-level and not exactly the cleanest. After reading documentation on it for what seemed like forever, I decided to start with an example program and modify it to fit my needs.

Let’s not forget that I’m an Xcode/Unix newbie, so getting libpng/zlib working as a static library in Xcode was a nightmare for me. I ended up building from source against the correct configs and SDKs for the iPhone to avoid the “invalid architecture” warnings. Next came actually implementing the loading. It looked pretty straightforward in all the examples I saw, except I wanted one feature I didn’t see in any of the sample code: the ability to load non-power-of-2 textures on the iPhone. All of the game’s assets are of various shapes and sizes, and several are non-power-of-2. I haven’t taken the time to do proper texture atlasing, and I wanted the ability to quickly drop any assets Adam sent me into the game’s folder and load them without fiddling with size alignments and the other mess.

This is where things started to get hairy, because issues cropped up that weren’t necessarily from my code, so I was led astray a lot. The caveat is that the iPhone has a rather special way of “compressing” PNGs. It took me a while to understand the implications of this, because I could load the PNGs fine but holy shit were they rendering wrong in OpenGL! It finally dawned on me to change the blend function OpenGL was using. Much to my surprise, I couldn’t find one that would give me the equivalent alpha blending I got with premultiplied alphas. And I wasn’t about to go back and re-design all the assets. By this time I was pretty comfortable working with libpng, so I decided to just write a custom transform function to premultiply the alpha when loading, mimicking the previous technique of loading:

Finally, you can write your own transformation function if none of the existing ones meets your needs. This is done by setting a callback with

png_set_read_user_transform_fn(png_ptr, read_transform_fn);

You must supply the function

void read_transform_fn(png_structp png_ptr, png_row_infop row_info, png_bytep data);

See pngtest.c for a working example. Your function will be called after all of the other transformations have been processed.
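The transform itself boils down to scaling each color channel by alpha. Below is a self-contained sketch of that per-row math; the function name is mine for illustration, it assumes 8-bit RGBA rows, and in the real loader this body would live inside the user read-transform callback described above:

```c
#include <stddef.h>
#include <stdint.h>

/* Premultiplies one 8-bit RGBA scanline in place: each color channel is
   scaled by alpha/255, with +127 giving round-to-nearest. In a libpng
   loader this runs once per row inside the read user-transform. */
static void premultiply_rgba_row(uint8_t *row, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count * 4; i += 4) {
        unsigned a = row[i + 3];
        row[i + 0] = (uint8_t)((row[i + 0] * a + 127) / 255);
        row[i + 1] = (uint8_t)((row[i + 1] * a + 127) / 255);
        row[i + 2] = (uint8_t)((row[i + 2] * a + 127) / 255);
    }
}
```

With assets premultiplied this way, the matching OpenGL blend function is glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), which is what makes the output line up with Xcode's build-time PNG treatment.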

Alrighty, so now I can load things ~20 hours later. Talk about a huge step backwards. Doing the pre-multiplication at runtime is a trade-off of convenience over speed. When it comes time to release the game I might consider doing this prior to compiling so the assets are “cooked”. Doing it at runtime adds about 1-1.5 seconds in my unscientific tests.

What’s amusing is that once this was working, it took 15 minutes to get the multi-threading going. I then decided to be adventurous and load the audio and graphics on separate threads from the main thread, for a total of 3. Speed tests show it’s on average no faster or slower than loading both asset types in order on a single thread *shrug*. Oh well, if the iPhone ever gets another core then I’ll be set ;). Ultimately, I now have the freedom to toss asset loading onto a background thread while the game continues playing with minimal hit to the frame rate.

This kind of thing is probably a bit advanced and overkill for such a simple and small game as Tilt To Live. If I have time (unlikely) over the next several weeks I’ll try to put together a complete code listing for those looking to have more control over their asset loading with libpng on the iPhone.

When it comes to GUI coding, I don’t typically find it the most glamorous thing to do, particularly when dealing with games. You’re left writing a lot of boilerplate GUI code that is tightly integrated with your rendering pipeline. And even when I do find GUI libraries for games, they tend to emulate the OS in look and feel. When you want something lightweight and very interactive/animated, you typically have to write and integrate it yourself. If someone knows of a library for games that allows for versatile GUI development please leave a link in the comments :). When left unchecked, GUI code can end up being:

  1. messy and hard to maintain
  2. difficult to change
  3. rigid (basically just “functional” with very minimal style)

The GUI is one of the few remaining “big” items left on Tilt To Live’s to-do list. When imagining the GUI I had all sorts of crazy ideas for animations, transitions, and interactions. Despite it being a relatively minor part of the game’s experience, I still felt it important to make it as polished as possible. Why? The GUI is the first thing a new user sees when launching a game (unless you’re Jonathan Blow’s Braid). It’s important to grab the user’s attention as soon as you can to make sure they give your game a fair shake.

Attention! Over Here! Look at Me!

In the mobile market you are fighting for pretty small attention spans. One of the design goals for Tilt To Live is to make it quick to play. Currently it takes one tap to start playing the game. I’m still working on reducing the initial load time. I even contemplated removing the requirement of ‘one tap’ to play and just dropping the player into the game. When someone selects ‘Tilt To Live’ on the iPhone, there’s a large chance they simply want to play a quick game of Tilt To Live. In fact, other games like Spider: The Secret of Bryce Manor do just that, and it’s actually quite welcome. My only reservation is that Tilt To Live may expand to multiple game types in the future, so the ‘play from launch’ approach isn’t ideal if the desired game mode isn’t known with great certainty.

Either way, making a good first impression on the user is very important. In addition to that, reducing the friction required to play your mobile game could raise it beyond “I have nothing better to do” status and let it overtake other forms of gaming.


So back to GUI animations. Throughout the game I’ve used a hacked-together curve-editing tool I wrote in BlitzMax to produce animations for items spawning, enemies popping up, etc. It works “well enough”, but the barrier to creating a new ‘curve’ was still high, it added to the total file size of the game, and it was another asset to manage in the app. So I revisited easing functions. I use some throughout the GUI system, but they aren’t very interesting. I started looking into alternatives and ended up with a rather nice solution.

Below is a work-in-progress video of the new easing functions in action on Tilt To Live’s menu UI:

A couple of years ago, Shawn Hargreaves wrote a really nice article explaining curve functions and how to use them for GUI transitions in the XNA framework. It was a great primer on polynomial functions. Fast forward to today: I still have the basic “exponential” and “sigmoid” functions, but they were pretty boring in the scheme of things. Tilt To Live’s aesthetic is kind of cartoony, so those functions weren’t very lively.

Having worked with Flash many years ago, I knew its easing animation capabilities were very sophisticated, and I wanted some subset of that flexibility in my code as well, so off to Google I went. The best resource I found was Timothée Groleau’s Easing Function Generator. Not only does it have a nice set of preset easing functions, it generates the code to produce those exact curves! Wow, what a huge time saver! For anyone who wishes to use these functions as well, the variables probably require a quick explanation:

  • t – the floating point time value from 0 to 1 (inclusive). If you want the position on the curve halfway through the transition, you pass 0.5 as t.
  • b – the starting position. In a one-dimensional setting this is the value you would get if t = 0.
  • c – the ‘change’ in position. So if you want to transition from 34 to 56 then c = (56 - 34) = 22. (more on this later)
  • d – duration of the transition. If you want the transition to last, for example, 2 seconds then d = 2.

With that ActionScript function generator it’s pretty trivial to implement them in any other language (in my case, C). My GUI transitions aren’t based on changes in position, though, but on start and end positions, so my typical transition function signature looks like Transition(startPosition, endPosition, delta). Below is an example implementation in C:

float EaseOutElastic(float startVal, float endVal, float blend)
{
    float diff = endVal - startVal;
    float tsquared = blend * blend;
    float tcubed = tsquared * blend;
    return startVal + diff * (33*tcubed*tsquared - 106*tsquared*tsquared + 126*tcubed - 67*tsquared + 15*blend);
}

The result:

  • Much more interesting GUI transitions
  • No extra assets as curves are procedural
  • Can re-use curves in different types of animations (position, scaling, rotation, etc)
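A quick way to convince yourself the generated coefficients are right is to check the endpoints: the polynomial evaluates to exactly 0 at blend = 0 and exactly 1 at blend = 1 (since 33 - 106 + 126 - 67 + 15 = 1), and it overshoots past the target around blend ≈ 0.85, which is what gives the elastic pop. Here's a standalone sketch (restating the function so it compiles on its own):

```c
/* Ease-out elastic from Timothée Groleau's generator: a quintic
   polynomial in blend (0..1), remapped onto [startVal, endVal]. */
float EaseOutElastic(float startVal, float endVal, float blend)
{
    float diff = endVal - startVal;
    float tsquared = blend * blend;
    float tcubed = tsquared * blend;
    return startVal + diff * (33.0f * tcubed * tsquared
                              - 106.0f * tsquared * tsquared
                              + 126.0f * tcubed
                              - 67.0f * tsquared
                              + 15.0f * blend);
}
```

Because the curve is just a function of blend, the same call drives position, scale, or rotation; only the start/end values change.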

When I’m doing a polish phase on the GUI of a game, or on animations, sounds, etc., I’m often reminded of the tidbit from Kyle Gabler’s Gamasutra article about prototyping: Make it Juicy. Using easing functions to add a bit of bounciness, juiciness, and overall personality to a game can really help make the game feel more alive…feel more…juicy.


I’ve had this issue in past 2D games all the time. Interpolating 2D rotations for basic animations, for anyone who doesn’t understand quaternions, becomes a rather laborious task. I’ve written messy if-then statements to try to catch all the cases where rotation hits the ‘360 to 0’ boundary, and it’s always ugly; it seemed like there had to be a more mathematically correct way to interpolate a circular value.

Well, anyone who sticks with it long enough undoubtedly runs into quaternions. What are quaternions? I’m not about to explain them in detail, but I like to think of them as a way to express 3D rotations without that annoying gimbal lock problem.

So how do quaternions help our 2D rotation needs? Well, if you’re in a situation where you need to interpolate along the shortest path between 2 angles, quaternions can help. You have a few choices to solve this problem:

  1. Use if-then clauses to try to catch all the cases when you hit a rotational boundary if you are rotating from, for example, 300 degrees to 2 degrees. You would want to rotate all the way through 360 instead of going backwards.
  2. Use quaternions, which are more applicable to 3D rotations, to interpolate between two angles using spherical linear interpolation (slerp for short).
  3. Use spinors, which are a very similar concept to quaternions, except they are more catered to 2D rotations.

You’ll eventually see that using quaternions is somewhat overdoing it, as 2 of the axes are never used in any 2D calculation. If you eliminate those 2 axes from the quaternion you end up with a spinor! Thanks to Frank, of Chaotic Box, who helped lead me down that direction for my own iPhone game.

To help test out the concept and make sure it’s working I ended up writing a Blitzmax quickie app using what little I know about quaternions. I’m certainly a bit iffy when it comes to dealing with complex numbers, as I don’t fully grok them as I do normal vector math, but I make do. So if anyone reads this post and sees some sort of logical error in my code, by all means leave a comment and I’ll improve the listing.

So to get started I needed a way to represent a 2D rotation as a spinor. So I created a spinor type with some basic math functions (all of them analogous to how quaternions work):

Type Spinor
Field real:Float
Field complex:Float

Function CreateWithAngle:Spinor(angle:Float)
Local s:Spinor = New Spinor
s.real = Cos(angle)
s.complex = Sin(angle)
Return s
End Function

Function Create:Spinor(realPart:Float, complexPart:Float)
Local s:Spinor = New Spinor
s.complex = complexPart
s.real = realPart
Return s
End Function

Method GetScale:Spinor(t:Float)
Return Spinor.Create(real * t, complex * t)
End Method

Method GetInvert:Spinor()
'inverse = conjugate divided by squared length
Local s:Spinor = Spinor.Create(real, -complex)
Return s.GetScale(1.0 / s.GetLengthSquared())
End Method

Method GetAdd:Spinor(other:Spinor)
Return Spinor.Create(real + other.real, complex + other.complex)
End Method

Method GetLength:Float()
Return Sqr(real * real + complex * complex)
End Method

Method GetLengthSquared:Float()
Return (real * real + complex * complex)
End Method

Method GetMultiply:Spinor(other:Spinor)
Return Spinor.Create(real * other.real - complex * other.complex, real * other.complex + complex * other.real)
End Method

Method GetNormalized:Spinor()
Local length:Float = GetLength()
Return Spinor.Create(real / length, complex / length)
End Method

Method GetAngle:Float()
Return ATan2(complex, real) * 2
End Method

Function Lerp:Spinor(startVal:Spinor, endVal:Spinor, t:Float)
Return startVal.GetScale(1 - t).GetAdd(endVal.GetScale(t)).GetNormalized()
End Function

Function Slerp:Spinor(from:Spinor, dest:Spinor, t:Float)
Local tr:Float
Local tc:Float
Local omega:Float, cosom:Float, sinom:Float, scale0:Float, scale1:Float

'calc cosine
cosom = from.real * dest.real + from.complex * dest.complex

'adjust signs
If (cosom < 0) Then
cosom = -cosom
tc = -dest.complex
tr = -dest.real
Else
tc = dest.complex
tr = dest.real
End If

'coefficients
If (1 - cosom) > 0.001 Then 'threshold, use linear interp if too close
omega = ACos(cosom)
sinom = Sin(omega)
scale0 = Sin((1 - t) * omega) / sinom
scale1 = Sin(t * omega) / sinom
Else
scale0 = 1 - t
scale1 = t
End If

'calc final
Local res:Spinor = Spinor.Create(0, 0)
res.complex = scale0 * from.complex + scale1 * tc
res.real = scale0 * from.real + scale1 * tr
Return res
End Function

End Type

One caveat in the above code is that any angle from 0-360 is mapped to 0-180 in the Spinor. What does this mean? It means if I want to represent the angle 270, I need to divide it by 2, creating the spinor with the angle 135. Now when I call GetAngle() it will return 270. This allows us to rotate smoothly through the whole 360 degrees correctly. This is explained here in more detail.

So now I want to spherically interpolate (slerp) between 2 2D angles. Well, if I call Slerp() on the spinor type, it’ll do just that, giving me another spinor that sits between the 2 angles. I’ve read a couple of articles in the past on slerp and how to implement it, but in order to get things done I just used a publicly listed code snippet and ported it to BlitzMax. That whole article is worth the read and recommended, even if you’re just working in 2D.

Now that the concept of spinors is encapsulated in the Spinor type I wanted just a basic convenience function that I can feed 2 angles and get another angle out:

Function Slerp2D:Float(fromAngle:Float, toAngle:Float, t:Float)
Local from:Spinor = Spinor.Create(Cos(fromAngle / 2), Sin(fromAngle / 2))
Local toSpinor:Spinor = Spinor.Create(Cos(toAngle / 2), Sin(toAngle / 2))
Return Spinor.Slerp(from, toSpinor, t).GetAngle()
End Function
The ‘t’ variable is the time value from 0 to 1 that determines how far to interpolate between the 2 angles. A value of 1 gives you toAngle, a value of 0 gives you fromAngle, and anything in-between is just that…in-between.

Now I can call Slerp2D(300, 10, 0.5) And get the correct angle without worrying if I’m interpolating in the right direction :).

Also, I want to point out that the above code was just used in a quick proof-of-concept app. It’s somewhat sloppy and, to be honest, I’m not sure I named the portions of the spinor correctly (real, complex), but it works. For clarity and to save time I didn’t inline the Slerp2D function or the Spinor type methods, so it generates a lot of garbage. You would need to optimize this to use it in a tight loop somewhere. Any suggestions are welcome.

As for references, I did some searching and got most of the ‘missing link’ materials I needed to bridge the gap between quaternions and spinors from: