Sunday, 21 December 2008

Terminology: Ensure

What's in a name? Well, quite a lot actually. At best, a poorly chosen name gives little or no insight into the nature of the thing it is naming, and at worst it can be misleading and send you down the wrong path entirely. Conversely, a well chosen name conveys the meaning and intent of a variable, method or project.

There's plenty of literature available on the topic, so I'm not going to belabour the point any more, other than to give a couple of examples of names that I try to use consistently in my projects. I'll add other examples as and when I remember them!

The first, I've already mentioned in a previous post, and that name is "Scaffold". To me, it means "something I've put up as a temporary structure, that isn't intended to be there when the final product is released".

The second name is used when I want to get a reference to something that may not have been created yet.

If I was going to create a foo object, I'd probably use "CreateFoo". But I can't use "Create" because the object might already exist.

If I was going to retrieve a foo object, I might use "GetFoo". But again, I can't use "Get" because the object might not have been created yet.

So the term I use in this situation is "Ensure". I think "EnsureFoo" conveys the meaning and is distinct from "Create" and "Get", and doesn't mislead.
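A minimal sketch of the pattern (the Foo class and backing field here are just for illustration, not from a real project):

```csharp
private Foo _foo;

// EnsureFoo: creates the Foo on the first call, and returns the
// existing instance on every call after that.
public Foo EnsureFoo()
{
    if (_foo == null)
    {
        _foo = new Foo();
    }
    return _foo;
}
```

Whether EnsureFoo is called first, last, or somewhere in between, the caller always gets a valid reference back.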

Saturday, 20 December 2008

Delete the font

Originally, I could read every website that was thrown at me.

Then, ClearType arrived, and I had the option of "improving" the readability on my LCD screen. I tried ClearType and decided I preferred my text without it, so I turned it off. Things were still fine.

But then, sites started using fonts that seemed to rely on ClearType to be clear enough to read. The fonts I struggled with were mainly Calibri, Segoe and Vera.

(I won't bother posting screenshots, because you won't be able to see them as they appear on my monitor.)

I tried to find ways to disable certain fonts on websites, replace font X with Y, or make a global stylesheet change, but in the end I found the way to make those sites readable was simply to delete the fonts I struggled to read. :)

Thursday, 18 December 2008


I wonder if there's a website where you can post your ideas for coding projects. I had a (very) brief search, but didn't find anything.

Various software ideas pop into my head from time to time.

Here's one of them:

Connecting people via a database that correlates people with their hobbies.

By chance I found out that a colleague read some of the same books as me - we had a bit of a chat about the books and it made a nice change from talking about work. I wondered how many other colleagues read the same books as me. So originally the database idea was just for work. Then I thought it could easily be expanded to let people in local neighbourhoods find others with similar interests. Then I thought it could easily be expanded into a global database.

The trouble would be getting people interested enough to fill in their details. Writing the database and the interface would be easy by comparison!

Wednesday, 17 December 2008

More Customizations

Two more customizations - this time relating to the system notification area.

When working on a new PC, sooner or later I end up installing 4t Tray Minimizer. I like to keep a clear taskbar, so if I have several programs open at once and I'm not actively working on one, but don't want to close it, I can right-click its minimize button and it turns into an icon in the system notification area.

Secondly, when I have too many icons in the notification area, I like to decide which icons don't need to be there at all. For the trial period, I had PS Tray Factory installed, which mostly did what I expected of it, though it did seem to forget my settings from time to time (although that could have been one of the trial limitations).

I'd quite like to have a go at writing my own program that manipulates the icons in the notification area. When I have time. One day. :)

Tuesday, 16 December 2008

Conversation Context

Why are mobile phones so distracting that over 30 countries have made it illegal to drive while operating one? If having a conversation with someone over the phone is so distracting, why isn't it illegal to chat to a passenger in your car?

In England, at least, it's legal to use a mobile phone while driving, as long as you are operating the phone "hands-free". Personally, however, I wouldn't feel safe operating a mobile phone while driving, regardless of whether or not I could use the phone and keep both hands on the wheel.

The reason is to do with context, and it doesn't just apply to using a mobile phone in a car - I have the same issue with landlines and three-way-conversations involving somebody in the room with me.

The trouble is, in both examples, I'm the piggy-in-the-middle, but neither of the other "players" communicates with each other directly.

In the landline example, the other players are obviously the person on the other end of the line and the person in the same room as me. In the mobile example, one player is the person on the mobile, while the other player is the environment in which I'm driving my car.

If somebody was sat in the car with me and the lights turned green, they could see that I was concentrating and hang fire with the conversation until they could see there was less demand for my attention. But the person on the other end of the mobile can't see the road, doesn't know when to pause, and so will carry on talking and distracting me without realising.

I'm sure there's an analogy to programming here, but for the life of me I can't see it... :-S

Monday, 15 December 2008


Ever find yourself commenting out lines of code, or adding dummy values, or setting up variables just to test something, then once the test has finished, going back through your code removing the comments, commenting out the dummy values and changing the variables back? And then repeating that sequence when you find that you hadn't actually finished testing?

I do.

Steve McConnell calls this code "scaffolding code".

I also have a tendency to forget to remove this code once I've fixed my bug or finished my test. So I've gotten into the habit of marking the change with a comment similar to

// Scaffold: added dummy value
int dummy = 3;

Sometimes my scaffold comments get lost among a sea of other code, and I forget to remove them. So to group them together, I've started using a single scaffold file / class. That way there's only one place to look for the scaffold code.

That's still susceptible to me forgetting about the scaffold before publishing my program, so I think the next step might be to create a user defined scaffold file that can be edited and stored locally, so even if I turn on all my scaffold constructs, nobody else sees them.
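One possible sketch of that idea in C# (this is my own guess at an approach, assuming a SCAFFOLD conditional-compilation symbol that is only defined in a local build configuration that never ships):

```csharp
// Scaffold.cs - this whole file compiles to nothing unless the
// SCAFFOLD symbol is defined, e.g. in a local-only build configuration.
#if SCAFFOLD
static class Scaffold
{
    // Scaffold: dummy value for testing.
    public const int DummyValue = 3;
}
#endif
```

With the symbol undefined by default, all the scaffold constructs simply vanish from everybody else's builds.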

Wednesday, 29 October 2008

CloudLauncher update

Well, after a brief hiatus, I'm back onto CloudLauncher.

Just in case you don't know what it is, it's a launcher app. Drag your files / folders / executables / shortcuts onto CloudLauncher - it creates a shortcut to your file. Double-click the shortcut - your file opens. Simple.

So how is it any different to Windows Explorer?

Well, the main reason I wrote CloudLauncher was because Windows forgets my icon positions. In fact, this is probably the only reason to use CloudLauncher. For now.

Version 0.1 is available here.


To do:

Add option to delete shortcuts!
Add option to line up icons.
Add option to edit the target of the shortcut.
Add "F2" to rename shortcuts.
Add option to choose (or disable) the hotkey.

Bug: Copy the name of the shortcut instead of the name of the target!
Bug: Don't crash if the global hotkey is in use!

Update: 29th October 2008
Added libraries to the zip file so that it compiles!

Tuesday, 28 October 2008

Minimize to the Notification Area

CloudLauncher is minimizing to the notification area, then restoring when the notification icon is double-clicked.

Only, for hours it wasn't. Instead of restoring, it was appearing as a tiny "minimized" window / title bar.

Turns out the "restore" / "show" order is important.


WindowState = FormWindowState.Normal;
Show();

doesn't work, but

Show();
WindowState = FormWindowState.Normal;

does.

Wednesday, 8 October 2008


I once walked in on a friend who was chuckling to himself as he finished writing an email. Asking him what he found funny, he told me he'd just realised that it would be very easy, when signing an email with "regards" to miss the 'g' and hit 't' instead.

For some reason, my fingers refuse to type certain combinations.

"tion" always comes out "tino", as in "combinatino", "productino" and "sectino"
"updates" keeps coming out as "udpates"
"cheers" as "cjeers"
"installed" as "isntalled"
and even
"Tim" as "Time".

Do you have any amusing typos?

Monday, 6 October 2008


My first code project article:

AppendDate appends the current date/time to a file. I find it useful for archiving my work.

How exciting :)
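The core of the idea could be sketched like this (my own illustrative guess, assuming it appends the date/time to the filename - this isn't the article's actual code):

```csharp
using System;
using System.IO;

class AppendDate
{
    // Copies a file to a timestamped name,
    // e.g. report.doc -> report_2008-10-06_0930.doc
    static void Main(string[] args)
    {
        string source = args[0];
        string stamp = DateTime.Now.ToString("yyyy-MM-dd_HHmm");
        string target = Path.Combine(
            Path.GetDirectoryName(source),
            Path.GetFileNameWithoutExtension(source) + "_" + stamp
                + Path.GetExtension(source));
        File.Copy(source, target);
    }
}
```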

Tuesday, 23 September 2008

I'll take two

I'm chasing those ideals again. The Right Thing to do is to separate your business logic from your interface. I read one piece of advice which said that if you want to know whether you've succeeded in separating out those responsibilities, ask yourself: "is my app skinnable?"

I sometimes find it difficult judging where the line should be drawn between the UI and the underlying data. So my idea (which is probably similar to the skinnable question) is to write two applications side by side, that each access the underlying business code. If I've done the job correctly it should be obvious (or at least easier to see) where the shared data begins and the custom UI ends.

Similarly, while I'm writing my next game engine, I'll try to write two (simple) games with the engine simultaneously. This way I can (hopefully) pull all the common code from both games into the engine. It helps me know where to draw the line.

Saturday, 20 September 2008

Set in Stone

Many years ago, when I didn't know better, I wrote some games, partly as a learning exercise, partly to put in my portfolio, partly for fun. They were a Tetris clone (Tetratim) and a breakout clone (Bounce). But it didn't matter that they were clones because the main reason I wrote them was they gave me the experience of taking a game all the way from start to finish.

Tetratim was the first complete game I ever wrote. It has a start screen, an in-game screen, a game-over screen, and a high score table. Apart from the high scores, all of the data was hard coded into the game-code. (I'll come back to this point in a minute).

Bounce took things a little bit further. It has several UI screens at the front, and included sound effects. The levels were designed in an editor that I wrote. I'd started learning to separate out my code and my data.

Even though I didn't stop programming after that, the game writing dried up. Not through lack of enthusiasm about the projects - I had plenty of exciting ideas that I wanted to implement. No, the reason the writing dried up was because I had become obsessed with ideals. I didn't want to hard-code data for another game. I didn't want to take short-cuts in my code. I wanted to do everything the Right Way.

So I spent lots and lots of time learning the correct way to do things, writing lots of little test programs, trying out new techniques and effects, and storing them all up for later. But for several years I didn't come up with anything that was as near finished as Tetratim or Bounce.

Fast-forward a few years to when I was preparing to enter the games-industry. Suddenly I had an urgent need to have demo material - something that showed that I actually did know how to program games! Urgency is an excellent motivator. Within the space of a few days I threw together a demo of a 3d car driving around on a surface.

A lot of the demo wasn't done the Right Way. The worst offence (in my eyes) was that once again, I'd hard-coded my data into my program. But it didn't matter. It was a neatly packaged, little demo. And that demo got me my job in the games industry.

At first I spent a lot of time programming in my spare time as well as at work, but I was still chasing the ideal. I wanted to do things the Right Way. Again a dearth of completed projects.

Which leads me to now. I have epiphanies from time to time. One of my most recent was "Nothing is set in stone."

It doesn't matter if it isn't written perfectly first time - get something written, and you can always change it later if it isn't correct. Make things modular, if at all possible, then if something is wrong you can "unplug" it and plug something else in in its place.

Ok, so don't dive in straight away - some planning is always good, but don't agonize over the fine details to the detriment of the project. That's what I did in between my demo games, and it meant I didn't get anything completed!

Sunday, 7 September 2008

SMS to Email

Found the email address to send emails to my mobile.

Now I can get sms alerts when I receive an email.

The tape fell off

I was getting a stream of exceptions in my output window when doing a drag-drop. They didn't seem to be affecting my program; the exceptions were in Windows code (rather than mine) and weren't stopping my program running - just appearing in my debug output window.

I googled for the exception. I couldn't find what was causing them, or how to stop them occurring. But I did find a post explaining how to stop them appearing in my output window (right-click the output window, untick "Exception Messages").

Obviously it would need resolving eventually, but this worked as a stop-gap solution.

Trouble is, the "solution" reminds me of an episode of the Simpsons...

Lisa: Hey dad, that light says check engine...
Homer: Uh oh... the tape must have fallen off.... there, problem solved (engine stops)

Thursday, 4 September 2008


I tried out google chrome.

It's... interesting.

But I'm not going to use it for now because:

* The title bar doesn't fit in with my other windows.
* If I close the last tab, chrome "helpfully" closes too.
* No adblock.

Tuesday, 26 August 2008


I've always been a little superstitious when it comes to computers. After all, "Any sufficiently advanced technology is indistinguishable from magic." I remember the behaviour from when I was about 7 years old, when I first started playing games on my ZX Spectrum, and 25 minutes into the load from tape I'd be muttering "please work, please work, please work" under my breath (I was used to loads failing regularly).

These days, I'm sure I do plenty of unusual things while using my computer, but I don't tend to notice them until somebody points them out to me. The thing is, like all good superstitious behaviour, the reason gets forgotten and only the ritual remains.

Occasionally I'll remember the reason why I do things in a certain way. Even more rarely, I'll see someone else doing something that I've seen myself do, and feel vindicated!

Here are some of my superstitious actions, and reasons why (where I remember them).

Left-Right click to bring up a context menu.

The normal way to open a context menu under Windows is to right-click. I click left then right (like I'm drumming my fingers). Reason: some programs interpret the first click (right or left) as a "focus" click, so a single right-click doesn't always pop up a context menu.

Click-Enter instead of double click.

The normal way to navigate into folders in Windows Explorer with the mouse is to double-click the folders. I use the mouse to select the folder, then press Enter on the keyboard to open it. Reason: sometimes double-clicks are missed by Windows (if I have a poor mouse, or my finger movements are too slow). Also, after double-clicking for the hundredth time in a day, I start to feel the RSI kicking in... I've seen someone else exhibiting the exact same behaviour :)

Hyperlink click follow-up click.

After I left-click a hyperlink in a web page, I click in some white space on the page. I don't know why I do this. It may relate back to when I had a dial-up connection and I wanted to know if the page was responding. It may just be that I don't like the link being outlined. I really don't know about this one.

Mark as read, before delete.

In Microsoft Outlook I read email in the preview pane. If I don't navigate away from an email in my inbox, it stays marked as unread. If I don't need to action an email, I don't just delete it - I press "Mark as Read" and then delete. Otherwise my "deleted items" folder draws attention to itself by being written in bold with a number beside it, telling me how many "new" emails it contains.

I'll post more if I think of any.

Monday, 25 August 2008

Music to my ear

I first noticed it about 10 years ago. It only happens when I'm really tired, but can't sleep, and in the almost complete stillness of night. And it only happens very rarely, but when it does, it's amazing.

I was on a caravan holiday with my parents, and late one night, while I was trying to sleep, amongst all the other night time noises, I could hear a dripping sound coming from the refrigerator. The dripping was very regular, and sounded almost like it was a percussion instrument, setting the beat, setting the rhythm. And then something very strange happened. I started to hear music. Not just a simple tune, or melody, but I could hear the whole damn orchestra. I could hear the percussion. I could hear brass. I could hear strings, first playing legato, then pizzicato. I could hear woodwind. There were changing dynamics. There were melodies and counter melodies. I could hear the whole thing.

I knew it wasn't on a radio or cd player somewhere because the music was in perfect time with the dripping refrigerator. And I noticed something else about it - that I could decide where the music went. I realised that even though I was awake, it was almost as though I was dreaming, but whatever the explanation, the music was going on in my head. While it was happening, I remember feeling as though the music was just inside my ear.

At the time, I was a budding musician - and I was trying to come up with my own pieces. And when I heard this music, I felt that if I could only transfer it to paper, it would be a masterpiece. But I was also aware of the similarity between this music, and any regular dream, and knew that it would all be forgotten soon afterwards. Sure enough, after I woke up, I couldn't remember the themes, the tunes, any of what sounded so amazing at the time.

Over the years since, I've heard similar music, just inside my ear. It normally occurs when I'm really tired, but maybe after I'd been drinking alcohol, or caffeine, and I can't get my body to switch off and go to sleep.

I'd thought about looking it up on the Internet, but had no idea how to go about searching for it. "Music in my ears" seemed a daft thing to search for, so I let it be.

Just last year, I was in a bookshop, and saw a book on display called "Musicophilia - Music and the Brain" and as I was flicking through, I came across a letter to the author from a lady who had been hearing music in her ears when there was no external source of music. You can imagine my excitement when I realised I may have found something that could help me research what I was experiencing. The phrase "Musical Hallucination" was mentioned, and that was all I needed to type into Google.

The explanation that seems right to me is that tinnitus (ringing in your ears) is being picked up, not consciously but subconsciously, by your inner ear, and your brain decides to interpret the ringing as music.

I've seen stories of people hearing choirs, christmas carols, and orchestras. There is advice on how to control the sounds and make them fade away - but to be honest, I actually enjoy the music. I only wish I could transcribe it as fast as I hear it!

I last heard the music a few weeks ago. I once got up in the middle of the night to sit at the piano and tried to repeat what I'd heard. I'm going to try to keep that up, and one day, who knows, I might be able to share my masterpiece with everyone else!

Sunday, 24 August 2008

Depth First Coding and Spirograph

Various analogies come to mind when I'm coding. Recently, the terms "Depth First Coding" and "Spirograph" have come to mind.

I'll explain.

When writing software, there's so much that needs to be done. But of all the items on the todo list, we coders prefer working on new features to the other things - fixing bugs, dealing with security issues, general code "housekeeping" - basically doing what we know we ought to, but never quite find the motivation for. I know I certainly want to get to a stage where I can see the whole process of using my software from start to finish.

I think it's good practice to get a proof of concept up and running as soon as possible. As well as making sure you're not wasting your time with a project, you can get feedback right from the start, you can adjust your plan based on what you see, and let's face it - it's more fun - it can keep you motivated through the more mundane tasks.

To me, this feels like depth first programming - get the skeleton of the app complete, from top to bottom, then go back later and fill in the breadth - the bugfixes, the tidying up, the refactoring - the meat.

A breadth first approach would be to get all of the minutiae sorted - making sure each function was written as efficiently as possible, making sure the variable and function names were as descriptive as possible, commenting anything that wasn't clear, and basically trying to make each component as near to perfect as possible.

My problem with this approach is summed up in the phrase "Premature Optimization". We're going for perfect first time round, but at a cost: if it turns out that the routine we just spent the last day perfecting is no longer needed, that's a day wasted.

The way that a program grows, almost organically, reminds me of Spirograph. After the first iteration, a very basic, bare bones implementation is produced. After several more iterations a pattern starts to appear. After hundreds of iterations, the final picture is revealed. The pen traces a path around the spirograph, never lingering in one area for long.

This is how I see code growing organically. On top of the initial foundation, the coder grows the program piece by piece, often moving from one area to another. By jumping around the project, the coder doesn't get bored with the area they are working on.

But it's also worth noting that no matter how many times you go around a spirograph, and no matter how many different places you choose to start from, the pen doesn't touch every part of the paper - there are always some holes.

I wonder how the holes fit into my analogy...

Saturday, 23 August 2008


Every time I press Windows-D (show desktop), I think "FAIL".

I've laid out my shortcuts on my desktop. The shortcuts are there because they are the quickest way still available of getting to what I'm after. I say "still available" because the absolute quickest way, in my experience, is a WinKey shortcut, but there are only so many apps you can assign to obvious keys before you start having collisions.

Now, for better or for worse, the layout of the stack of windows on my desktop usually ties closely with the stack of tasks I have in my head. If I have 10 windows stacked on top of one another, chances are the topmost is what I was most recently using, the next is the next most recent, and so on.

Ok, admittedly, 10 windows is a bit excessive, and I'll probably only remember two or three things that I was doing. But at work I use three monitors, so with plenty of screen real estate, spatial location of windows can also help me with remembering what I was working on and where it is in my mental stack.

Unfortunately, undoing "Show Desktop" doesn't work. The window stack is all messed up, and so now I have to spend time working out where I was. It may have only been for a few seconds, but if I was concentrating on something it could take a while to pick up all my trains of thought. If I'm frequently running the various shortcuts from my desktop, that's a lot of shoving my brain in and out of context. This is why I think of a "Show Desktop" as a FAIL.

I want to run my target app as quickly and seamlessly as possible. From this thought comes the seed of my idea for CloudLauncher.

As I've already mentioned, the most efficient way of doing something depends on your context. If I'm sat back in my chair, with a cup of coffee in one hand and the mouse in the other, there's no way I'm going to start typing a command to run an app. Similarly, if I'm in touch typing mode with both hands on the keyboard, it's going to slow me down to reach out for the mouse to click on a shortcut icon. Depending on context, Winkey, Quick Launch, or even the Start Menu may turn out to be the quickest way to achieve my goal. And when I happen to be in keyboard and mouse mode, Windows-D to show desktop, followed by a double click on an icon on my desktop is usually the quickest.

However, since showing the desktop (or hiding everything, as I like to think of it) has fallen out of favour with me, I'd like to write an app that can fill the gap - so a keyboard shortcut to launch my launcher app, then a click (or double click) to launch my target app.

I experimented with Skil, which almost does what I want, but it looks like you're constrained to an auto-arranged grid, whereas with the desktop you can place your shortcuts wherever you want.

Various ideas went through my head, such as a radial menu, where you group items at one level, then when you move through the group name it expands into another level of groups and shortcuts.

To save on desktop real-estate, the items could move relative to the mouse, similar to apple's dock.

Some shortcuts are useful when you first set them up, but become less useful over time. Laziness usually keeps me from cleaning up unused shortcuts. So I was thinking that shortcuts could gradually migrate towards the edge of the screen until they drifted off completely. (This is where the name Cloud Launcher came from).

But then I decided all of these can wait until version 2! You can fit a heck of a lot of shortcuts on one screen. Let's think about saving screen space when it actually needs saving!

I'll post more on how I'm getting on.

Friday, 22 August 2008


If I save up all my cardboard, cans and jars in order to recycle them, then drive to the recycling depot, how much recycling do I have to do to "break even" - for the benefit of recycling to outweigh the "damage" caused by driving there? (By damage I mean the fuel used by my car.)

Thursday, 21 August 2008


The wizards I see on a day to day basis don't go to Hogwarts. They live on my computer, and do things like installing applications.

Once upon a time, I used to carefully check every option on each wizard - the option wouldn't be there if it wasn't important, right?

So I'd agonize over which help files to include with spybot, and worry about whether I'd be missing out if I didn't include additional skins for winamp.

But after a while, the wizards start to get in the way. So what if German help files are installed on my computer and I don't speak German? Winamp in its entirety is only a few megs - what am I saving if I carefully untick the "additional skins" checkbox?

So instead of presenting me with lots of options to tweak and adjust to my liking, why not just install with the defaults, and hide the optional extras out of the way for power users to dig out for themselves?

These days, I tend to click-click-click the next button until it does what I want it to do.

It doesn't work for the Zip extraction wizard in Windows XP though. When I tell a zip file to extract, first it tells me it's an extraction wizard (well, duh!), then it asks where I want to extract to (defaulting to the folder containing the zip file - good enough for me), then it starts extracting. But the Next button is still enabled, which means I can start extracting into the same location again and again. And that upsets the wizard, because he doesn't want to overwrite the files he's already written.

So I have to be careful when I play with the extraction wizard.

I'm sure there's a moral to the tale in there somewhere...

Wednesday, 20 August 2008

Cutting Edge

When working on a team project, how often should you update from source control?

Broken builds can take hours or even days to fix. On several occasions I have updated source code (sometimes even by accident) only to find that the build was broken. I would then spend the rest of the day trying (and failing) to fix the bug, go home frustrated, come in the next day, update my source code, and find the bug had been fixed. On those occasions I felt like I could have just gone home when I found the build was broken and not come back until it was fixed, and that my time would have been better spent.

Of course, sometimes I'm the one who fixes the broken build - somebody has to do it. Which, if I'm honest, means I can't really justify the go-home-when-it-breaks approach.

Back to the question of "how often should I update?"

On the one hand, you could adopt the attitude of "I'm only interested in sub-component X. Therefore I'm not going to update the entire codebase unless I really have to." It's a sensible approach. It means you don't risk killing your productivity every time you update.

The trouble I've had with that approach is, whenever I found a bug in a section of the code, and asked the owner of that code about it, the first thing they would ask is "have you updated?".

On the other hand, you could get the latest code whenever it becomes available. The trouble with staying cutting edge is that not only do you get cutting-edge bugfixes - you also get cutting-edge bugs!

I guess like a lot of development issues, there is no right or wrong answer...

Monday, 18 August 2008

Destination Desktop

Once upon a time, I used to think that icons on the desktop just made a mess.

There's a saying - "Tidy house, tidy mind". Not only do I agree with the saying, I think it extends to computers too. As I've already mentioned, I want my computing experience to be as efficient as possible. Too much clutter on the desktop was counterproductive - especially when the dreaded Auto-Arrange was turned on! I've even gone as far as removing the My Computer icon.

A few weeks ago I started "allowing" myself to add one or two shortcuts to my desktop. Nothing overboard - a shortcut to a folder here, a shortcut to an app there, a shortcut to a batch file over there. But one folder wasn't enough. One app wasn't enough. Soon I needed access to a few folders, a few apps. When I had about 10 shortcuts, I started to cluster them on the desktop. Gradually, completely by accident, my desktop had become a launcher.

Using the desktop has pros and cons.

On the plus side, by arranging my shortcuts into clusters, spatial memory gets to play a part in finding them, which could, in theory, speed things up. In a static, alphabetical menu I'd have to scan through names in alphabetical order to pinpoint my shortcut - fine when dealing with files on a one-off basis, but using the same shortcuts several times a day, it can get a little tedious.

Also, desktop icons are larger than icons in the start menu, which makes them quicker to hit.

On the minus side, I have to minimize whatever is in front of my desktop every time I want to access the shortcuts. This can really disrupt my flow.

So I decided that what I needed was a launcher app that could let me arrange shortcuts in a spatially convenient way, and that could pop up in front of my other windows on demand.

The closest I found to this was Skil. It's close to what I'm after, but it ain't perfect. I want to be able to place my icons wherever I want - not confined to a side-by-side grid.

So if I don't find an app that does this, then I'll give writing it a go.

Update: 19th July 2008
How ironic. When writing about the cons of using the desktop as a launcher, I considered putting something about windows "forgetting" where you left the icons and "helpfully" auto arranging them for you. But I thought "I haven't seen windows forget my icon positions for a long, long time. Perhaps it's fixed."

I arrived at work today, my machine had crashed, and all my icons had piled up in a mangled mess at the side of my screen.

Saturday, 16 August 2008


Ok, maybe I wasn't clear in my inspiration post. Just to clarify: the "inspiration thing" usually only happens for me once in a blue moon!

Strangely, my time estimation ability seems the opposite way around to most people. I think most people tend to underestimate the "bigger picture" and it's only when they drill down to the sub tasks and really think about what each element consists of that they put a more realistic estimate together for the overall project.

I think I tend to overestimate projects on a whole (because I'm pessimistic / realistic), then when I try to estimate how long sub tasks will take, I forget how pessimistic I am and grossly underestimate them.

Many years ago, I came up with a rule of thumb: ANYTHING you do on a computer will take 10 times longer than you expect it to. The simplest of tasks always seemed to take an order of magnitude longer than I had planned. (I wanted to call it "Tim's law of computing" - then I discovered Douglas Hofstadter already coined a virtually identical law).

I still forget the rule of thumb, even today.

Saturday, 9 August 2008

Code Re-use

I've always believed re-using code is a good idea, but it's not until recently that I've felt that I'm doing it properly.

When I programmed for myself (before I was hired as a programmer), I wrote in C++. I tried to make libraries (libs and dlls) for re-use in future projects. My friend / housemate / mentor Denis advised me not to bother making libraries and just copy the code into the latest project. I guess for personal projects there isn't really any point wrapping up functionality in dlls. But what about bigger projects? I'm starting to think that even with bigger projects, it's not worth the hassle...

Denis gave me some more advice, when I was trying to put an engine together to help me write games: "Write the game, not the engine". Which I suppose is correct if you want to write the game, but if the process is more of a learning exercise, then I think it's right to work on the engine. But then, I suspect that every closet games-coder is working on their own engine, and a lot of games don't end up written.

I used to write whatever kind of module I thought was necessary for my game, then, when the time came to pull all the code together, I planned to lift all the code from each of the modules and combine them to create my engine / game. Needless to say, I never got round to creating my engine, or my game...

At work, I write tools in C#. Suddenly, I have lots and lots of scope for re-use. I'm not 100% sure of the reason for this, but I feel that the C# language itself is very amenable to code re-use.

Now I don't write everything with re-use in mind. If I only need to use something once, then the time spent making it more generic is wasted. The first time I need a tool to do something, I'll write a very specific tool for the job. Then if I decide I need it again, only then will I make it generic. This process seems to serve me well enough.

So far, I've written the following:

A generic options dialog that can be plugged into any app I write in future. (Currently it can store options that are strings or bools - more types are planned for the future.)

An Argument Parser. (Pass in the args from your program, and then you can ask it for string values or if a bool option is present).

A Serializer. (Until I wrote this, every time I wanted to load or save a binary object I had to create the file reader / writer and the serializer separately, then pass my data to the serializer - now I just create my generic serializer.)

A Cascading Checkbox Tree. (Out of the box, the treeview doesn't propagate ticks up or down the tree. My cascading checkbox tree does.)

A class to convert absolute to relative paths and vice versa.

A class to handle File Operations. (Typically, things like "Close" go through a process of checking if the file needs saving; asking the user if they want to save it; saving the file; if the file doesn't have a name, presenting the user with a save dialog; if the user wants to open a file, checking that their current file is saved; and so on. My File Operations class handles all of this for me.)

And even an OK-Cancel form. (I always seem to be writing dialog boxes that have OK and Cancel buttons on them. And I always have to change the text on the buttons, change the names of the buttons, set up the accept and cancel properties of the form, and tell the form to close when either button is pressed. Now I just re-use my OK-Cancel form.)
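To give a flavour of the sort of thing I mean, the argument-parser idea can be sketched in a few lines. (This is a hypothetical Python sketch, not my actual C# class; all of the names here are made up.)

```python
class ArgParser:
    """Minimal sketch: pass in the args, then ask for string values
    or whether a bool flag is present."""

    def __init__(self, args):
        self.flags = set()
        self.values = {}
        i = 0
        while i < len(args):
            arg = args[i]
            if arg.startswith("-"):
                name = arg.lstrip("-")
                # a following token that isn't itself a flag is this option's value
                if i + 1 < len(args) and not args[i + 1].startswith("-"):
                    self.values[name] = args[i + 1]
                    i += 1
                else:
                    self.flags.add(name)
            i += 1

    def has_flag(self, name):
        return name in self.flags

    def get_value(self, name, default=None):
        return self.values.get(name, default)
```

Usage would be along the lines of `p = ArgParser(["-verbose", "-out", "result.txt"])`, then `p.has_flag("verbose")` and `p.get_value("out")`.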

Re-use rocks!

Wednesday, 6 August 2008


Undo is more difficult to implement than it appears at first glance.

A typical implementation of an undo manager will store a list of undo data (the UndoList) and a list of redo data (the RedoList).

Undoing an action will pop the last action from the UndoList and push it onto a RedoList. Redo will perform the reverse - pop from the RedoList, push onto the UndoList.

At its simplest, undo / redo data will contain the 'before' and 'after' states of some object in the application, and a command (a function) that will operate on that data.

So for example, if an action adds the next character 'd' to the Collection "MyCollection" (which currently contains 'a', 'b' and 'c'), the undo data might look like this:

Before: Collection c1 = "a, b, c"
After: Collection c2 = "a, b, c, d"
Command: func(Collection before, Collection after)
    MyCollection = after;

If we ask the undo manager to undo this action, 'func' gets called with 'before = c2', and 'after = c1'. If asked to redo the action, the 'before' and 'after' are switched, so 'before = c1' and 'after = c2'.

Easy, right?

Some questions: How do we create c1 and c2? Are they each a clone of "MyCollection"? What if the collection contains something a bit more complicated than characters, say class objects? What if the objects are collections themselves? Should c1 and c2 be shallow copies or deep copies of MyCollection? How expensive is it to deep copy c1?
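To make those questions concrete, here's a minimal Python sketch of the snapshot approach (the names are mine, not from any real framework). Note the deep copies - their cost is exactly what the questions above are worrying about.

```python
import copy

class SnapshotUndoManager:
    """Stores (before, after, command) triples; undo and redo just
    swap which state gets handed to the command."""

    def __init__(self):
        self.undo_list = []
        self.redo_list = []

    def record(self, before, after, command):
        # deep-copy so later mutations of the live object don't corrupt history
        self.undo_list.append((copy.deepcopy(before),
                               copy.deepcopy(after),
                               command))
        self.redo_list.clear()

    def undo(self):
        before, after, command = self.undo_list.pop()
        self.redo_list.append((before, after, command))
        command(before)      # restore the 'before' state

    def redo(self):
        before, after, command = self.redo_list.pop()
        self.undo_list.append((before, after, command))
        command(after)       # re-apply the 'after' state
```

The snag is that `copy.deepcopy` silently answers the shallow-vs-deep question in the most expensive way possible, and it says nothing about re-registering callbacks.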

In adding 10 items to the collection (say 'a' through 'j'), the first undo action will consist of an empty collection, and a collection containing 'a'. The undo action for adding 'b' to the collection will contain a collection containing an 'a' and a collection containing an 'a' and a 'b'. The next action will contain {'a', 'b'} and {'a', 'b', 'c'}. By the 10th action, there will be 100 items in the undo manager (({}, {a}) + ({a},{a,b}) + ... + ({a..i},{a..j})). Not so bad with characters, but what if you have more complex classes?

And what if those class objects are registered with callbacks elsewhere in the code to have things happen to them in certain situations? Should each instance of the class object in the undo manager remain registered? Should items added to the undo manager have their callbacks revoked and reinstated when the relevant undo / redo command is called?

So instead of storing 'before' and 'after' undo data, an alternative would be to store the data and command required to change the data from the 'before' state to the 'after' state (and the command and data required to change the data back) - 'undo' data and 'redo' data respectively.

So in the character collection example above, the first undo data might look something like this:

UndoData: Item index = 4
RedoData: Character chr = 'd'
UndoCommand: func(UndoData und)
RedoCommand: func(RedoData red)

Easy, right?

Some thoughts: The UndoCommand is essentially a "delete function", and the RedoCommand is essentially an "add function".

So when creating the "add action", let's pass the "add function" and the "delete function" as the UndoCommand and the RedoCommand respectively.

When creating a "delete action", let's pass the "delete function" and the "add function" as the UndoCommand and the RedoCommand respectively.

So what was 2 functions before the undo manager (the "add action" and the "delete action") has become 4 functions with the undo manager (the "add action", the "add function", the "delete action" and the "delete function").

Every action has been split into a "put these commands into the undo manager" function, and an "execute the action" function. That's a whole lotta work to retro-fit the undo manager into an existing app.
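Here's a minimal Python sketch of that four-function split for the add / delete example (again, all of the names are hypothetical, and I've used 0-based indices):

```python
class Action:
    """One undoable action: a piece of data plus a command for each direction."""
    def __init__(self, undo_data, redo_data, undo_command, redo_command):
        self.undo_data = undo_data
        self.redo_data = redo_data
        self.undo_command = undo_command
        self.redo_command = redo_command

my_collection = ['a', 'b', 'c']

def add_function(char):
    """The 'execute the action' half of an add; also the add action's RedoCommand."""
    my_collection.append(char)

def delete_function(index):
    """The reverse of an add; the add action's UndoCommand."""
    del my_collection[index]

def make_add_action(char):
    """The 'put these commands into the undo manager' half of the split."""
    return Action(undo_data=len(my_collection),  # index the new item will occupy
                  redo_data=char,
                  undo_command=delete_function,
                  redo_command=add_function)
```

Adding 'd' becomes a two-step affair: `act = make_add_action('d')` followed by `act.redo_command(act.redo_data)`; undoing it is `act.undo_command(act.undo_data)`.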

Final situation to think about. How do you undo continuous changes? I use the mouse to drag a UI object from point A to point Z, but before I release the mouse button I've dragged the UI object through every point in between. Do we store every intermediate point in the undo manager, or just the start and end points?

It's a rhetorical question; of course I'm only going to store the start and end points. How?

Without an undo manager, every mouse-move event could simply set the UI object's x and y co-ordinates.

Mouse::Move(int x, int y)
    ui_object.SetPosition(x, y);

UIObject::SetPosition(int x, int y)
    this.X = x;
    this.Y = y;

How do we make moving a UI object undoable? We're going to need to split the action up into distinct parts.

While we're dragging the object, we would like to see it being displayed in its current location, so the X and Y values will still need to be changing continuously. What we'll need to do is remember the start location when we begin dragging, and the end location when we finish dragging, then add those locations to the undo manager, along with appropriate functions for setting them.

We end up with something like this:

Point startPos = ui_object.currentPos;

ui_object.SetPosition(x, y);

Point endPos = ui_object.currentPos;

if (startPos != endPos)
    UndoAction act;

    act.UndoData = startPos;
    act.RedoData = endPos;
    act.UndoCommand = UIObject::SetPosition;
    act.RedoCommand = UIObject::SetPosition;

    undoManager.Add(act);
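The same idea as minimal runnable Python (my own naming; the key point is that every mouse-move updates the display, but only the endpoints of the drag reach the undo manager):

```python
class UIObject:
    """A draggable object; set_position stays as simple as before."""
    def __init__(self, x=0, y=0):
        self.pos = (x, y)

    def set_position(self, pos):
        self.pos = pos

def end_drag(ui_object, undo_manager, start_pos):
    """Called on mouse-up: commit one undoable action for the whole drag."""
    end_pos = ui_object.pos
    if start_pos != end_pos:
        undo_manager.append({"undo_data": start_pos,
                             "redo_data": end_pos,
                             "undo_command": ui_object.set_position,
                             "redo_command": ui_object.set_position})

undo_manager = []
obj = UIObject(0, 0)
start = obj.pos                      # remembered on mouse-down
for point in [(1, 1), (5, 2), (9, 4)]:
    obj.set_position(point)          # every mouse-move updates the display...
end_drag(obj, undo_manager, start)   # ...but only the endpoints are recorded
```

Undoing is then `act = undo_manager.pop()` followed by `act["undo_command"](act["undo_data"])`, which puts the object back where the drag started.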

So what seems like it should be a simple enough mechanism, actually turns out to be pretty darn complicated...

Saturday, 12 July 2008


Isn’t inspiration brilliant? You estimate how long tasks will take on your todo list. You put down "1 hour" for one of your tasks, because it "could be easy, but there could be some little nuances that make it a little tricky." One minute into the task, you think "hang on, what if I do this" and bang, the task is complete :)

Where do you get inspiration from? Well, that comes from practice…

Saturday, 5 July 2008

Dynamic PropertyGrid

People seem to be stumbling across my blog after searching for how to create a dynamic property grid with attributes in C#.

Presumably, people want to be able to hide some fields, and make others read only, which can be done easily at design time by adding "Browsable" and "ReadOnly" attributes. But try to find any information on the Internet about changing those attributes at runtime and you'll find yourself crashing straight into a brick wall.

From what I gather, attributes are compiled into the executable, so being allowed to change them dynamically at runtime would mean making changes to the executable - which is probably a Bad Thing(TM). Which means it can't be done.

What you end up having to do is create a dynamic class on-the-fly, containing the fields that you want to appear in the property grid, then create a property descriptor per property. Then, when the property grid is dismissed, you copy the data out of the temporary class back into your original class.
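The C# details are in the CodeProject sample, but the shape of the workaround can be sketched in Python, where classes can also be created on-the-fly (purely illustrative - the real C# version uses PropertyDescriptors, and these function names are mine):

```python
def make_proxy(obj, visible_fields):
    """Build a throwaway class containing only the fields we want shown,
    and copy the current values into an instance of it."""
    proxy_class = type("DynamicProxy", (object,), {})
    proxy = proxy_class()
    for name in visible_fields:
        setattr(proxy, name, getattr(obj, name))
    return proxy

def copy_back(obj, proxy):
    """When the 'grid' is dismissed, copy the edited values back."""
    for name, value in vars(proxy).items():
        setattr(obj, name, value)
```

Fields left out of `visible_fields` simply don't exist on the proxy, which is the moral equivalent of flipping the Browsable attribute off at runtime.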

There's a sample over at CodeProject.

Good luck!


Whenever I have to work on a new computer for any length of time, I'll start to customize it. Naturally, I have the most customizations on my home computer; my work machine comes a close second, and any other machine I work on will have some of my customization footprint left behind.

I won't refuse to work on a machine until it suits me - I don't have the time to put all my tweaks on every box I touch, but I will add my personal touches as and when I need them.

When I look at other people's lists of must-have utilities, registry hacks, and firefox plugins, I will often look down the list and not see a single thing that interests me. That's because I don't install something that "could be useful", but rather go hunting when and where I find a niche. I'll think "wouldn't it be good if there was a program or tweak or plugin that did xyz?" And if I can't find the add-on that I'm looking for, I'll write it myself.

The majority of my customizations are to help me get to what I want as quickly as possible.

My first customization is switching the start menu to classic mode. The "favourites" list is too ephemeral for my liking. I like my shortcuts to stay where I put them!

Once I've tried searching for a file a few times with windows explorer, I'll install TweakUI, one of Microsoft's PowerToys, because I'll quickly get fed up of having to specify that I want to search through "All files and folders" every time I search. There are lots of things that can be tweaked with TweakUI, but "Use classic search in explorer" is the main tweak that I use.

Winkey is a must-have. I can set up any keyboard shortcut I want using the Windows key in conjunction with other modifiers.

In winkey I'll have Windows + i = firefox (the "i" stands for "Internet"). Windows + n = notepad (indispensable to a programmer - none of this wysiwyg malarkey!). Windows + s = command prompt ("s" stands for "shell"). Windows + C = Calculator. Out of the box you get Windows + F12 = regedit, and Windows + 1, Windows + 2, Windows + 3 etc. corresponding to your drives C:\, D:\, etc.

If I have some network drives mapped in My Computer and I'm disconnected from the network (and sometimes even when I am connected to the network) the My Computer window becomes slow to start up. Once I have winkey access to my drives, I have very little need for the My Computer icon. In fact, if the icon is available on my desktop, I can accidentally double-click it, and instantly regret it when I have to wait 30 seconds before my computer becomes responsive again. So I'll go back into TweakUI and remove the My Computer icon from my desktop.

In firefox, I will always install the Adblock extension (I hate ads!). I will always install the google toolbar for firefox. I will always tell the tab bar to be visible all the time. (If the tab bar only pops up when a second tab is opened, the contents of the entire page shift downwards and break my concentration.)

Baregrep is a lovely GUI grep tool which I can use to find files containing certain strings.

Certain apps "forget" that they were maximized, and when they are run again, they start up "almost" maximized. So when I click the hotspot at the top right of a maximized window (the X) to close the window, I end up closing the window behind the one I intended to! Extremely annoying! So I install AutoSizer to make sure that those apps which exhibit this behaviour always open maximized.

That's about all of my customizations that I can remember. If I remember any more I'll add to this list.

Friday, 4 July 2008

Many Ways to Skin a Cat

Today's operating systems offer many different ways of achieving the same thing. To open a recent Word document, you could find it in the recent documents menu, open Word and find it on the recent files list, open Word and use the open file menu option and navigate to the file, you could find the file in Explorer and open it from there, or you could open the file from any other shortcut that you have set up.

A few years ago, I found that the "many ways to skin a cat" approach made it difficult to teach people to use a computer (particularly when I was teaching my parents). They were keen to learn, and though I tried to be consistent when trying to teach them, I would often accidentally use an alternative method to open a file and end up confusing them.

Now, however, I have come to the conclusion that the different ways of achieving your goal enable you to be more efficient at what you are doing.

If I have accessed a file recently, then chances are good that I will want to use it again. So I have a recent documents menu on the start menu. Now the recent documents menu might keep the file long enough to serve its purpose, but other, more recently used files will inevitably demote the file until it no longer appears on the recent documents menu. If I still need a file after it has been bumped off the recent documents menu, it might still be in the recent files menu of the application that it's accessed in. If it gets bumped off of that menu, then I'll set up a shortcut to it.

Now I try to keep a tidy computer, because it keeps my computer use efficient. I try to keep my desktop as clear as possible. I'll use the desktop as a temporary storage bin - an inbox, but I'll try to clear it out as soon as possible.

The reason is, items tend to gather on a desktop in no particular order. Even if the desktop is arranged in such a way that you remember what went where, an accidental sorting of the icons or a resolution change can move the icons out of place; then your spatial memory is no longer accurate, and the shortcuts are no longer as efficient as they were.

So, as you may have gathered, I won't keep my shortcuts on the desktop. If I have the screen real-estate, I will have up to two toolbars on the taskbar to contain my shortcuts - the quick launch toolbar for shortcuts to programs and individual files, and, if I have the space, a "folders" toolbar which contains shortcuts to the folders that I access most frequently. (If I don't have the space for two toolbars, I'll keep everything in the quick launch toolbar). Each of these toolbars will contain a hierarchy, so as not to have massive, sprawling lists to wade through.

These toolbars usually end up containing the files and folders I access 99% of the time. I'll occasionally have a quick spring clean and delete anything that has become obsolete, but usually the hierarchy keeps things from getting too messy.

Next post I'll talk about customization in general.

Wednesday, 18 June 2008

Early Out

A quick post on multiple return points. Some people will argue that they are a Bad Thing(TM), but I would like to post my reasons for using them in my code. First of all, what do I mean by multiple return points? I mean that a function has more than one return statement in it.

function quiteMessy(param1, param2, param3)
    if (precondition)
        return 0;

    for (i = 1 to 10)
        if (something(i))
            return i * param1;

    if (condition(param2))
        return param2 * param3;

    val = doSomethingWith(param3);
    return val;

In this code, there are no fewer than four return points. The trouble happens when you have temporal coupling (requiring that things happen in a certain order).

If this code was written with the requirement that doSomethingWith() was called before exiting, and then the precondition / return 0 code was written afterwards, then the requirement is not met. So yes, in this case, it was wrong to have the multiple exit points.

But I would argue that there are good reasons to write code with multiple exit points.

Yesterday's post recommended that a function should only do one thing. If, when calling the quiteMessy() function, we expect to calculate a value and call doSomethingWith(), then that's two things, not one. If we put the two things into separate functions, we would remove the temporal coupling within the quiteMessy() function.

Ok, so reducing the amount we do in one function makes it easier for us to return early on in the function, but I still haven't given you my reason for multiple exit points in a function.

My reason is to improve readability of the function. I tend to base my functions on two possible patterns.

The first is that the entire function's purpose happens at the end of the function. I see the rest of the function as a crescendo, leading up to the one thing that the function does.

If I was going to return early from a function, I'd try to do it as early as possible. So I end up with some kind of precondition at the top of the function (check if it has already been done, check if it's valid to do this now, etc), followed by any setup code which is necessary for the build up to the final action of whatever it is that I want the function to do.

function calculateSomething(param1)
    // preconditions
    if (nonExistent)
        return null;

    if (invalidParam(param1))
        return null;

    // setup result
    foo = preprocess(param1);
    bar = barify(foo);
    baz = bar.getBaz();

    // return result
    result = baz.getValue();
    return result;

In the code sample above, even though there are three return points in the function, I've kept within the guidelines of the pattern. The first two return points are within the first two statements of the function - the preconditions; there are no return points during the setup of the result, and the main return point is on the last line of the function.

If that sample was written without multiple return points, I would have to nest the calculation within both preconditions.

function calculateSomething(param1)
    result = null;

    if (exists)
        if (validParam(param1))
            foo = preprocess(param1);
            bar = barify(foo);
            baz = bar.getBaz();

            result = baz.getValue();

    return result;

It no longer feels as though calculating the result is the important thing. I prefer to keep my focus at the top level of the function, rather than nesting several layers in. Nested code is a deviation from the straightforward path of the function, and makes the code more difficult to understand.

The second pattern I base my functions on is doing the thing that I called the function for at the first opportune moment.

One possible example of this would be finding a value.

function findThing(param1)
    for (ix = 1 to itemCount)
        candidate = item[ix];

        if (candidate == param1)
            return candidate;

    return Not_found;

Searching through the items in this code sample, I would like to return the result as soon as I find it, but I also have a return point at the end of the function in case the item wasn't found. Without multiple return points, I would have to store the result in a temporary value before returning it - what a waste of time that would be!

Another example of this pattern would be if I had a cached value, or both quick and slow methods of getting the result. (Presumably the quick method has drawbacks, otherwise you'd never use the slow method!).

function calculateOptimal(param1)
    cachedResult = cached(param1);
    if (cachedResult != null)
        return cachedResult;

    if (canCalcQuickly(param1))
        return calcQuickly(param1);

    // otherwise
    return calcSlowly(param1);

I dread to think what the single return point version of this code would look like!
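Actually, for comparison, here's roughly what the single-return version might look like, sketched in Python with stub helpers so that it runs (the stubs and their behaviour are entirely my own invention):

```python
# stub helpers, purely to make the sketch runnable
def cached(param):
    return None               # pretend nothing is cached yet

def can_calc_quickly(param):
    return param < 100        # pretend small inputs have a fast path

def calc_quickly(param):
    return param * 2

def calc_slowly(param):
    return sum(param for _ in range(2))   # same answer, the long way round

def calculate_optimal(param1):
    # single return point: every case has to funnel through 'result',
    # and each fallback adds another level of nesting
    cached_result = cached(param1)
    if cached_result is not None:
        result = cached_result
    else:
        if can_calc_quickly(param1):
            result = calc_quickly(param1)
        else:
            result = calc_slowly(param1)
    return result
```

Even with only three cases, the flat "try this, else try that" reading of the early-out version has turned into a staircase.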

Returning early from a function is fine, as long as it makes the code easier to read and understand.

One task, One place

Today's advice is nicely summed up by the saying "A place for everything, and everything in its place." Like any of my advice, you can choose to ignore it, or even contradict it. If anyone has advice I like better, I'll follow their advice instead.

The DRY principle (don't repeat yourself) says that any complex task should only be carried out in one place.

Why? The end user doesn't care if you repeat yourself. The computer certainly doesn't care if you repeat yourself. In the end, it turns out that the people who care whether you repeat yourself are the developers of your code (including yourself). Again, it boils down to programming for people.

Duplication is a Bad Thing(TM) for several reasons. The most obvious reason is that if you have some duplicate code, and discover a bug in that code, you could fix it in one of the places and completely forget about the other places. So in fact, the bug isn't fixed. Now if the duplicate code had been written as one function that was called from the places where that code was needed, instead of duplicated, the bug fix would only need to go in one place.

(I'm going to ignore the advice and repeat myself here because this is important!) The bug fix would only need to go in one place! When you've come up with a fix, there's no need to trawl through the codebase looking for all the places that the fix needs applying because the places that need this code all pass through the single place that you have applied the fix.

The second reason to avoid duplication is that duplication makes code more difficult to read and comprehend. Which is easier to understand - 10 lines of code, or 20 lines of code? For the sake of argument, let's assume that each of the individual lines is as understandable (or obtuse!) as each other line.

Fewer lines of code means less to understand, which means less to misunderstand! Sure, if someone takes the time to pick apart the code line by line, they might eventually understand it. But a first glance isn't going to give us as quick an overview if there is twice as much code there as is really necessary. Also, if you wrap the duplicate code up into a single function, even the function name can help comprehension. Compare


Sample1:

function bigFunc()
    baz(bob.foobar(X, Y, Z));
    abc_def(fred, bob, saz.ABC);
    baz(jane.foobar(X, Y, Z));
    abc_def(sam, jane, saz.ABC);

vs Sample2:

function bigFunc()
    arrangeMeeting(fred, bob);
    arrangeMeeting(sam, jane);

function arrangeMeeting(Person p1, Person p2)
    baz(p2.foobar(X, Y, Z));
    abc_def(p1, p2, saz.ABC);

Now the arrangeMeeting() function may still not make much sense, but at least we now know that this chunk of code is meant to arrange a meeting between people. If we are interested in the details of what bigFunc() does, we only have to read through the horrible mess of foobar(), baz() and abc_def() once (if at all). And of course, any mistakes in the meeting code will only need to be corrected once, in the arrangeMeeting function.

The two samples lead quite nicely to my second point today, which is:

In any one place, only one task should be carried out.

Instead of having to digest the entirety of the bigFunc() in sample1, (and possibly getting indigestion), the code in bigFunc() in sample2 is much easier to digest. The code that deals with arranging meetings has been moved off into its own function, and we are left with much smaller, bite-size chunks of code to deal with.

If you find that you're writing one comment about several lines of code, several times in a function, then each of those chunks of code could well be ripe for turning into a function of its own.

A program should be like a well structured document, with an overall view, chapters (modules), headings (classes), and sub headings (functions), and it should be easy for a reader to drill down to the place that they're interested in. You can go a long way towards achieving this by having a place for everything, and keeping everything in its place.

Tuesday, 10 June 2008

Open Source User Interfaces

Why can't open source user interfaces be, well, more consistent?

Don't get me wrong - I love open source; but inconsistent interfaces really tarnish the experience for me.

Computers, operating systems, software and interfaces are all a means to an end. That end might be to write a letter, check your email, or play a game, but generally speaking, getting on the computer isn't the end itself.

A good user interface lets us get on with what we want to do in a timely manner. If the interface is consistent with other, similar applications, then we can dive straight into a task without having to learn a new interface. Conversely, a poor user interface can be a real impediment between the user and the task that they're trying to perform.

The developers of firefox have the right idea. The software is cross platform, but feels like an app that was developed for the OS. From the point of view of an MS Windows user, the menus look and behave like real MS Windows menus. The open and save dialogs are the MS Windows common dialogs. In KDE, the app looks like a KDE app. On a Mac, the app looks like a Mac app. That's because the developers of firefox have spent time putting on the UI polish.

Once you have the consistency, you get more efficient at performing the mundane steps between starting up the computer and performing the task that you switched on the computer for in the first place.

Common actions include opening a file, saving a file, closing a file, closing an application. There are often several different ways of performing each of the common tasks - including clicking a button, clicking a menu, using a keyboard shortcut (Ctrl+O), or using accelerator keys (Alt, F, O).

Take away the consistency, and in a small but significant way, you are alienating your users. The beginners, who have only learned one of the methods, get confused and discouraged, and unless you are lucky will leave your program for one that seems more familiar. The power-users will get annoyed that they can't use their tried and tested shortcuts in achieving their goals.

With cross-platform software, there are (at least) two possible approaches to UI design. The first is to make the UI consistent across all platforms. The second is to make the UI on each platform consistent with that platform.

The first approach is probably easier from a developer's point of view, but probably not the best from the end user's point of view, because most end users don't work cross platform.

The Gimp is a nice, free (open source) alternative to photoshop; but every time I try to get into it, I feel lost. Ok, so the menus look a bit like my native Windows menus, but using the file dialogs is like stepping into a foreign country. Nothing is familiar. What is generally an automatic process for me (navigating to a folder and providing a filename) suddenly becomes something that I'm forced to learn again from scratch.

The Gimp uses GTK to provide its user interface. The GTK library is cross platform, so the developers of an application don't have to worry about making the interface work on different platforms. But it would be nice if the GTK library could use native UI elements instead of emulating them.

Monday, 9 June 2008

Data Driven

Is your computing experience software driven or data driven?

If someone's approach is software driven, the software will be started up, then the user will press the "open" button, then use the file dialog browser to navigate to their file, and open it up.

Someone whose approach is data driven will navigate to the file, and double click it, and the file will (hopefully) open in the relevant application.

For me, data driven is the more efficient approach to working with a computer. It's a more natural approach. The end user doesn't decide "I want to use Microsoft Word to edit a letter". They decide "I want to edit the letter to the school," and it just so happens that Word is the software that gets used to edit the letter.

In a maximised explorer window, in a details view, I can usually navigate to the file that interests me within a few seconds. (You can fit more files in a window if you use list or small icon view but then your eye has to search in two dimensions, while alphabetical order only really works in one dimension - which is why I find it most efficient to navigate in a details view.)

If I were to try to open the same file using the application's open-file dialog, it would easily take me twice as long to open the file that I'm interested in.

Occasionally a piece of software will add some useful mechanism to speed up your file loading - the automatic loading of your most recent file, a recent file list, a frequent folders list, or a dialog that you can maximise - but the trouble is, by their nature, software-driven data-loading mechanisms are specific to the software and you can't take advantage of them across all of your software and all of your data.

With a data driven approach, you can set up shortcuts to your frequent folders, and use those shortcuts again and again, regardless of the software that will be used to edit your data.

To me, there's no contest. Data driven beats software driven every time.

Friday, 6 June 2008

C# Attributes

Just a short post today. Though I'll try to stick to generally applicable programming techniques / articles / entries, I'll occasionally go language or technology specific.

I'm loving working in C# - I only really started C# programming about a year ago and I'm learning more and more about it and really enjoying it.

When I discovered the Property Grid, I thought it was fantastic. The property grid is used to expose properties to the user. The user can view or edit these properties, depending on whether get and set methods are defined. You can do a whole host of things with the property grid, such as putting properties into categories with the Category attribute, specifying default values with the DefaultValue attribute, and you can even specify that a property isn't shown in the grid with the Browsable attribute.
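As a quick sketch of how those attributes fit together (the LampSettings class and its property names are my own invention, but the attributes are the real System.ComponentModel ones):

```csharp
using System.ComponentModel;

public class LampSettings
{
    // Grouped under an "Appearance" category heading in the grid,
    // and shown in bold if the value differs from the default.
    [Category("Appearance")]
    [DefaultValue(100)]
    public int Brightness { get; set; }

    [Category("Appearance")]
    public string ColourName { get; set; }

    // Hidden from the grid entirely.
    [Browsable(false)]
    public int InternalId { get; set; }
}

// Elsewhere, on a form containing a PropertyGrid control:
//   propertyGrid1.SelectedObject = new LampSettings();
```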

Pretty soon after learning about the property grid, I was using it for everything. Then one day I decided that it would be useful if I could dynamically change what was shown in the property grid at runtime.

I spent a whole day looking for information on how to change attributes at run time. I learned about reflection, I learned about creating attributes, I learned about applying attributes. But I couldn't find out how to change attributes dynamically.

I could have saved a whole day if I'd learned at the start that attributes are compiled into the executable and can't be changed on the fly.

So I'm posting this so that there's a million to one chance that if somebody else is searching for how to change attributes dynamically, they get to read this early on and don't waste the amount of time I did.
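To illustrate the point, here's a rough sketch (class and property names invented) of what reflection actually gives you: the attribute data is baked into the assembly's metadata, and there is no corresponding "set" to change it at runtime.

```csharp
using System;
using System.ComponentModel;
using System.Reflection;

public class Foo
{
    [Browsable(false)]
    public int Bar { get; set; }
}

public static class AttributeDemo
{
    public static void Main()
    {
        PropertyInfo prop = typeof(Foo).GetProperty("Bar");

        // GetCustomAttributes reads the metadata compiled into the executable.
        BrowsableAttribute attr = (BrowsableAttribute)
            prop.GetCustomAttributes(typeof(BrowsableAttribute), false)[0];

        // attr.Browsable is read-only; there's no API to rewrite the
        // compiled metadata on the fly.
        Console.WriteLine(attr.Browsable);
    }
}
```

(The usual route to a dynamic property grid is a custom type descriptor, rather than trying to change the attributes themselves.)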

Update: 5th July 2008
I've posted some more information on dynamic property grids, in case you were looking for it.

Thursday, 29 May 2008

Evolving features

Yesterday's post ended with a teaser - why store the result of a function before returning it? The answer is (unsurprisingly, given the nature of this blog) to make it easier to access for a human. Specifically, when somebody (most often myself) is debugging a function that I have written, it's much nicer to be able to see the result of a function call immediately in the debugger or watch window, than having to step into the function and see what the result will be.

Today I'd like to post a response to Abhinaba Basu's The WOW factor in software. There are quite a few blogs that I'm reading at the moment, Abhinaba's being the most recent I've added to my list, and I'm enjoying reading it.

Abhinaba talks about the "nifty" features that turn good software into great software. Then he mentions

"the other category of software which doesn't work in first place and tries to be smart on top of it. There's nothing worse than this. You look at these in disgust and head over to the dumber but working competition."

Now the end-user doesn't care about how many hours a programmer has put into a project, lovingly crafting it, starting from an empty source file, and pouring not just time but their heart and soul into a piece of software. The end-user doesn't care that the software is like one of the programmer's children. No, the end-user wants a piece of software that will do what he wants it to do, and won't crash or lose his work. And Abhinaba is right when he says that the user will look at these in disgust and head over to the working competition. But I get the impression that the message is "get the thing working before adding the polish".

My point (I knew we'd get to it sooner or later) is that when a user sees this happen, they would assume that the programmer has been working on the nifty features to the detriment of the core functionality. But I don't think this is true.

Programmers don't like their creations to have bugs in them. The bugs are a poor reflection on themselves. They don't deliberately leave bugs in, add some sparkle and hope nobody notices the flaws. Software is released when the creator thinks it is ready. Unfortunately, the software isn't always ready and as we all know, bugs are always ready to rear their ugly heads, right when we least expect it.

In an ideal world, maybe all the core functionality would be written and tested, and there would still be budget for the extras. In reality, what often happens is once the core functionality is in, the product is shipped and we're off onto the next task. If features aren't evolved alongside the rest of the project, then they'll often fall by the wayside.

Recently I was writing a tool for drawing flow diagrams. Now basic functionality was there - lines could be drawn from one shape to the next. Straight lines. No routing. If you wanted to go around corners, you could add nodes - one at a time - to the line, and drag these nodes into the correct place. This was painstakingly slow, but it was functional - it provided all the functionality required to describe flows.

But then in my own time I spent a whole day putting in automatic routing for lines. Now lines will have a sensible number of nodes, in sensible positions, by default. This makes the tool a lot more natural to use. Is it required for core functionality? No - it's a feature. Is the software finished? No - there are still plenty of bugs to be ironed out and core work to be done. It's constantly being updated. But before I added automatic routing, it took ages to draw a flow. Now it takes minutes.

Shock! Horror! Working on features when there's still core work to be done! But if I hadn't written these features before the core work was finished, then they wouldn't have been added at all - it would have been time to move on.

If you want your software to stand out from the rest, you need the wow factor. If you want the features for wow factor, then you need to be working on them NOW. Otherwise, they'll be left behind - as will your software.

Wednesday, 28 May 2008

Dots and Double Dots

I love the phrase "train wreck". To clarify, I'm not talking about real life transport accidents here. I'm talking about the phrase used to describe code.

I think I first discovered the phrase in Code Complete, though I can't pin down the exact quote.

Basically it refers to code that refers to members of members of members of members... so for example

Ammo GetCurrentAmmo()
{
    return PlayerManager.GetPlayer(1).GetInventory().GetWeapons()[0].GetAmmo();
}

(I guess the dots "." are the coupling between carriages and the method names are the carriages. Or something).

The phrase seems to say "some terrible disaster has happened here". When there are that many decompositions in a single line of code, if the disaster hasn't already happened, it soon will.

One accident (or several) waiting to happen here is that one of the functions along the way could return a null reference. If any of the intermediate elements is an invalid value, then this piece of code is going to crash.

So at the very least, we should be testing for null values. Now, say we wanted the inventory of player 1, and assuming we know that the PlayerManager is valid, one way of writing this (with null value testing included) is:
if (PlayerManager.GetPlayer(1) != null)
    inv = PlayerManager.GetPlayer(1).GetInventory();

However, I'm not happy with that piece of code. I'm an advocate of Programming for People. But a person reading that code sample would have to do more mental decomposition than necessary. Which I would find impolite. Here's what I mean by mental decomposition:

First we read "PlayerManager" (right, so I've got this player manager), then we read ".GetPlayer(1)" (right so we've got player 1 of the player manager). Then we test this for null with " != null". (Ok, test if the player manager's player 1 is null).
Once we've got past this line of code, it would be nice to assume we could throw away our mental stack, and start with a clean slate, if you will. So we've passed the test in the "if" statement. Then we're going to assign to our inv variable. We'll take the playerManager, we'll get player 1 of the player manager, then we'll take the inventory of player 1 of the player manager. Ouch - my head hurts.

I would nearly always write that previous code sample as the following:
Player player = PlayerManager.GetPlayer(1);

if (player != null)
    inv = player.GetInventory();

Now, when I read that code, I see that Player 1 of the player manager is being assigned to a local variable - so I know this player is of interest. So I'll keep this player in my mental stack. Then I'll test that "this player" is not null. Then if I pass the test, I'll get the inventory of "this player". And this time, I had less mental decomposition.

And if nothing else, we've given the code more room to breathe! The train wrecks of the first example are hard on the eyes - it's hard to see where one element ends and the next begins. With the rewrite, the test and assignments are in much smaller and easier-to-digest chunks.

Rewriting the entire example, I would almost always write out the original train wreck as follows:
Ammo GetCurrentAmmo()
{
    Player player = PlayerManager.GetPlayer(1);

    if (player == null)
        return null;

    Inventory inv = player.GetInventory();

    if (inv == null)
        return null;

    Weapon[] weapons = inv.GetWeapons();

    if (weapons == null || weapons.Length == 0)
        return null;

    Weapon currentWeapon = weapons[0];

    if (currentWeapon == null)
        return null;

    Ammo currentAmmo = currentWeapon.GetAmmo();
    return currentAmmo;
}

The Law of Demeter says that you're only allowed to talk to your immediate neighbours. Strictly in the example above, I think I am allowed to ask the PlayerManager to do something, but I'm not allowed to ask the player to do something. Or the inventory. Or the weapons. And so on. But there are always exceptions to rules.

In this case, I think that if the GetCurrentAmmo function lives in the class where it makes the most sense to know about getting the current player's ammo, then I'm fine with the multiple levels of indirection here. Otherwise, each of the intermediate classes between this class and the Weapon class would need a "GetCurrentAmmo" forwarding function, which I feel would be a lot of extra code for little gain.
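For comparison, here's roughly what that forwarding-function approach would look like (a sketch only, using the same invented class names; null checks omitted for brevity):

```csharp
// Each class only talks to its immediate neighbour, forwarding the
// request one level down the chain.
public class PlayerManager
{
    public Ammo GetCurrentAmmo() { return GetPlayer(1).GetCurrentAmmo(); }
    public Player GetPlayer(int id) { /* look up the player */ return null; }
}

public class Player
{
    public Ammo GetCurrentAmmo() { return GetInventory().GetCurrentAmmo(); }
    public Inventory GetInventory() { /* ... */ return null; }
}

public class Inventory
{
    public Ammo GetCurrentAmmo() { return GetWeapons()[0].GetAmmo(); }
    public Weapon[] GetWeapons() { /* ... */ return null; }
}
```

Three extra methods just to fetch one piece of data, which is exactly the "lot of extra code for little gain" trade-off.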

One last note on the code example - I've stored the result of the final "GetAmmo" function call before returning it. I'll explain why next time :)

Whenever I see the same "dot" phrase being used more than once in a function, I cringe. If the dot phrase happens to be a "double dot" phrase, I'll cringe even more. If a phrase repeats when it has more than 2 dots in it then I'll be sad that the person who wrote that code wasn't Programming for People.

Tuesday, 27 May 2008

Crossing the Road

I was crossing a road after just walking past a woman with 2 kids - one kid in the pram she was pushing, the other on foot.

The kid on foot ran across this road, without looking, and a taxi driver had to slam on his brakes to avoid hitting the kid.

The woman yelled "Slow Down, Asshole!".

I was bemused by this; and you know how you always think of the right thing to say AFTER it would have been the right time to say it?

Well I thought to myself, what I should have said in that situation was "Just because your child runs across the street without looking, it's no excuse to call him an asshole!".

I wish I'd said it at the time. The story makes me smile, even now.

Sunday, 25 May 2008

Programming for People

I believe that the best piece of programming advice I ever read was "Write your code so it's easy to read."

I've been programming for about 20 years, and I've read advice like "always comment your code" and "choose sensible variable names", but it never really sunk in. I didn't understand the reason behind this advice. Now in the last couple of years I think I finally understand the reason why: Code is written once, but read many times.

It might take you a long time to write a good, solid routine that performs the exact function that you want. And it might take you even longer to write the code so another person can understand it, so you think "why bother?" But at the end of the day, the investment is worth it, because whilst you only write that function once, coders will read that function many, many times. And when I say coders will read that function, I'm also including you. You will re-read that function. Whenever you need to perform some maintenance coding, when you need to make a subtle change to the effect of the function, when you need to refactor your code - even when you're just looking through the code, trying to find "that routine that I wrote last week - I know it was something to do with..." - you'll appreciate the extra effort you put in to make the code easy to read.

Because I finally came to realise, that if I want to be a good programmer, then I need to write my code primarily for a person to read, and only secondarily for a computer to read!

That was probably a bit of a controversial statement there! But the way I see it, for all but the most trivial functions, if a piece of code is written without human-readability in mind, and there's a bug, it's gonna take a heck of a long time to track that bug down. But if the code was written for a person to read, then it will take a fraction of that time to find the bug, because less time is spent trying to understand the code - the code is amenable to human understanding, and so time can be spent, more effectively, hunting down that bug.

In conclusion:
  • Code is Write Once Read Many
  • Write the code so it's easy to read
  • Code primarily for a person to read, and secondarily for a computer to read.

Blog Motivation

I've made several attempts at blogging, with varying degrees of success (in terms of readership, content, schedule, etc). Previously however, I haven't really had a focus for my blogs.

Right now, I would like to write what I would hope would be interesting, informative, entertaining posts on topics that appeal to me. Hmm. Sounds like the description of any blog in the world. D'oh.

Ok, let me try again. In the last couple of years I've become a keen reader of technical blogs on the internet, and I think I'm a better programmer because of them. So now I'd quite like to have a go myself.

So topics (as the title of the blog suggests) will probably have a focus on programming, but I may slip in anecdotes which I find amusing from time to time as well.

I have no idea what the schedule might be. I'd quite like to commit to posting regularly. I've been noting down topics over the last few days and hopefully I'll be able to come up with new content regularly too (since the downfalls of my previous blogs seem to have been lack of inspiration).

Enough rambling. On with the blog.