Monday, April 16, 2007

Don't Soft Code

You have probably been taught not to hard code parameters in your code. For example, rather than writing:

if (connections > 50) { /* do something */ }

You should probably write:

if (connections > MAX_CONNECTIONS) { /* do something */ }

and if you are feeling particularly ambitious, MAX_CONNECTIONS should be set by a config file, so that a sys admin or operator can change this value at deployment time.
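To make this concrete, here is one minimal way the "ambitious" version might look in Java. The file name, the property key, and the class name are my inventions for illustration, not a standard:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical sketch: read the limit from a properties file so an operator
// can change it at deployment time without recompiling.
public class ConnectionLimits {
    static int loadMaxConnections(String path) throws IOException {
        Properties props = new Properties();
        FileInputStream in = new FileInputStream(path);
        try {
            props.load(in);
        } finally {
            in.close();
        }
        // fall back to a default if the key is absent from the file
        return Integer.parseInt(props.getProperty("max.connections", "50"));
    }
}
```

The program would call loadMaxConnections once at startup, and the sys admin edits a line like max.connections=75 in the file.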

This has been a driving force behind my designs for some time now. I try to not hard code any values in my programs. Sometimes expediency leads me to do it anyway, but at least I feel guilty for doing it. Unfortunately, I have found that avoiding hard coding values is not the panacea that is promised. You now have code that is effectively split up (part in the code, part in the config file) which can make it harder to read/understand the code. I have also seen that if the config files get large/complicated enough, no one but the programmer is going to be willing to touch this file, and it really becomes just part of the program.

I am writing this now, because I recently read an article on about Soft Coding. The article makes a very good point - sometimes "Hard Coding" values is better than the alternative. Rather than reiterate why, I'll just tell you to go read the article. The main moral of the story though, is to remember that every design decision involves trade-offs, and it is important to consider the pros and cons of every choice. In the specific case of choosing when to hard code a parameter - you should hard code it if it will be at least as easy to maintain/change the code as it would be to change/maintain/understand the "soft coded" parameters. It is up to you, the developer, to make this determination.

Monday, April 9, 2007

Gone Programmin' (back next week)

The current TopCoder marathon match runs through this Wednesday and only the top 50 (out of 200 from last week) advance. I am currently in approximately 70th place, so I haven't put any time into this blog this week. The good news is that the TopCoder algorithm tournament first round was also this weekend, and I advanced to compete again. Next week, I will hopefully have something more interesting to say on my blog.

Friday, March 30, 2007

No, Really. Don't Get Stuck in a Rut.

The TopCoder Marathon Match that just finished was a poker problem. They defined a very simple game of poker and a probabilistic strategy their server would follow. My task was to write a program that played 10,000 hands of poker against a specific strategy and try to win. The problem description was couched in terms of trying to learn the server's strategy.

My solution observed the play of every hand and built tables which calculated the probability of the other player's actions. Using these probabilities, my program chose actions that maximized expected value at every point. When I tested this approach and allowed it to play millions of hands, this worked great. Unfortunately, 10,000 hands was not enough to get good enough information about the other player's actions in all situations.

I spent the rest of the match trying to figure out what short cuts I could make. Things like, "well I haven't seen this betting pattern exactly, but I've seen similar actions, so I will use what I saw in the similar (but not exact) situation as a predictor of the computer's strategy." The other thing I did was hard code a couple rules. I noticed that it took a long time for my solution to learn to fold the bad hands. Over the course of 10 million hands it would learn to fold about 30% of them. In the first 10,000 hands it only folded about 1% of them. I've played enough poker to know that not folding enough is the mistake of most weak players. To address this, I just hard coded the program to fold the weak hands.
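The counting idea can be sketched roughly as follows. This is not my actual contest code; the situation encoding and class names are hypothetical, and a real solution would also need the expected-value calculation on top of it:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: tally what the opponent did in each observed situation,
// then read off empirical probabilities. Unseen situations fall back to a
// uniform guess, which is exactly where 10,000 hands proved too few.
public class OpponentModel {
    // actions: 0 = fold, 1 = call, 2 = raise
    private final Map<String, int[]> counts = new HashMap<String, int[]>();

    public void observe(String situation, int action) {
        int[] c = counts.get(situation);
        if (c == null) {
            c = new int[3];
            counts.put(situation, c);
        }
        c[action]++;
    }

    public double probability(String situation, int action) {
        int[] c = counts.get(situation);
        if (c == null) return 1.0 / 3;   // never seen: assume uniform
        int total = c[0] + c[1] + c[2];
        return total == 0 ? 1.0 / 3 : (double) c[action] / total;
    }
}
```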

This combination of strategies was enough to let me finish in around 60th place. Since the top 200 advance this round, this is good enough.

So what does this have to do with ruts? Well it turns out that a number of the people who did better than me did not create learning strategies. Instead of trying to figure out how to observe and adapt appropriately, they spent their time coming up with an algorithm that would be good against all strategies. Some of them were able to come up with some very simple strategies which were, nevertheless, able to do better, on average, than my more complicated learning strategy. If we played millions of hands rather than 10,000, mine would almost definitely be better, but that wasn't the problem we had to solve.

I got stuck in the rut of figuring out a learning algorithm, I just never realized it. While it occurred to me to figure out a good fixed strategy, I only thought of that in the context of what my learning program should do until it learned the opponent. Since this felt like just a tweak to my approach, I never got around to doing it.

Moral of the story: sometimes it can be a good idea to try different ideas even if you don't realize you are in a rut.

Monday, March 26, 2007

Don't Get Stuck in a Rut

I was solving a practice TopCoder problem the other day, and was having difficulty. First the problem:

Imagine you are multiplying two positive integers A and B (A >= B) using long multiplication with no carry. For example, if A = 89 and B = 76:

        8   9
  x     7   6
  -----------
       48  54
+ 56   63
  -----------
  56  111  54

The input to your program would be the array {56, 111, 54}. You have to return A, in this case 89. If there are multiple A's that satisfy the input, you have to return the one that minimizes A-B. To make sure the problem is feasible they constrain it by saying that the result of A*B < 10^14.

I decided to approach the problem by noticing that the most significant digit (MSD) of the answer was equal to the MSD of A times the MSD of B. I would try each combination of digits that makes this work, and then move on to the next digit. I also noticed that the number of digits in the answer equals the number of digits of A + number of digits of B - 1. So I tried every combination of lengths of A and B that satisfied these constraints, when picking digits. I then returned the one that worked that satisfied the tie breaking condition (i.e. smallest A that is >= B).

This passed all the examples provided, so I submitted it. Unfortunately it timed out on some of the system tests. Since this was a practice, I had access to the system test and copied it locally. I ran this test locally and let it run for a couple hours. When I came back it still hadn't finished. Given that the program has to solve the problem in less than 2 seconds, this was a wee bit too slow.

I looked at my code and found some obvious optimizations. This improved runtime from hours to 10's of seconds. Better, but still too slow. I worked at it a while longer, but just couldn't figure out how to improve my code enough to get it to run fast enough. So I finally gave up and read TopCoder's solution to the problem.

Their solution: just try every value from 1 to sqrt(N) for B. Set A to be N / B. Check if it satisfies the constraints. You can check 10^7 choices for B in under 2 seconds, and their bounds guarantee that B won't be larger than that.
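A sketch of that brute force follows. This is my reconstruction, not TopCoder's reference code, and the names are mine. The key observation is that applying the carries to the no-carry columns recovers N = A*B, after which every candidate B up to sqrt(N) can be checked directly:

```java
import java.util.Arrays;

// Reconstruction of the brute-force approach (names are my own invention).
public class NoCarryProduct {

    // Apply the carries to the no-carry columns to recover N = A*B.
    static long recoverProduct(long[] cols) {
        long n = 0, carry = 0, place = 1;
        for (int i = cols.length - 1; i >= 0; i--) {   // least significant column last
            long v = cols[i] + carry;
            n += (v % 10) * place;
            carry = v / 10;
            place *= 10;
        }
        while (carry > 0) {
            n += (carry % 10) * place;
            carry /= 10;
            place *= 10;
        }
        return n;
    }

    // Long multiplication with no carry: one column per digit pair.
    static long[] noCarryMultiply(long a, long b) {
        String sa = Long.toString(a), sb = Long.toString(b);
        long[] cols = new long[sa.length() + sb.length() - 1];
        for (int i = 0; i < sa.length(); i++)
            for (int j = 0; j < sb.length(); j++)
                cols[i + j] += (long) (sa.charAt(i) - '0') * (sb.charAt(j) - '0');
        return cols;
    }

    static long solveForA(long[] cols) {
        long n = recoverProduct(cols);
        long bestA = -1;
        for (long b = 1; b * b <= n; b++) {            // B <= sqrt(N) guarantees A >= B
            if (n % b != 0) continue;
            long a = n / b;
            if (Arrays.equals(noCarryMultiply(a, b), cols))
                bestA = a;                             // larger b means smaller A-B, keep last
        }
        return bestA;
    }
}
```

For the example, the columns {56, 111, 54} collapse to N = 6764 = 89 * 76, and the loop finds A = 89.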

This solution is much easier to comprehend and code up than what I came up with. Yet, I didn't see it. I was stuck in a rut. This comes up repeatedly when trying to solve a problem, whether it is a small contrived problem like TopCoder or much larger "real-world" problems. Once you see an approach that looks like it might work, it can blind you to other approaches which might be better. This is really just another example of "Don't Want That", where what you need to stop wanting is your current approach.

How do you get out of this rut? The first thing that has to happen is that you have to notice you are in a rut and be open to the idea that other solutions may exist. Given this, how do you find these other solutions? One of my favorite ways is to present the problem I have to someone else without telling them my current approach. A fresh mind often comes up with a fresh solution. What if you don't have someone to discuss it with or, as in a competition, you can't discuss it with people? Then you have to try and clear your mind of everything you know. Start at the problem from scratch, considering every salient fact, paying particular attention to any that your previous approach ignored. Try to come up with the most off the wall things you can think of and pay attention to the reasons they don't work. If you have the luxury of time, put the problem down and don't think about it for a while.

Unfortunately, sometimes it is hard to describe a problem to a fresh person without sullying it by hinting at our own approach. And I often find that no matter how hard I try to come at a problem from a new angle, my mind just slips back into the old rut. So how do you get unstuck from a rut?

Monday, March 19, 2007

This time it was good enough

The submission that I talked about last week would've placed around 230th (the top 500 advance). I didn't let it be though. I worked on it further, and ended up finishing right around 100th place. The official results aren't finalized yet, so I don't know my exact place.

So was the effort I put in to raise my result worth it, given that I would've advanced either way? Can I consider it good practice for the next round (which starts Wednesday) where only the top 200 advance?

As you might've figured by these last two posts, I am in TopCoder mode at the moment. This seems to happen once or twice a year during tournament time, so you'll have to bear with me. I am justifying it by saying it fits with the theme of my blog because these are programming contests.

Monday, March 12, 2007

Is It Good Enough?

This week the first round of the Marathon Match for the TCO '07 started. In a "marathon match" they give you a problem and you have a week to come up with the best solution you can. It's usually fairly easy to come up with a working solution, the trick is to come up with a good one. Unlike the TopCoder algorithm matches, there isn't a right answer, rather your program is judged based on the quality of answer it provides. For example, if you were trying to solve the travelling salesman problem, you might be judged based on the total distance your route takes.

My dilemma is how much time to spend on this problem. I love these sorts of contests but this week - Wednesday through Wednesday in this case - is fairly busy for me and I don't have a lot of spare time. In this first round 500 people advance to the next round. So I don't need the best program, I just have to be good enough. I've spent a couple hours coming up with and coding a solution and as of the time of this writing, I am in 132nd place. Is this good enough? Will 369 people come up with better solutions than mine in the next 2.5 days? I don't know.

There is something satisfying about the TopCoder algorithm matches, where your program has to pass every single input exactly or else you get no credit. It is similar to many school homework assignments where you have well defined requirements to satisfy. In these cases it is much easier to know when you are done. If it satisfies the requirements, it is done. Unfortunately, in the real world, problems are more like the marathon match problems. Programs aren't right or wrong, rather they are better or worse. You can always improve things if you spend more time.

So how do you know when to release your code? Does your software escape leaving a bloody trail of designers and quality assurance people in its wake? Perhaps it is never finished - it simply stops in interesting places.

Given that there are always other projects to work on, I think this is a very important question to answer, whether at the class level, the component level, or the entire project level. The ability to know what is good enough is one of the differences between companies, teams, and individuals that succeed and those that don't.

Sunday, March 4, 2007

Will I be the next ex-blogger?

Shortly after I started this blog, UserFriendly - a comic I read - had this thread on blogging. I didn't know if I should take it as an omen or not. I decided to be positive and continue with the blog. Now, two weeks in a row, I find that I haven't found the time to write the week's post. Rather than haphazardly dash something off now, I will just settle for this apology, and hope the link to a tech related comic strip amuses some of you.

Sunday, February 25, 2007

Is Decomposition Readable?

Last week I claimed that you could make code more readable by decomposing methods into smaller, well-named methods, which are effectively self-documented. This week I am going to challenge that claim.

On the surface, the technique I showed last week looks good. Each individual method is short enough that you can grasp all of its semantics at once. It's easy to see if each method does what it is supposed to do. With such short, well-defined methods, bugs should be scarce and easy to find, right?

Unfortunately this isn't true. As I found out last week when I was testing the code, there was a bug in it. The bug was in the original version, so at least I hadn't introduced a new bug. However, when I asked myself honestly which version would be easier to debug, I wasn't sure. The problem with such decomposition is that your code is now scattered about. If you want to trace through code you have to now jump around many methods. While each individual method is short, the algorithm as a whole may now be too large to fit on your screen at once. This makes it harder to debug.

The problem is that decomposition doesn't actually solve one of the problems of programming - subtle details matter. What happens when you choose the smallest or largest element as your pivot? What happens if your split point is the first or last element in the array? Boundary conditions always matter, but when you decompose a problem, now the subunits have to handle the boundary conditions in a consistent manner. So while decomposition can make some bugs easier to avoid, it can add an insidious subtle new bug of inconsistency.

Note that this problem is not limited to decomposing a single function like I did last week. This problem is endemic to component based software, whether your components are methods, objects, or entire software products. Even if every component is bug free, the system will still contain bugs if the components have different assumptions. From experience I can say that some of these bugs can be doozies.

Despite the objections I have just raised, I still think decomposition is a good thing. However, it is no silver bullet and you have to watch out for its pitfalls.

Monday, February 19, 2007

Writing Readable Code

Writing readable - and therefore maintainable and debuggable - code is one of the challenges of the professional programmer. There seem to be two schools of thought on how to do this. One is to insert inline, or even block, comments wherever code might be unclear or where it might be helpful to know what the developer was thinking. The other school says that the code is its own documentation.

While the second attitude seems obnoxious, the first attitude tends to not work in practice. Either comments aren't written ("I'll add them in when I'm done and know I won't change the code") or they restate the obvious (x = 4; // assign 4 to x) and through their prevalence actually make the code harder to read. I'm not saying good comments can't be helpful, but it is hard to write good comments, and even good comments can become bad if the code changes and the comment doesn't change with it.

The solution is to strive to write code that is easily readable without comments. To demonstrate, I will show some code I randomly found online and I will show how it can be made more readable. Since that will make this post long enough, I will save my analysis on the tradeoffs of my approach for next week.

First, the "before" code:

void sort(int a[], int lo0, int hi0) {
    int lo = lo0;
    int hi = hi0;
    if (lo >= hi) {
        return;
    }
    int mid = a[(lo + hi) / 2];
    while (lo < hi) {
        while (lo < hi && a[lo] < mid) {
            lo++;
        }
        while (lo < hi && a[hi] >= mid) {
            hi--;
        }
        if (lo < hi) {
            int T = a[lo];
            a[lo] = a[hi];
            a[hi] = T;
        }
    }
    if (hi < lo) {
        int T = hi;
        hi = lo;
        lo = T;
    }
    sort(a, lo0, lo);
    sort(a, lo == lo0 ? lo+1 : lo, hi0);
}

As you can see, it is a sorting algorithm. In particular, it is an implementation of QuickSort. Is it correct? If it wasn't an algorithm you were familiar with, how long would it take you to figure out what it does? If you found a bug in it, how confident are you that you could fix it without introducing new bugs? These are the problems that come up when coding clever algorithms. Obviously these issues could be partially mitigated with some inline comments explaining the purpose of the various blocks of code. Rather than show that, I will show an alternative, which is to make the comments be the code.

Basically you take each section of the algorithm that you would comment and you make it a method which is named descriptively:

void quickSortRange(int a[], int low, int high) {
    if ( ! indicesInRange(low, high) ) {
        return;
    }
    int pivot = getPivot(a, low, high);
    int splitPointIndex = adjustElementsAroundPivot(a, low, high, pivot);
    sortSubRanges(a, low, high, splitPointIndex);
}

When you look at this, it is easy to see what is happening. As long as the sub-methods do their job correctly, you can be confident this works. If it's too complicated to describe what a method does with just a name, the description can go in the method header, which is much less disruptive than cluttering up the code inline. Modern IDEs will even display this comment in a context sensitive way for the maintenance developer.

So now to define the sub methods:

boolean indicesInRange(int low, int high) {
    return low < high;
}

int getPivot(int[] a, int low, int high) {
    int middleIndex = (low + high) / 2;
    return a[middleIndex];
}

int adjustElementsAroundPivot(int[] a, int low, int high, int pivot) {
    while (indicesInRange(low, high)) {
        low = findFirstBadLow(a, low, high, pivot);
        high = findFirstBadHigh(a, low, high, pivot);
        swapBadElements(a, low, high);
    }
    return getSplitPoint(low, high);
}

void sortSubRanges(int[] a, int low, int high, int splitPointIndex) {
    quickSortRange(a, low, splitPointIndex);
    int newLow = (low == splitPointIndex) ? splitPointIndex + 1 : splitPointIndex;
    quickSortRange(a, newLow, high);
}

As you can see, sub-methods should be implemented with the same idea. That is, rather than containing complicated code, they should call out to their own sub-methods.

int findFirstBadLow(int[] a, int low, int high, int pivot) {
    while (indicesInRange(low, high) && isLowOk(a, low, pivot)) {
        low++;
    }
    return low;
}

int findFirstBadHigh(int[] a, int low, int high, int pivot) {
    while (indicesInRange(low, high) && isHighOk(a, high, pivot)) {
        high--;
    }
    return high;
}

void swapBadElements(int[] a, int low, int high) {
    if (indicesInRange(low, high)) {
        int temp = a[low];
        a[low] = a[high];
        a[high] = temp;
    }
}

int getSplitPoint(int low, int high) {
    return Math.min(low, high);
}

boolean isLowOk(int[] a, int low, int pivot) {
    return a[low] < pivot;
}

boolean isHighOk(int[] a, int high, int pivot) {
    return a[high] >= pivot;
}

When code is written this way, inline comments become much less necessary. If you change how the code works, just make sure you rename methods appropriately.

This has gone on long enough for one post, but I think this example provides plenty of fodder to think about. At least it does for me. Next week, I will analyze this example in further detail including the bug that is in both versions of the code.

Sunday, February 18, 2007

President's Day Delay

Due to the 3 day weekend, there will be a one day delay on this week's post.

Sunday, February 11, 2007


This is my first post based on a request. Joshua wants a post about hockey to go with the picture on the front page. I will try to tie this into something technical in a bit, but first a bit about my lunch times.

There is a group of guys at work (there used to be a woman who played too, but she left to work elsewhere) who play street hockey just about every noon, whether it's below freezing or in the 90s. I had never played any form of hockey before, but it looked like fun and I figured I could use the exercise, so a few months after I started at the lab I joined them. I was completely awful when I first started, but after some months I got good enough that I wasn't a complete drag on whatever team I played with. I am now one of the regulars and find that playing is a lot of fun and good cardio exercise, though probably not the best thing if you are afraid of pain.

So how does this tie into anything technical, to go with the theme of my blog? Squint your eyes and bear with me.

This past September I was a finalist for the Google Code Jam. The Daily Press found out about it and wrote an article about me. They even sent a photographer out to get pictures of me. The picture you see on the front of my blog ran in the paper next to the main part of the story. They also ran a head shot of me on the front page where the story started (yes, slow news day). It was a great boost to my ego and I think it's a great action shot - that's why I use that picture on my page. But what I find interesting is what it says about the field of programming.

The article highlighted my hockey playing. Hockey is a sport that is held in such low esteem in the U.S. that when they cancelled the 2004-2005 season I'm not sure anyone noticed. Why is it that golf and bowling and even poker are regularly televised, but never programming? Why is it that whenever Hollywood portrays programmers it in no way resembles what I or anyone I know does?

It's because programming isn't accessible to the average person. When you watch NFL games, you can imagine tossing a football and probably have even done it at some point. Watching poker tournaments, you can compare how they play with how you think you would play. You can identify with these people - even if you don't play poker or football - so you enjoy it. If you've never programmed, it's very hard to identify with that, which is why the article highlighted other interests of mine.

I'll admit it can be frustrating to work in a profession that people can't identify with. But I guess it is appropriate given that what we do is make the translation between what people understand - programs like iTunes - and what they don't - the low level logic and commands that make it work. Besides, I can always take my frustration out on the hockey court.

Monday, February 5, 2007

Don't Want That!

When I was making the transition from the procedurally oriented programmer I was in college to the much more object-oriented programmer I am now, I had many issues. I wish I could remember the specifics, but one time at my first job I was trying hard to design the software I was working on in a good OO way. I came to a sticking point where I couldn't figure out how I should do one part of the design. I could see ways forward, but they were decidedly non-OO. I went to Chuck, a coworker with years of OO experience, for advice and wisdom. Instead of showing me an OO way out of my corner, he redesigned the entire component on the white board. I left his office very frustrated.

Just now, I am reading "On Writing Well" by William Zinsser. (Yes, I am trying to improve my writing.) There is a paragraph where he tells you the trick for fixing those problem sentences, the ones that no matter how you rearrange and reword them always sound awkward. His solution - delete the sentence.

Some years ago, when I was less experienced in Java than I am now, I was writing some code where I really wanted function pointers. I was complaining to Chris, who had the office next to me, about Java's lack of function pointers and he pointed out that if I restructured my code with interfaces I wouldn't need C's function pointers.

Time and time again we come across problems which seem insoluble. When this happens, the stubborn individual persists at trying to find a solution while the more faint of heart gives up. Oftentimes the best way forward is actually to take a lesson from the faint of heart. Whatever it is you want - don't want that. The solution is to redefine the surroundings of the problem so the trouble area just goes away. That's what Chuck did, that's what William Zinsser says, that's what Chris did, and now that is what I try to do.

Sunday, January 28, 2007

Psychology of Extreme Programming

Converts rabidly follow Extreme Programming not because it is a good design/development methodology (though it has some good ideas), but because it is good to developers. XP's methodologies cater to the fact that software engineers are people with foibles and attention spans, just like everyone else. Each of the tenets of XP helps keep people motivated, which is key to producing a good product.

Pair Programming

Developers' lives are filled with distractions. When Joe Developer wants a break from programming he is already on a computer with easy access to his email, the news, or his favorite blog. When he is paired with Jane Programmer they both have a reason to keep working. Joe doesn't want to check his email for the 17th time in the past hour while in front of a coworker. Instead he stays interested in his work because the comradeship of the pair satisfies his need for human interaction that email, etc. was acting as a proxy for.

Short iterations

By developing in iterations, Joe D. and Jane P. always have a concrete near-term goal that matters. On traditional long-term projects there is an interminable period, between the excitement of the start of the project and the concreteness of shipping at the end, where ennui can set in. By breaking the project up there is always an immediate goal that they can focus on.

Unit Tests

Besides the obvious result of testing the code and hopefully producing a higher quality product, unit tests provide immediate feedback. Just as file transfer indicators, like the silly flying paper on Windows, keep users aware that something is happening, unit tests provide evidence that the program is still growing. Every time Jane P. runs unit tests she gets a sense of gratification that something is happening, even if it is just another bug to fix.

You Ain't Gonna Need It

This principle helps in two ways. First, it reins in the tendency to be an architect astronaut. Second, it keeps Joe D. from being overwhelmed. When Joe starts thinking about how the Tove is going to need to work in the Wabe and that the Borogoves will need to be Mimsy, he gets paralyzed by the daunting complexity and does nothing. However, when he applies the "You Ain't Gonna Need It" principle - o frabjous day - he is able to write beautiful code which later, when he really understands how to make the Tove Slithy, he can.

Final Thoughts

I could go on, but I have already gotten silly enough for one post. The fact is good software is written by good people doing good work. Any methodology that keeps Joe D. and Jane P. motivated and on task will likely be successful.

Sunday, January 21, 2007

Inheritance for Reuse

I am a big believer in reuse as a way to improve software engineering (and please ignore my penchant for rolling my own solution to problems because existing ones don't satisfy me). The reason that cars are relatively affordable and reliable is that the same robots that built your car also built millions of others. So once you have created the factory that builds the Ford Pinto, you can cheaply churn out millions of them and have some confidence that they will behave like the first.
The power of inheritance is that it provides a well defined way to program against abstractions. When you program against abstractions, you decouple your code from its context. Code that can be decoupled from its context can be reused. Reuse is the key to multiplying productivity.

Fixed Library

Software reuse has been around almost as long as software. A classic example is printf. Almost all of structured programming is about writing code that can be used and reused by at least other sections of the same program, if not by other programs. So reuse in and of itself is not novel. However, there are limitations to this type of reuse. In many cases you may want to do almost the same thing as the given library call but slightly differently (fprintf vs. printf). The challenge in writing reusable code is to write code that can support these differences without you having to know them up front, and without the user of the code having to actually edit it themselves.

Runtime Chosen Library

Sometimes when you are writing code you know what you want to do, but not how. When I took driver's ed they didn't know if I was going to be driving a Pinto or a Chevy Nova, but they were able to teach me to drive anyway. A standard example of this in Java is JDBC. You program against the common interfaces from JDBC and at runtime you load an implementation which inherits from this interface. Now you have the ability to replace the JDBC library you are using with little to no coding changes.

Parametrized Libraries

The most complicated, but potentially most powerful, type of library is the type that is configured by the calling program. Imagine you want to use your Pinto assembly line to make Mustangs. It would be efficient if you could just reuse the assembly line, but pass it the robots which are configured to make the Mustang instead. A common Java example of this is the Arrays.sort method. If you want to sort an array of Objects in a non-standard way, you can just write a class which implements a Comparator and then take advantage of the built-in sort. The Comparator is probably simpler to write and easier to test than your own sorting algorithm, especially if it has been a while since CS102. Through this use of inheritance, the sort method gets reused in contexts that the original author knew nothing about.
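A minimal sketch of the Comparator idea (the example data is mine): the library supplies the sorting algorithm, the caller supplies only the ordering.

```java
import java.util.Arrays;
import java.util.Comparator;

// Parameterizing the built-in sort: Arrays.sort does the work, and we hand it
// an ordering it knew nothing about (here, shortest string first).
public class SortByLength {
    static final Comparator<String> BY_LENGTH = new Comparator<String>() {
        public int compare(String a, String b) {
            return a.length() - b.length();
        }
    };

    public static void main(String[] args) {
        String[] cars = { "pinto", "mustang", "nova" };
        Arrays.sort(cars, BY_LENGTH);
        System.out.println(Arrays.toString(cars)); // [nova, pinto, mustang]
    }
}
```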

How Not to Use Inheritance

A common thought among people first learning inheritance is that it is useful for reuse because it lets you get the functionality of the base class in the derived class. I know I thought this at first, and as a result wrote some horrendous code. In general when this is done, it is probably better to aggregate the base class rather than inherit.
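A hypothetical example of the aggregation alternative: rather than inheriting from a collection class just to get its methods (and thereby exposing dozens of operations that make no sense for your type), hold the collection in a field and expose only what you mean to support.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical example: UndoHistory wants stack behavior. Aggregating the
// Deque keeps the public interface down to the three operations that make
// sense, instead of inheriting everything a collection class offers.
public class UndoHistory {
    private final Deque<String> actions = new ArrayDeque<String>();

    public void record(String action) { actions.push(action); }
    public String undo()              { return actions.pop(); }
    public boolean isEmpty()          { return actions.isEmpty(); }
}
```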

Inheritance is Just a Tool

Inheritance is a tool in our software engineering arsenal. It is important to use it wisely, because like any powerful tool it has the ability to wreak havoc. When misused, inheritance hierarchies can obfuscate code and make it next to impossible to understand what is happening. However, when used well it can simplify code both now and, more importantly, in future maintenance.

Sunday, January 14, 2007

Configuration Files and Revision Control

Do you check your configuration files into your revision control system? What is the standard practice for this? This problem keeps cropping up for me and I have tried different solutions. I don't particularly like any of them, so I thought I would throw this out and hope you tell me how you deal with this situation.

Currently I am using Ruby on Rails for a Data Analysis project. Rails has a standard directory structure which you are supposed to follow. This includes support for separate development and production databases. It seems like what I want to do is check in this entire directory structure and then have one checkout for each developer working on the project and another checkout for the production release.

Here are my problems with this. First, it feels wrong to check passwords into revision control. This just doesn't feel very secure. Second, configuration information can change without the project changing. Mary is using database A for her developing and I am using database B for my developing. If every time Mary does an update she gets my database configurations along with my code changes, either she'll be annoyed because she has to update her config file, or I will be because she didn't and she changes my instance of the database unexpectedly.

So the obvious thing to do is to not check in the configuration files, but there are problems with this too. When Sue gets added to Mary's and my project, she can't just check out the repository and be up and running. She needs to create all of the config files that are missing. I guess this can be part of the same documentation that includes where the repository lives, but this still seems inelegant to me. A bigger issue is deployment. When DataAnalysis version 2.1 is deployed and on Saturday night the server running it has a disk crash, it would be great if Dave, who's on call and not involved with the project, could restore DataAnalysis without having to call a developer (particularly me). We, of course, have tagged the project in subversion and there is documentation telling Dave how and where to check out the tagged version, but what about the config file? If he has to actually edit a config file of an inhouse developed application, he may be tempted to call me up just to make sure it is done right. It's bad enough Dave's Saturday night was disrupted, why should mine be? How can we make deployment one step? (Rule 2 of Joel's 12 steps)
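One convention I have seen used for this (the file names here are assumptions, not something Rails mandates): version a template of the config, keep the real file out of revision control, and have each checkout copy the template once.

```shell
# One-time setup in a fresh checkout: copy the versioned template,
# then edit the copy with your own database and password.
cp config/database.yml.example config/database.yml
# Keep the real file out of Subversion (run inside a working copy):
#   svn propset svn:ignore 'database.yml' config
```

This doesn't solve the one-step-deployment problem by itself, but it at least keeps passwords out of the repository and stops Mary's update from clobbering my database settings.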

OK, so I've rambled long enough. How do you deal with this scenario? Please leave a comment and let me know, I'd love to hear it.

Thursday, January 11, 2007

Blogging Schedule

New technical posts every Monday morning
Talking to people about my blog I have gotten some suggestions. They have ranged from advising me to include what I had for lunch on my blog to the idea of having a regular posting schedule. While people seem to be fascinated by what I eat, or rather by what I don't eat, I don't think my dining habits will become a regular part of my postings. On the other hand, a regular posting schedule seems like a great idea for trying to keep readership, as people will know when to visit my blog and can expect new content.

To that end, I propose the following schedule. I will attempt to post a new technical article every Sunday evening or Monday morning to start out the week. If I have other things to say (like this message), I will post them when the mood strikes me. So if you check out this site once a week, you should be guaranteed of seeing new content every time you visit.

And yes, I am playing around with things that I can do to avoid the "walls of text" that I previously mentioned. Please bear with my experimentation.

Wednesday, January 3, 2007

Writing to be Read

Have you ever had the experience where a user/boss/coworker asks, "How do you do X?" and you answer, "It's explained in that document I sent you." and are met with, "Oh, I haven't read that yet" or "I just skimmed that". While it gives us great stories about the incompetence of our users/bosses/coworkers, it is frustrating and not productive. Here are some thoughts I try to keep in mind when writing to reduce these occurrences:

Just as the best diet is the one you stick with, the best style of writing is the one that is read. Therefore, except for reference material, when choosing between readability and completeness, choose readability. Most of our writing should probably look like we are writing for the web.

So what are other keys to making writing readable? What should I be doing to make my blog more readable?