Software that works

The point of development is to write functions that work, or that return errors to the caller (and thus eventually the user) describing how to correct the problem. The errors need to be clear and complete – a misleading error return can be far worse than having the software abort.

There are in fact three different approaches a function can take: it can work, it can return an error, or it can abort the system.

Obviously we want software to work, and we will spend a large amount of effort describing how to make software that does work. But some things are out of our control: if the disk is full, it will not be possible to save new data; if the network is offline, we will be unable to send or receive packets.
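
To make the contrast concrete, here is a minimal C sketch of a save routine that reports a correctable condition such as a full disk back to the caller instead of aborting. The function name, signature, and error-buffer convention are my own illustration, not taken from any particular code base:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Save a buffer to a file.  Returns 0 on success; otherwise returns an
     * errno value and fills errmsg with a message telling the user what
     * went wrong and, where possible, how to correct it. */
    int save_data(const char *path, const void *buf, size_t len,
                  char *errmsg, size_t errmsg_len)
    {
        FILE *f = fopen(path, "wb");
        if (f == NULL) {
            int err = errno;
            snprintf(errmsg, errmsg_len, "cannot open %s for writing: %s",
                     path, strerror(err));
            return err;
        }
        if (fwrite(buf, 1, len, f) != len || fflush(f) != 0) {
            int err = errno ? errno : EIO;
            if (err == ENOSPC)
                snprintf(errmsg, errmsg_len,
                         "cannot save %s: the disk is full; free some space and try again",
                         path);
            else
                snprintf(errmsg, errmsg_len, "cannot save %s: %s",
                         path, strerror(err));
            fclose(f);
            return err;
        }
        if (fclose(f) != 0) {
            int err = errno;
            snprintf(errmsg, errmsg_len, "cannot finish writing %s: %s",
                     path, strerror(err));
            return err;
        }
        return 0;
    }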

Software aborting on error is only good for users who can change the software, and only if the software aborts at the point of the error, and only if the error is something that is correctable in software. This is what assertions are for, but assertions are a two-edged sword: developers mold their development efforts around having assertions, yet none of the benefit of assertions can be realized by end-users who lack access to the source or the ability to fix it. If used at all, assertions need to be limited to initial development and removed completely long before the software becomes useful to non-developers. Otherwise, the software will remain in a half-working state.

There is a difference between software that embodies an entire workflow and software that is a component in a workflow. If the workflow is manual (a person runs the component and reads its output directly), then aborting with a clear message and returning an error are nearly the same thing.

Security and usability

Security never trumps usability. Perfectly secure software that is unusable will not be used, and software that is not used is pointless to make.

There is real usability and fake usability. For example, speed is not in itself a usability concern. Making something 10% slower for the sake of security is fine, although it may change the feedback we need to give the user in order to maintain high usability. At some point speed does become a usability concern, because a usable process has to make progress.

Most of the time, we can keep security without compromising usability, but it does mean changing the systems that are affected. One case in mind is login, where a username and password are presented. The security side of the picture insists that we treat those as one item, telling a user who made a mistake only that “there is an error with either your username or your password”. This is bad from a usability point of view, because the user thought she had both correct, and now she has to spend double the effort figuring out which one is wrong. The security side does this to keep programs from probing the system to determine which accounts exist, because knowing that an account name exists can often help target an attack. But this is really false security, because account names are often already known. So instead we should spend our effort on making password attacks hard to carry out, rather than putting a cognitive burden on the user.
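
As a sketch of that alternative (the account store, rate limiter, and result codes below are hypothetical stand-ins, not a real API): tell the user exactly which part was wrong, and put the defensive effort into throttling and slow password hashing rather than into a vague message.

    #include <stdbool.h>
    #include <string.h>

    /* Hypothetical stand-ins: a real system would use an account database,
     * a slow salted password hash instead of plaintext comparison, and a
     * per-source rate limiter. */
    struct account { const char *name; const char *password_hash; };

    static const struct account accounts[] = { { "alice", "..." } };

    static const struct account *find_account(const char *username)
    {
        for (size_t i = 0; i < sizeof accounts / sizeof accounts[0]; i++)
            if (strcmp(accounts[i].name, username) == 0)
                return &accounts[i];
        return NULL;
    }

    static bool too_many_recent_failures(const char *username)
    {
        (void)username;              /* placeholder for a real rate limiter */
        return false;
    }

    static bool password_matches(const struct account *acct, const char *password)
    {
        /* placeholder: a real check compares against a slow, salted hash */
        return strcmp(acct->password_hash, password) == 0;
    }

    enum login_result { LOGIN_OK, LOGIN_NO_SUCH_USER,
                        LOGIN_BAD_PASSWORD, LOGIN_RATE_LIMITED };

    /* Be specific with the user; be hostile to the attacker via throttling. */
    enum login_result check_login(const char *username, const char *password)
    {
        const struct account *acct;

        if (too_many_recent_failures(username))
            return LOGIN_RATE_LIMITED;   /* "too many attempts, try again later" */
        if ((acct = find_account(username)) == NULL)
            return LOGIN_NO_SUCH_USER;   /* "no account named <username>" */
        if (!password_matches(acct, password))
            return LOGIN_BAD_PASSWORD;   /* "the password is incorrect" */
        return LOGIN_OK;
    }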

This does not mean that usability always wins, because there is a dual to the statement above: perfectly insecure software is also unusable, because using it is dangerous, and so it will quickly stop being used. And again, software that is not used is pointless to make. The overall point is that we should not trump usability with security concerns; instead, we need to figure out how to address the security concerns while keeping our high level of usability.

Textadept

Textadept is a fast cross-platform editor written in a combination of C and Lua; the Lua interpreter is embedded in the textadept binary. The GUI version uses GTK 2.0 and the terminal version uses ncurses. On Linux, it expects GTK 2.0 or higher to be available, and the GTK runtime is embedded into the Mac and Windows versions.

I don’t know if I like it. Like many other editor projects, it uses Scintilla (now there’s a successful project if there ever was one). The project page certainly takes a very grand, almost bombastic view, but the editor interface is quirky, almost unique. The default light theme is fairly low-contrast. The default is two-space indentation with tabs-as-spaces, and it wasn’t easy for me to figure out how to change it (I had to read a bit of the manual; the Buffer menu is very implementor-focused, not user-focused).

I’ll give it a shot, but I’m not liking it just yet.

Assertions are like training wheels

I’ve been thinking a lot lately about software – how to design and write software that works. One thing that has struck me is that the use of assertions is actually bad in the long run for any code base.

Assertions are like training wheels. Immature code uses them because immature code falls down a lot at first. Presumably your code will grow up some day, but if you keep using assertions, that will be harder to achieve. In fact, assertions subtly guide your code away from taking full responsibility, and your code will always stay in a non-robust state.

Assertions are not error handling. But despite that, the mere fact of having assertions in the code means that either there are logic bugs waiting, or there is error handling masquerading as assertions. And since most people turn assertions off in release code, that means you also turn off your error handling.
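
A concrete illustration (the function and file names are mine, not from any real code): when assertions are compiled out with NDEBUG, the assert expression is never evaluated, so both the “check” and any work hidden inside it silently disappear in a release build.

    #include <assert.h>
    #include <stdio.h>

    /* Error handling masquerading as an assertion.  Build with -DNDEBUG and
     * assert() expands to a no-op: its expression is never evaluated. */
    void load_settings(const char *path)
    {
        FILE *f = NULL;

        /* Looks like a safety check, but in a release build fopen() is
         * never even called, so f stays NULL and nothing is loaded. */
        assert((f = fopen(path, "r")) != NULL);

        if (f != NULL) {
            /* ... read the settings ... */
            fclose(f);
        }
        /* No message, no recovery: the "handling" evaporated with NDEBUG. */
    }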

It’s worse than that, because assertions don’t make for good error handling. While aborting is certainly one way to handle an error, and while exception reporting (often employed for assertions) makes postmortem diagnosis a little easier, it’s always better to actually handle errors at run-time, so that things can still work for the user.

The first step is to stop thinking that you need assertions. You don’t. If you look at most of the places in the code where you use assertions, you’ll see that they fall into two groups – hedges against logic bugs like assert(ptr != NULL), and bad error handling like assert(*url != 0). In the former, you suspect there might be a bug, either now or introduced in the future. In the latter, you know that passing in an empty URL is illegal.
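
Here is what the second group looks like once it is promoted to real error handling; the function name and return convention are my own sketch, not anything prescribed here. The check survives release builds and gives the caller something it can act on or report:

    #include <errno.h>
    #include <stddef.h>

    /* Instead of assert(*url != 0): validate the input at run-time and
     * return a meaningful error to the caller. */
    int fetch_url(const char *url)
    {
        if (url == NULL || *url == '\0')
            return EINVAL;          /* caller passed an empty URL: say so */

        /* ... perform the fetch, propagating any network errors the same way ... */
        return 0;
    }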

There are an infinite number of bugs that might exist, and scattering the code with assertions to find them is a hopeless task. You’re trying to prove the negative, which is impossible. Instead, you want to change your development methods so you can prove the positive, i.e. prove that your code is bug-free. This is very hard, but not impossible. Since using assertions to find all bugs is impossible, and since your goal should be to have no bugs, assertions are a dead-end in that regard.

Assertions, in fact, aren’t error handling; they are error detection. But since our brains aren’t perfect, when we see an assertion on some error state we think “OK, that’s covered” and move on, leaving behind a time bomb that will go off at some point in the future.

Assertions are a viable tool when you are exploring some new space. They are like debugger breakpoints or print statements: you use them to double-check your assumptions and to try things out as you put them together. But they should come out of the code quickly, once you know what you are doing.

Should library code have assertions to help catch errors in new code that calls the solid library code? That’s dubious. Perhaps the only good case of this I’ve seen is Microsoft’s idea of the “checked build” of Windows, which had the equivalent of assertions: runtime checks that would go off if you passed bad parameters and so on. But end-users didn’t use the checked build; it was strictly a testing tool. However, that doesn’t map well to having assertions in library code, because of the negative effect that assertions have on the code around them.