Archive for the 'Processes and Procedures' Category

Building Better Device Interfaces

I recently encountered an interesting example of how not to implement a control interface for an external device. The devices in question are two laser products from the same company that do essentially the same thing. The problem is that the software for each of the controller units seems to have been developed in sealed rooms with little or no cross-communication between them. I’m not going to name any names, mainly because other than the command set silliness these are good products, but I’ve suffered enough at the hands of some unknown yahoo (or group of yahoos) that I felt compelled to write up a quick “lessons learned”. I will refer to these two products as controller A and controller B.

Continue reading ‘Building Better Device Interfaces’


Some Thoughts on Writing Readable Python Code

The Python document “Style Guide for Python Code”, also known as PEP-8, starts off by stating that: “One of Guido’s key insights is that code is read much more often than it is written.” While I won’t dispute that, I do have some thoughts on reading code in general. I’m a big fan of re-use, and I don’t like to spend any more time reading code than I absolutely have to. I want to get on with it and get the project done. But not everything comes with a set of nicely written man pages or a detailed reference manual, so I do find that I have to actually read the code every now and again. And while I may grumble about the time I spend doing it (and thinking about all the other people doing the same thing because someone couldn’t be bothered to write any documentation), what really makes me cranky is when the code is hard to read to begin with.
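To make the point concrete, here is a small hypothetical illustration (the function and its names are mine, not from PEP-8 itself) of the difference between code that merely works and code that can actually be read:

```python
# Terse but opaque -- legal Python, but the next reader has to
# reverse-engineer the intent from scratch.
def f(x, t): return [i for i in x if i > t]

# The readable version: a descriptive name and a docstring hand the
# next reader (or reuser) the intent for free.
def filter_above_threshold(values, threshold):
    """Return the items of *values* strictly greater than *threshold*."""
    return [value for value in values if value > threshold]

print(filter_above_threshold([1, 5, 3, 8], 4))  # [5, 8]
```

Both versions produce the same result; only one of them can be reused without grumbling.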

Continue reading ‘Some Thoughts on Writing Readable Python Code’

The “Software is Simple” Syndrome

In looking back over the years, it seems that I have worked in either the rigorous environments of hard real-time mission critical software, or I’ve worked in scientific research environments. Which strikes me as odd, since the two realms are, in many ways, polar opposites when it comes to how software is designed, implemented and tested. I would have to say that while I like the formalisms of the embedded real-time work, I also enjoy the intellectual challenges of scientific programming. But there’s a price to be paid for that, and I want to share with you my thoughts on that score.

Continue reading ‘The “Software is Simple” Syndrome’

The Grand Debut of “Software Engineering”

I have a treasured book containing the proceedings of a NATO conference held in 1968 in Garmisch, Germany. It was the first time that the term “Software Engineering” had been used in such a bold fashion. It’s a first edition copy.

Recently I came across a web site that provides both the 1968 and 1969 conference proceedings. It can be found here:

When I read the 1968 proceedings I was flabbergasted by the topics discussed. Most of the book could have been written a few years ago. They had identified the same issues then that still plague the software development community today: lack of documentation, lack of testing, lack of processes. Amazing.

So, go take a look for yourself. I paid a lot of money for my book (and it was worth every penny). You can get yours for free.

And prepare to be amazed at how little has really changed over 40 years.

Software Engineers and Computer Programmers

The following is from an essay I wrote a while back but never got around to publishing. I suppose it is possible that some folks may take issue with it, but that is not my intent. On the other hand, I’ve been around long enough and I’m now old enough that I find I have little patience left for excessive political correctness, and I don’t believe that everyone should get a gold star just for showing up for work.

I often notice how the terms “software engineer” and “computer programmer” are used interchangeably, as if they were synonyms. I’ve seen resumes for people who were clearly programmers, not software engineers, referring to themselves with the engineering title. And, somewhat humorously, I’ve seen job ads seeking a “software engineer” for web design applications that clearly had nothing at all to do with software engineering.

Continue reading ‘Software Engineers and Computer Programmers’

On the Failure of Code Reuse

It has been my experience that there is a direct one-to-one relationship between the quality of a piece of software and how willingly it will be selected for code reuse. This might seem obvious at first if one uses a simplistic definition of what “quality” means. However, quality is not just how cleverly the code implements a particular algorithm, or how efficiently it executes. Quality is much more than these things.

Quality is also determined by the readability of the code. Can it be easily understood? Does it have documentation to accompany it that explains, clearly and concisely, what it does and how to use it? Quality is a characteristic supported by test results. Are the test results available, along with the test cases and procedures used? If these basic things are not in place, then the odds are that the code won’t be selected for reuse.

The upshot here is that code that is not easily understandable is not easily usable or maintainable. One of the main reasons that code reuse fails to live up to its promise is that the candidate code itself is so poorly written and so poorly documented that it is easier to just throw it out and start over than to sit and ponder someone else’s obtuse attempt at cleverness. Poorly written and poorly documented code that is not reusable, when it really should be, suffers from what I call “unintentional obfuscation”, and it is rampant in the software industry.

Unintentional obfuscation is what happens when the person writing the code fails to see the need for following an accepted coding style guide, doesn’t bother to document what they’ve done, and doesn’t have a design document to point to that describes, in some fashion, how the code is structured and what it is supposed to do. If they’ve neglected these things then the odds are good that they also don’t have a test plan, test cases or test results to prove that their code works correctly. In other words, it isn’t quality code.
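As a small sketch of the opposite of unintentional obfuscation (the function here is invented for illustration), consider a utility written with reuse in mind: the docstring says what it does and how to use it, documents the failure mode, and embeds a worked example that doubles as a test case:

```python
def running_mean(samples, window):
    """Return the simple moving average of *samples* over *window* points.

    Raises ValueError if *window* is not a positive integer no larger
    than len(samples).

    >>> running_mean([1, 2, 3, 4], 2)
    [1.5, 2.5, 3.5]
    """
    if not isinstance(window, int) or window < 1 or window > len(samples):
        raise ValueError("window must be an integer in 1..len(samples)")
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # the embedded example serves as a regression test
```

A stranger deciding whether to reuse this function can answer every question from the docstring alone, without reading a single line of the body.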

I believe that competent software engineers and developers do understand the value of reuse, but often find it difficult to justify the level of effort necessary to comprehend and validate code that is difficult to read and poorly documented. Rather than beat their heads against a wall trying to decipher (and often debug) crufty stuff from some long-gone programmer, they simply throw it out and start anew. This is not an unreasonable response given a bad starting point. I have myself taken the clean-sheet option on occasion when confronted with nasty legacy code that would have taken longer to bring back to life than it took to just do it over. But no matter how well justified it may be, starting over is still an unfortunate waste of time and money.

Software Testing (or lack thereof)

“There is always a well-known solution to every human problem–neat, plausible, and wrong.” – H. L. Mencken

So, what is software testing? The trite and obvious answer is that testing demonstrates that the software works correctly. But, what, exactly does that mean?

Does it mean that the compiler didn’t find any syntax errors? Or, does it mean that a unit or module generated an expected output for a particular input? In some cases it might mean that someone sat down and tried all the menus and buttons in a GUI according to a set of screenshots and didn’t notice any obvious errors (I’ve actually seen this one passed off as “complete functional testing”!).

The paradigm of software testing can include these things, but they are not the end of the story by any means. If nothing else, these activities do not speak to the real reason why one would want to do software testing in the first place, namely: Does the software do what it was intended to do and does it do it correctly, each time, every time? In order to know the answers with any degree of certainty one must go deep into the logic of the code and examine the dimly lit nooks and crannies for lurking bugs.

What the user sees in the interface, be it a fancy GUI or a simple set of lights in a cockpit, is but the tip of the iceberg. Just flipping a switch and observing that an indicator illuminates doesn’t really say much more than that the switch and the lamp are wired so that current flows. What logic senses the switch? What logic illuminates the lamp? What internal tests and actions are performed between the time the switch gets flipped and the lamp turns on? What happens if conditions are such that the lamp isn’t illuminated, or even worse, if it’s illuminated when it shouldn’t be?
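The logic hiding below that tip of the iceberg can be sketched in a few lines. This is a hypothetical example (the function and signal names are invented here), but it shows why the flip-the-switch check proves so little: the switch is only one of several inputs to the decision.

```python
# Hypothetical sketch of the logic between the switch and the lamp.
def lamp_command(switch_on, lamp_self_test_ok, power_bus_ok):
    """Decide whether the lamp should be driven, given system state."""
    if not power_bus_ok:
        return False        # no power: never drive the lamp
    if not lamp_self_test_ok:
        return False        # failed self-test: don't report a healthy state
    return switch_on        # only now does the switch position matter

# Flipping the switch with everything healthy lights the lamp...
assert lamp_command(True, True, True) is True
# ...but the interesting test cases are the off-nominal paths:
assert lamp_command(True, False, True) is False  # lamp must NOT light
assert lamp_command(True, True, False) is False
```

The manual flip-and-look check exercises exactly one of these paths; the testing the section below argues for must exercise all of them.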

A somewhat more formal definition of software testing might go like this: Software testing is a set of activities performed to demonstrate that the software meets all of its requirements, handles off-nominal conditions gracefully, and that all of the executable statements in the software are exercised, using both valid and invalid inputs. As a corollary, an important point to always bear in mind was stated succinctly in the famous quote from Edsger Dijkstra: “Testing cannot demonstrate the absence of errors, only their presence.” This puts the onus on the testing to be as thorough as possible, lest a subtle defect go undetected until the user loses a day’s worth of work, or the train doesn’t stop when commanded, or the airliner falls out of the sky.

Software testing is not something one does alone in a dark room with no one looking. It is an integral part of the software life cycle, and deserves every bit as much attention as the requirements, design and coding phases. Unfortunately, in a typical high-pressure marketing-driven software shop things start to drop out of the life-cycle long before the product is delivered, starting with the requirements and design and then the testing, until all that is left is the coding.

Just as there are different levels of requirements (product description, functional requirements, implementation requirements and performance requirements, for example) there are also different levels of testing. Unfortunately I don’t think testing is as well understood as it should be (I also think that requirements are even less well understood, but that’s a different issue). For example, I have noticed over the course of my career that people often confuse unit testing for functional testing, and uttering the phrase “requirements coverage” amongst a typical group of software developers is almost certain to result in more than a few blank stares.

Rather than carry on about something that elevates my blood pressure to begin with, I will instead take the easy way out and refer the interested reader to Wikipedia. Just type in “unit testing”, read what shows up and then follow the links.

