Posts Tagged 'software testing'

Maps and Plans

Every journey goes more smoothly with a map of some sort. Whether it’s a trip to Antarctica or developing firmware for a new microcontroller-based device, it helps to know where you’re going. Without a clear definition of the destination it’s tough to know when you’ve actually arrived. It is also helpful to know what it is, exactly, you expect to find when you do finally arrive. Continue reading ‘Maps and Plans’

Some Fundamental Software Engineering Concepts

I’m still working on Part 2 of the PGM series, so in the meantime I thought I’d toss this up here.

The following is a list of 40 questions I give to people seeking a software engineering position. I haven’t been keeping formal metrics (I should have, in retrospect), but my observation is that most recent college graduates with a BS in computer science cannot answer more than about 37% of these correctly. Someone with a Master’s degree might do slightly better, but not by much (about 60% correct). A person with some years of experience as a software engineer will, of course, do pretty well (perhaps as high as 80%). Someone with years of experience as a programmer will typically do only slightly better than the person with the fresh college degree.

Continue reading ‘Some Fundamental Software Engineering Concepts’

Some Thoughts On Software Testing and Software Test Engineering

Software testing is an art form, and, make no mistake about it, a good software test engineer is an artist.

Software testing often gets a bad rap as being “dull”, “boring” or something that the goofy wonk down the hall does, but that the hotshot developers don’t bother themselves with. I used to view testing as an evil necessity as well, until I discovered how challenging it could be.

Continue reading ‘Some Thoughts On Software Testing and Software Test Engineering’

Building Better Device Interfaces

I recently encountered an interesting example of how not to implement a control interface for an external device. The devices in question here are two laser products from the same company that do essentially the same thing. The problem is that the software for each of the controller units seems to have been developed in sealed rooms with little or no cross-communication between them. I’m not going to name any names, mainly because other than the command set silliness these are good products, but I’ve suffered enough at the hands of some unknown yahoo (or group of yahoos) that I felt compelled to write up a quick “lessons learned”. I will refer to these two products as controller A and controller B.

Continue reading ‘Building Better Device Interfaces’

The “Software is Simple” Syndrome

In looking back over the years, it seems that I have worked either in the rigorous environments of hard real-time mission-critical software, or in scientific research environments. This strikes me as odd, since the two realms are, in many ways, polar opposites when it comes to how software is designed, implemented and tested. I would have to say that while I like the formalisms of the embedded real-time work, I also enjoy the intellectual challenges of scientific programming. But there’s a price to be paid for that, and I want to share with you my thoughts on that score.

Continue reading ‘The “Software is Simple” Syndrome’

Software Testing (or lack thereof)

“There is always a well-known solution to every human problem–neat, plausible, and wrong.” – H. L. Mencken

So, what is software testing? The trite and obvious answer is that testing demonstrates that the software works correctly. But, what, exactly does that mean?

Does it mean that the compiler didn’t find any syntax errors? Or, does it mean that a unit or module generated an expected output for a particular input? In some cases it might mean that someone sat down and tried all the menus and buttons in a GUI according to a set of screenshots and didn’t notice any obvious errors (I’ve actually seen this one passed off as “complete functional testing”!).
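To pin down what “a unit generated an expected output for a particular input” looks like in practice, here is a minimal sketch in Python (the `clamp` function and its test values are hypothetical, chosen purely for illustration):

```python
import unittest


def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))


class ClampUnitTest(unittest.TestCase):
    """A unit test: one small unit, one expected output per input."""

    def test_value_within_range_passes_through(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_value_above_range_is_clamped(self):
        self.assertEqual(clamp(42, 0, 10), 10)

    def test_value_below_range_is_clamped(self):
        self.assertEqual(clamp(-3, 0, 10), 0)


if __name__ == "__main__":
    unittest.main()
```

Note that even this tiny example goes beyond “the compiler didn’t complain”: it asserts specific behavior at the boundaries, which is exactly where clicking through a GUI against screenshots tends to miss things.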

The paradigm of software testing can include these things, but they are not the end of the story by any means. If nothing else, these activities do not speak to the real reason why one would want to do software testing in the first place, namely: Does the software do what it was intended to do and does it do it correctly, each time, every time? In order to know the answers with any degree of certainty one must go deep into the logic of the code and examine the dimly lit nooks and crannies for lurking bugs.

What the user sees in the interface, be it a fancy GUI or a simple set of lights in a cockpit, is but the tip of the iceberg. Just flipping a switch and observing that an indicator illuminates doesn’t really say much more than that the switch and the lamp are wired so that current flows. What logic senses the switch? What logic illuminates the lamp? What internal tests and actions are performed between the time the switch gets flipped and the lamp turns on? What happens if conditions are such that the lamp isn’t illuminated, or even worse, if it’s illuminated when it shouldn’t be?
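Those questions about what happens between the switch and the lamp can be sketched in code. The following is a hypothetical model (the class, its self-check, and the fail-safe rule are all assumptions for illustration, not any real controller’s design) showing how the internal logic, not just the wiring, is what actually needs testing:

```python
class LampController:
    """Hypothetical logic sitting between a cockpit switch and a lamp.

    The self-check and the fail-safe rule below are illustrative
    assumptions; a real controller would have its own interlocks.
    """

    def __init__(self):
        self.fault = False    # set by internal diagnostics
        self.lamp_on = False

    def self_check(self):
        # Internal tests performed between the switch flip
        # and any change to the lamp.
        return not self.fault

    def on_switch(self, switch_closed):
        if not self.self_check():
            # Fail safe: never illuminate the lamp on a fault,
            # and extinguish it if it was already on.
            self.lamp_on = False
            return self.lamp_on
        self.lamp_on = switch_closed
        return self.lamp_on
```

Flipping the physical switch and seeing the lamp light exercises only the happy path; a test of this logic must also force `fault` and confirm the lamp stays dark, the “illuminated when it shouldn’t be” case.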

A somewhat more formal definition of software testing might go like this: Software testing is a set of activities performed to ensure that the software demonstrates that it meets all of its requirements, handles off-nominal conditions gracefully, and that all of the executable statements in the software are exercised, using both valid and invalid inputs. As a corollary, an important point to always bear in mind was stated succinctly in the famous quote from Edsger Dijkstra: “Testing cannot demonstrate the absence of errors, only their presence.” This puts the onus on the testing to be as thorough as possible, lest a subtle defect go undetected until the user loses a day’s worth of work, or the train doesn’t stop when commanded, or the airliner falls out of the sky.
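One way to act on the “valid and invalid inputs” part of that definition is to make off-nominal handling an explicit, tested contract of each unit. This sketch (the command format and range are hypothetical) rejects malformed input loudly instead of limping along with a bad value:

```python
def parse_speed_command(text):
    """Parse a hypothetical 'SPEED <0..100>' command string.

    Off-nominal input is handled gracefully but explicitly:
    anything malformed or out of range raises ValueError rather
    than silently producing a default.
    """
    parts = text.strip().split()
    if len(parts) != 2 or parts[0] != "SPEED":
        raise ValueError("malformed command: %r" % text)
    try:
        value = int(parts[1])
    except ValueError:
        raise ValueError("speed is not an integer: %r" % parts[1])
    if not 0 <= value <= 100:
        raise ValueError("speed out of range: %d" % value)
    return value
```

A thorough test suite would then feed this function not just `"SPEED 50"` but also `"SPEED 200"`, `"SPEED fast"`, and an empty string, asserting in each off-nominal case that the failure is detected rather than ignored.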

Software testing is not something one does alone in a dark room with no one looking. It is an integral part of the software life cycle, and deserves every bit as much attention as the requirements, design and coding phases. Unfortunately, in a typical high-pressure marketing-driven software shop things start to drop out of the life-cycle long before the product is delivered, starting with the requirements and design and then the testing, until all that is left is the coding.

Just as there are different levels of requirements (product description, functional requirements, implementation requirements and performance requirements, for example) there are also different levels of testing. Unfortunately I don’t think testing is as well understood as it should be (I also think that requirements are even less well understood, but that’s a different issue). For example, I have noticed over the course of my career that people often confuse unit testing for functional testing, and uttering the phrase “requirements coverage” amongst a typical group of software developers is almost certain to result in more than a few blank stares.

Rather than carry on about something that gets my blood pressure elevated to begin with, I will instead take the easy way out and refer the interested reader to Wikipedia. Just type in “unit testing”, read what shows up and then follow the links.
