Programmer Superstitions

There are a number of practices that we engage in (no, that we cling to, and defend, and teach to others) that amount to magical thinking or, at best, a failure of rationality.

This is often just fine, no harm done (other than to our self-image as rational geeks), but some of these totemic rituals are stumbling blocks to our ability to produce reliable software.

From time to time we might want to stop and question our most cherished assumptions to see if we’ve fallen into any of these traps:

  • Secrecy and Mystery
  • Ancestor Worship
  • Apophenia
  • Argument From Authority

Secrecy and Mystery: Data Hiding

I’ve been writing in and teaching C++ and C# for twenty years. I know well the iron-clad rule of object-oriented programming that class data should be hidden (private) and accessed through either a property (C#) or an accessor function (C++). Thus:

public class Employee
{
    private string name;

    public string Name
    {
        get { return name; }
        set { name = value; }
    }
}

There are good reasons for this rule. Data hiding makes for better decoupling of classes, and allows the programmer to intercept access to the private data and apply rules or other processing. It is possible, for example, to check whether the client accessing a value has the correct permissions to see or modify it, and to massage the data in appropriate ways.
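For example, a setter can enforce a rule before the value ever reaches the backing field. Here is a minimal sketch; the specific checks are hypothetical, not part of the original example:

using System;

public class Employee
{
    private string name;

    public string Name
    {
        get { return name; }
        set
        {
            // Hypothetical rule: reject empty names before they
            // ever reach the backing field.
            if (string.IsNullOrEmpty(value))
                throw new ArgumentException("Name must not be empty.");

            // Hypothetical massaging: strip stray whitespace on the way in.
            name = value.Trim();
        }
    }
}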

But look closely at the first example shown above: it is not unusual. The backing data is stored in a private field, and full access is provided through a get and a set accessor, neither of which does anything but return or set the value. That is, the accessors add no immediate value at all. The typing penalty is solved with automatic properties:

public class Employee
{
    public string Name { get; set; }
}

But to some degree this just hides the problem. After all, why not write:

public class Employee
{
    public string Name;
}

Don’t panic when you see this; consider that there is little difference between it and the automatic property, and it is a heck of a lot more straightforward.

The last, desperate excuse, as you will find in many computer books, including my own (I add with some chagrin), is that making the backing variable private (or using automatic properties, and changing them when needed) allows you to change how you store the data without breaking any client of your Employee class. You could, for example, decide some time in the future to retrieve the name from a database.
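A sketch of that argument, with a hypothetical INameRepository standing in for the future database; client code that reads and writes employee.Name never changes:

// Hypothetical abstraction standing in for "a database".
public interface INameRepository
{
    string GetName(int id);
    void SetName(int id, string name);
}

public class Employee
{
    private readonly INameRepository repository;
    private readonly int id;

    public Employee(INameRepository repository, int id)
    {
        this.repository = repository;
        this.id = id;
    }

    // Clients still read and write employee.Name exactly as before;
    // only the storage behind the property has changed.
    public string Name
    {
        get { return repository.GetName(id); }
        set { repository.SetName(id, value); }
    }
}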

The rational part of me suspects that the number of person-hours wasted on all the hocus-pocus of properties swamps any possible benefit. And yet, I can’t quite bring myself to eschew them.

The problem is that I can’t justify my reluctance rationally. Either this is superstition or, more likely, it is the Asch conformity effect, from the classic experiment in which students were shown three lines, one distinctly longer than the others, while confederates of the experimenter insisted that a shorter line was the longer one. When three or more confederates made the false assertion, the subjects went along with it more than a third of the time(!)

Ancestor Worship

Let’s take an example where we are not only being irrational, but also making our lives harder and our code more expensive to write and to maintain.

You may want to sit down for this one, but I’m going to dare to ask: why do we insist that C-derived languages (such as C#) continue to be case sensitive? Other than paying homage to Kernighan and Ritchie, I believe I can safely say, after 20+ years of writing in C, C++, and C#, that the disadvantages of case sensitivity swamp the advantages.

The only clear advantage I have ever found is the ability to make the name of a property the PascalCase version of the camelCase name of its backing variable:

private int age;
public int Age { get { return age; } set { age = value; } }

In exchange for that convenience, we enjoy hours of debugging, trying to find where we inadvertently introduced a new variable or method name because of a misplaced shift key.
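A sketch of the kind of bug this invites; one shifted character and the setter assigns to the property instead of the field:

public class Employee
{
    private int age;

    public int Age
    {
        get { return age; }
        // One misplaced shift key: "Age" where "age" was intended.
        // This compiles cleanly, but the setter calls itself forever
        // and crashes at runtime with a StackOverflowException.
        set { Age = value; }   // intended: age = value;
    }
}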

And even in the best case the argument is obsolete, as the convention now is to use an underscore:

private int _age;
public int Age { get { return _age; } set { _age = value; } }

Has any bright graduate student done research on the cost/benefit of case sensitivity? Is there any rational reason that in 2010 C# continues this “tradition” established 30+ years ago? Or might it be a lingering fear of showing disrespect to the icons of our industry, the mighty heroes who created the C family, defeated Troy, and bequeathed us the scriptures by which we live?

Or Maybe Not…

There is an argument that case sensitivity makes more sense in some human languages other than English, and may even make sense as an optimization for some data structures, such as hash tables. Such arguments, however, speak to the need for an optimizing compiler to handle the issue; there is no reason for the language itself to do so.

C++ programmers like to suffer anyway, so this just feeds the beast.
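The machinery is certainly cheap enough. As a sketch, here is how a hypothetical compiler symbol table could treat identifiers case-insensitively using nothing but the standard library (the symbol-table framing is mine, not any real compiler’s):

using System;
using System.Collections.Generic;

class SymbolTableDemo
{
    static void Main()
    {
        // The data structure, not the language, decides that
        // "Age", "age", and "AGE" all name the same thing.
        var symbols = new Dictionary<string, string>(
            StringComparer.OrdinalIgnoreCase);

        symbols["Age"] = "int";
        Console.WriteLine(symbols.ContainsKey("age"));   // True
        Console.WriteLine(symbols.ContainsKey("AGE"));   // True
    }
}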

Ancestor Worship II

Here’s another example of latent ancestor worship (or at least of very old habits dying hard). There is a wonderful myth that American standard railroad tracks are the width they are (4 feet, 8.5 inches) because that is the way they were built in England, which the English did because that is how the first tramways were gauged, which in turn was done because that is the width of wagon wheels made to fit the wheel ruts in old English roads, ruts dug in turn by Imperial Roman chariots.

The myth has tremendous lasting power (you can find it all over the net) because it feels right (expectation bias?). We do that kind of thing a lot: we build the streets of Boston on old cow paths; we unconsciously follow old patterns, even when those patterns are no longer sensible or necessary.

How many times have you seen (or written) code like this:

for ( int i = 0; i < outerArrayLength; i++ )
{
    for ( int j = 0; j < innerArrayLength; j++ )
    {
        myArray[i][j] = i * j;
    }
}

Why are the counter variables i and j? Old cow paths. It turns out that in Fortran (remember Fortran? Remember Eisenhower?) variables beginning with the letters I through N were implicitly integers (a convention borrowed from the even older mathematical tradition of using i through n as integer subscripts), and, well, we just got into the habit. This one is fairly harmless: a ritualized and vestigial part of the programming mind that we’re surprisingly reluctant to let go of.
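For contrast, here is the same loop with names that say what the indices mean (rowCount, columnCount, and myArray are assumed to be defined elsewhere); nothing but habit keeps us reaching for i and j:

// Same loop, with indices named for what they actually index.
for ( int row = 0; row < rowCount; row++ )
{
    for ( int column = 0; column < columnCount; column++ )
    {
        myArray[row][column] = row * column;
    }
}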

Pattern Recognition

One of the most powerful forms of magical thinking is Apophenia: seeing patterns or connections in random data. The tendency towards Apophenia is probably hardwired into the human brain; it is the price we pay for the very advantageous human ability of pattern recognition (an adaptive part of our intelligence that helps us know when to run and when to hunt), but it can also lead us astray (arguably it is the basis of our belief in many pseudo-sciences).

Apophenia is certainly pervasive in consulting. A classic example was the 1980s tendency to study “excellence” in successful companies, trying to extract the apparent essential elements that led to success.

Unfortunately, not only did other companies find it far more complex and difficult to reproduce that success by following these patterns, but even the iconic companies themselves felt the worm turn over time. They kept repeating their patterns, but the outcomes were different. What went wrong?

It was not clear that the patterns of “excellence” we were “seeing” in successful companies (great customer service, caring attention to employees, attention to detail) were as closely connected with success as we had thought. Correlation is not always causation, as we so often learn (post hoc ergo propter hoc).

Argument From Authority

Argument from Authority, to disagree with Samuel Johnson, is the true last refuge of scoundrels.

Some years ago I testified as an “expert witness” in a civil lawsuit, at which the opposing “expert witness” asserted that the “failure” of the project could be attributed to a lack of strict compliance with the ISO 9000 standard.

Smart people can have a reasonable discussion about whether ISO 9000 improves the likelihood of success on very large projects (e.g., the software for a mission to Mars). I personally would not like to work on a software project managed under such a bureaucratic, heavyweight, inflexible, document-intensive, rigid process, but that does not necessarily mean I can prove that no project would ever benefit from it.

I had no hesitation, however, in asserting under oath that the claim by various authorities that this was the right process for every project was abject nonsense. I went on to say, at the arbiter’s insistence, that in my personal assessment a project with ten developers would benefit from strict adherence to ISO 9000 the way a drowning man would benefit from being thrown an anchor. It was my opinion that knee-jerk reliance on a process like ISO 9000 to guide you through every project is a form of Apophenia: the connection between the pattern of ISO 9000 compliance steps and success, however measured, is imaginary. And, I concluded, supporting that theory with Argument From Authority was, at the least, irrational.

Looking where the light is

There is an old joke about a man searching for his keys under a street lamp. He lost the keys in the alley behind him, but he searches under the lamp because that is where the light is.

In our desperate attempt to gain control over very complex processes, with so much money at stake, and so many examples of previous failures, we often fall victim to seeing apparent patterns (be they processes or otherwise) where they do not exist. We examine various projects and say:

“Aha! I see why this project worked and that one didn’t: the difference was too much (or too little) analysis (or design, or documentation, or process, or oversight, or communication).

And all we have to do is increase (or decrease) the number (or length, or duration, or sequence, or complexity, or formality) of the meetings (or documents, or diagrams, or studies, or sign-offs), and so on.”

These false patterns lead us astray, offering the promise that if we paint by the numbers, we too can be Renoir. It may be, however, that the variables of successful process are far more complex, including, if we are terribly unlucky, factors over which we have little or no control, or, only marginally better, factors over which we will have no control until our tools and technologies mature.

Or it may just be that some developers are better at the “art” of programming and shipping product, and that the old adage “’tis a poor carpenter who blames his tools” applies to software as well as it does to other crafts.

The Scientific Method

Over the years, at least to some degree, society has given up many (though not all) of its superstitions when presented with more compelling alternatives. One of the most effective techniques for distinguishing between superstition and truth (or some approximation of truth) is the scientific method: in short, controllable, measurable, reproducible effects that are falsifiable.

It’s hard to do that sort of thing when you’re trying to hit a deadline, and it’s particularly hard to sort out all the alternatives when there are so few objective comparisons.

When was the last time you were able to find anything like an objective answer to the question “which is better: Java or .NET?” (Please don’t write in, my mailbox fills quickly).

It is particularly interesting that the work done at universities and research centers is often not only unrelated to, but totally disparaged by, the folks who write code for a living. That is not the way things work in other engineering fields, and I’m not convinced we can afford the disconnect for much longer.

We seem to be writing 21st-century software with a 12th-century mindset, and that can’t be good.
