Functional Gibberish

January 21, 2015

My wife used to come home and jokingly complain that I’d sat around all day typing gibberish into a computer screen. Well, of course it’s gibberish to the uninitiated plebeian. We call it a “language” for a reason. You’ve gotta study and learn the language; you’ve gotta speak it and immerse yourself in it for the jumble of random characters to magically transform themselves into a document full of meaning and, if you’re really good, perhaps even profundity.

Except that sometimes I stop and stare at the documents I write and think: God, it really does look like gibberish, doesn’t it? It doesn’t actually make any sense at all.

I once read, sometime in the last year I think, about how university CS programs have been struggling with a problem particular to their field: namely, that there appear to be two distinct bell curves associated with students’ aptitude for computer science [1]. In just about every other program there is only ever one bell curve, and universities make it their goal to push that bell curve as far to the right as they can. The further right, the more they can charge. Thus aptitude tests play an important role in what school you can get into. But in CS, this whole two-bell-curve thing throws a wrench in it all. You could accept a very smart student who lands in the lower bell curve (and fails the course), while a student of average intelligence somehow lands in the upper bell curve and does great.

Of course, this problem has been studied endlessly and has lent support to the notion that perhaps not everyone can actually learn programming. It’s not just that programming is hard; it’s that some people are incapable of understanding it. There’s been a lot of discussion (and a few studies) about what the cause could be. But one paper caught my attention: The camel has two humps. In it, the authors conclude that this clear dichotomy in student aptitude has to do with how a student approaches meaninglessness:

Formal logical proofs, and therefore programs – formal logical proofs that particular computations are possible, expressed in a formal system called a programming language – are utterly meaningless. To write a computer program you have to come to terms with this, to accept that whatever you might want the program to mean, the machine will blindly follow its meaningless rules and come to some meaningless conclusion.

Truth is, I haven’t a clue whether they’re actually right that this is the cause of the aforementioned dichotomy. I’m inclined to think so. Whenever I’ve tried to explain or teach programming to a non-programmer, I’ve met resistance as my prey tries to recast what I’m saying into something with meaning. They try to understand it as a thing in and of itself rather than as a simple set of logical proofs and rules that all of us programmers have agreed to follow.

And yet, what my wife said is true: software programming is gibberish… albeit a very structured, formalized, and consistent gibberish. At the end of the day it doesn’t actually mean anything, and it doesn’t need to. Frankly, I don’t care whether there’s any meaning aside from what I ascribe, so long as the damn thing outputs 2 when I put in 1+1 or add(1,1) or sub(3,1) or 5 >> 1.
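To make that a little more concrete, here’s a minimal sketch of my own (not from anywhere in particular), written in Haskell for the functional flavor; the add and sub helpers are just stand-ins for the gibberish above. The machine doesn’t know or care what any of it “means”; it blindly follows its rules, and every one of these jumbles of symbols reduces to the same answer.

    import Data.Bits (shiftR)

    -- Hypothetical helpers mirroring the add/sub gibberish in the prose.
    add :: Int -> Int -> Int
    add x y = x + y

    sub :: Int -> Int -> Int
    sub x y = x - y

    -- Four different strings of nonsense, one meaningless conclusion: 2.
    main :: IO ()
    main = mapM_ print [1 + 1, add 1 1, sub 3 1, 5 `shiftR` 1]

Run it and you get 2 four times. No meaning required.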