

Software Developer, Chemist

Intellectual Integrity

A vector graphic of a meditative face, with a set of scales upon its forehead.
“Man's unfailing capacity to believe what he prefers to be true rather than what the evidence shows to be likely and possible has always astounded me…”

Academician Prokhor Zakharov, “For I Have Tasted The Fruit”, Sid Meier's Alpha Centauri

“I don't know, I don't have enough evidence” is a sentence that is beginning to annoy my mother. I say beginning. I mean that she is now tired of it to the point where she has exasperatedly responded “Can you not express an opinion without consulting a study or evidence‽”.

I am sure that I do, in fact, express earnestly held but poorly evidenced opinions at times. I know for sure that I don't know the full reference for every study I've read, which often results in the less-than-satisfying statement “I read a study once that said…”. That said, I'm pretty good at it (at least good enough to annoy family members), and I still get to have interesting philosophical conversations with people, normally querying their beliefs and political stances; I do not believe that I am presently equipped to have all the answers, and people are interesting.

That said, part of what makes it interesting is that, in that four-year-old kind of way, it is fun to see, if you ask “why?” enough, at what point people run out not only of answers but of well-evidenced answers. This is particularly interesting in academic workplaces, because one tends to be dealing with people who are astoundingly talented leaders in their field. Particularly telling is what happens when one attempts to coax such individuals away from their domain of expertise.

A lot of the academics I have these conversations with are actually pretty good at this. A lot more go as far as acknowledging that they are hypothesising. A fair number, however, don't seem to realise when they are talking about a subject in which they have not received enough information or evidence to be forming statements. As a rule of thumb, a sharp person can normally catch them out on a logical fallacy before too long into the conversation.

The situation is actually really problematic in academia, where ostensibly we are responsible for the education of our students. However, very few of us are experts in education. And by very few, I mean that in the last 10 years I have met one-and-a-half academics who qualify. The others can be varying levels of educator, often by their own admission. I have no idea where I sit on the scale; I just try to teach the stuff I know in a way that I think would have made it easier to pick up than the way I was taught (or, in the case of computer science, the way I taught myself). But the number of academics I have seen try to defend positions on educational practice which are unsupported by evidence is staggering. The most aggravating argument is often that it has been done that way for hundreds of years. I once, as an angry undergraduate, pointed out to a lecturer that his subject hadn't been around that long. That was a diplomatically unwise thing for an undergraduate to do.

It is bad enough that this is the case with teaching, but we are not only called to be teachers. We are called to be managers (whether of undergraduates, postgraduates, or other scientists). During my Ph.D. I wasted a week of a trip to Australia untangling a set of contradictory licence agreements to work out whether I was legally permitted to write a piece of software. I often wonder if academics doing these things become as reluctant to move from their established positions on those topics as I have seen them become about teaching.

The thing that troubles me most about this is that it actually makes for some deeply flawed scientists. I have witnessed, with increasing irritation, several scientists maintain that the only truth is scientific truth. This misses, of course, a very key scientific point, most adeptly put by the late George Box, who stated that “All models are wrong; some models are useful”. The fact is, our scientific understanding is no immutable truth. It is useful for making predictions about the observable world. That being said, I would not rule out there being an alternative way of describing the world which is also good for making predictions; it's just not going to be one that is as well established and practised as the present model.

If my experience is borne out in general, and it might not be, then something key is missing from scientific education: post-doctorate scientists are unaware of the philosophical nature of their field. I myself only happened upon these notions through the strange circumstance of studying a weird mix of chemistry, statistics and computer science (with a good dose of ontology thrown in). Allowing such a key aspect of our work to be a matter of pedagogic random walk seems like a huge oversight. If academic scientists believe that we are arbiters of some fundamental truth beyond the ability to make predictions, then I think that we, in a very real sense, don't know what we are doing.