The Decade of the Brain
It was 1994 and I had a platonic crush on my neuroscience lecturer ... Professor Lee Astheimer.
I was studying biomedical science at the University of Wollongong and in brain science, everything was changing.
By Presidential Proclamation, the US Federal Government designated 1990-1999 as the Decade of the Brain, and billions of dollars were invested around the world as other countries joined the research effort.
As a result, our understanding of how the brain works increased dramatically, and many assumptions had to be re-thought ... especially with regard to how changeable (or plastic) the brain is.

I fell in love with neuroscience ... and its partner, psychology ... and was inspired by Prof Astheimer to be the best I could be.
Research from this era fundamentally changed many aspects of clinical reasoning in medicine and health care, and it was an exciting time to be alive.
Even at the conclusion of the 'Decade of the Brain', research continued around the world, and in 2013 the BRAIN Initiative was announced in the USA, with an estimated US$4.5 billion to be invested over a 10-12 year period.
The Beginning of Evidence-Based Medicine
Back in 1971 ... a year after I was born ... Dr Archie Cochrane wrote a book called:
Effectiveness and Efficiency: Random Reflections on Health Services
Dr Cochrane argued that medicine would be more effective and efficient if interventions were based on randomised controlled trials. Over time this idea took hold, as more and more researchers began asking better questions and conducting better research to find out what works in medicine.
Eventually, the Cochrane Collaboration was established, and it now maintains the authoritative Cochrane Library and the Cochrane Database of Systematic Reviews. Dr Cochrane is regarded as one of the originators of modern clinical epidemiology and evidence-based medicine.
By 1997 I was studying a Master of Health Science at Victoria University and attended a back pain conference with the world-leading pain physician, Professor Nikolai Bogduk.
He. Blew. My. Mind.
His ability to articulate an evidence-based approach to diagnosis and treatment was like nothing I'd heard before ... and it certainly disrupted and challenged many people's views on current practice.
At that time, I made a simple commitment to myself to prioritise evidence over opinion.
This might seem strangely unnecessary ... after all, I was studying clinical sciences ... wasn't 'evidence over opinion' an already existing commitment?
The answer was 'No, it wasn't'.
The theoretical foundations of many clinical professions were based on opinion, not evidence ... and they were about to get a challenging awakening. I was rather naive at the time and had no idea of the antagonism I would encounter from others after following through on this decision.
Nevertheless, I stayed the course, and after completing my master's, I did a postgraduate degree in Clinical Epidemiology and Evidence-Based Medicine, and then a Master of Pain Medicine with Prof Bogduk at the University of Newcastle.

My clinical profession at the time was Osteopathic (Physical) Medicine and I co-founded the International Journal of Osteopathic Medicine (Elsevier) with my good friend and colleague, Robert Moran.
We were also joined by other international researchers who supported our evidence-based agenda.
Our journal went on to become the second-largest of its type worldwide, behind only the Journal of the American Osteopathic Association, which had a more general medical focus.
As we introduced the evidence around diagnosis and treatment in physical medicine, the resistance and backlash we received from academics, practitioners and professional bodies was extreme and unfathomable to us.
Anonymous hate-filled emails were received. Whispers and rumours abounded ... and derogatory statements were made to our faces. We had certainly hit a nerve.
Nevertheless, the journal was adopted as the official journal of the Australian Osteopathic Association, the Osteopathic Association of New Zealand and the General Osteopathic Council in the UK, followed by numerous other organisations and associations throughout Europe and North America.
Within just a few years, we were able to get a physical copy of the journal sent to every Osteopath in Australia, New Zealand, the UK, Canada and parts of Europe. Clearly this was before digital publishing had become the norm.

As an aside, I was fascinated to read that Professor Martin Seligman from the University of Pennsylvania described similar resistance when he introduced evidence-based medicine into the profession of psychology, when he was president of the American Psychological Association ...
... an interesting parallel.
The Impact of Evidence-Based Medicine On My World
The impact started with something small and personal.
In 1998 I was a student-clinician, and while I was consulting with a patient, a tutor questioned my physical findings. The tutor then performed her own physical examination and changed my findings and diagnosis.
She also suggested I didn't have sufficient palpatory skills.
I was dubious of this claim, so I searched the literature on the reliability and validity of the physical examination procedure in question. What I found started me on a journey that culminated in my PhD thesis, published in 2012, some 14 years later.
Not only did this physical examination test lack a criterion ('gold') standard, but every study conducted to date had found it to be unreliable. In other words, when two or more 'expert' examiners performed the examination on the same group of patients, their lack of agreement made the test clinically useless.
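To make 'unreliable' a little more concrete, here's a minimal sketch (the ratings are entirely made up for illustration ... they are not from any of the studies I reviewed) of how agreement between two examiners is commonly quantified with Cohen's kappa, which corrects raw agreement for the agreement you'd expect by chance alone:

```python
# Hypothetical example: two examiners each classify the same 10 patients
# as 'positive' or 'negative' on a palpatory test. Cohen's kappa corrects
# the raw agreement for the agreement expected by chance alone.
from collections import Counter

examiner_a = ["pos", "pos", "neg", "pos", "neg", "neg", "pos", "neg", "pos", "neg"]
examiner_b = ["pos", "neg", "neg", "pos", "pos", "neg", "neg", "neg", "pos", "pos"]

n = len(examiner_a)
observed = sum(a == b for a, b in zip(examiner_a, examiner_b)) / n

# Expected agreement by chance, from each examiner's marginal proportions
pa, pb = Counter(examiner_a), Counter(examiner_b)
expected = sum((pa[c] / n) * (pb[c] / n) for c in set(examiner_a) | set(examiner_b))

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement: {observed:.2f}, kappa: {kappa:.2f}")
```

In this toy example the examiners agree on 6 out of 10 patients, which sounds half-reasonable, yet kappa comes out at about 0.2 ... barely better than chance, and nowhere near good enough to hang a diagnosis on.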
And yet I was being taught, examined, and having my skills diminished, on the basis of a test that lacked validity and had been shown to be unreliable.
I began hunting through the literature on diagnosis and discovered that many of the diagnostic tests taught in physical medicine were unreliable, invalid, or of unknown validity.
This was ground-breaking and profession-shaking.
If the diagnostic tests lacked utility and were unable to identify meaningful signs in our patients, then what were the interventions and treatments actually achieving? And how were we to measure outcomes if the same unreliable and invalid tests were used to measure them?
This created significant cognitive dissonance in those who were committed to the professional dogma.
Not only did it challenge their professional identity and status, but it challenged their livelihood as well ... or so they perceived.
I was absolutely comfortable with this development.
I had thought for some time that the tests were wishful thinking and relied on far too much subjective interpretation, and so I felt relieved, rather than threatened, at the consistency of the evidence.
And, of course, it didn't stop at diagnosis.
Evidence for treatment effectiveness and efficacy in physical medicine was also lacking. From spinal injections to manual therapy and manipulation, ultrasound and invasive surgery ... treatments either didn't work, or worked only marginally.
This made sense to me given that patients weren't receiving meaningful diagnoses in the first place. If the treatments worked, it would be despite the diagnosis, not because of it.
With the proliferation of randomised controlled trials for any given intervention, a need had emerged for a systematic way to analyse and summarise the findings of all relevant studies.
Rather than looking at a single clinical trial, we needed to be able to answer the question,
"Based on the available evidence, what can we reasonably say about X?"
This led to the further development of systematic reviews and meta-analyses.
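As a rough illustration of what a meta-analysis does (with entirely hypothetical trial numbers, not real data), a simple fixed-effect analysis pools each trial's effect estimate, weighted by the inverse of its variance, so that more precise trials contribute more to the summary:

```python
# Hypothetical example: three small trials report a mean pain reduction
# (effect) with a standard error. A fixed-effect meta-analysis pools them
# by weighting each trial by the inverse of its variance, so more precise
# trials count for more in the summary estimate.
import math

trials = [
    {"effect": 1.2, "se": 0.6},   # made-up trial 1
    {"effect": 0.4, "se": 0.3},   # made-up trial 2
    {"effect": 0.9, "se": 0.5},   # made-up trial 3
]

weights = [1 / t["se"] ** 2 for t in trials]
pooled = sum(w * t["effect"] for w, t in zip(weights, trials)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} (95% CI {pooled - 1.96 * pooled_se:.2f} "
      f"to {pooled + 1.96 * pooled_se:.2f})")
```

The pooled estimate and its confidence interval give a single, quantitative answer to "what can we reasonably say about X?" ... at least for the studies included, and only if the patients in those studies were identified with sound diagnostic tests in the first place.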
During this period of time, the first clinical guidelines for the treatment of various conditions were produced.
For example, the National Institute for Clinical Excellence (NICE) was founded in the UK in 1999 and released its first clinical guideline, on schizophrenia, in 2002.
Here in Australia, in 2001, Professor Bogduk was invited by the Federal Health Minister to lead a multidisciplinary team in developing clinical guidelines for the treatment of acute musculoskeletal pain.
And yet during this time, I couldn't help thinking that there was little point performing randomised trials and then summarising that data, if the diagnostic tests used to identify patients for those trials were unreliable, invalid, or of unknown validity.
Of course, other leading researchers had already started methodological work on this.
In 2003, Dr Penny Whiting and colleagues from the University of York published QUADAS in the journal BMC Medical Research Methodology ... a tool for the assessment of studies of diagnostic accuracy.
This new tool (QUADAS) was focussed on establishing the validity of diagnostic tests, yet while completing my studies with Professor Bogduk, we saw that there was no universally accepted tool for the quality appraisal of studies of diagnostic reliability.
And so this is what I set out to do for my PhD ... but where?
One of the leading researchers on diagnosis is Professor Les Irwig, an internationally renowned authority on evidence-based medicine. Prof Irwig headed up the Screening and Test Evaluation Program (STEP) at the Sydney School of Medicine, University of Sydney.
I approached Professor Irwig about developing a tool for diagnostic reliability, and to my delight, he was enthusiastic about the idea. We discussed the scope of my doctoral work and I started in late 2006 with his supervision, as well as Prof Petra Macaskill and Prof Nik Bogduk, and with expert statistical help from Dr Robin Turner.

After 6 years of work and study, I completed my PhD.
The main outcome was the development and testing of a tool called the Quality Appraisal of Studies of Diagnostic Reliability (QAREL), which we published in the Journal of Clinical Epidemiology in 2010.
This tool continues to be used as a methodological guide in the design of diagnostic reliability studies and systematic reviews across the world, and at the time of writing has been cited over 160 times.
On a personal note, it was Nik Bogduk who inspired me in 1998 while I was a grad student, and who challenged me to think critically throughout my studies with him. It felt purposeful, gratifying, and somehow inevitable that he was a co-author on the papers I published from my PhD.
A Brilliant Time to Grow Up
This 20-year period, from 1993 through 2012, was a brilliant time to 'grow up'.
The 'explosion' of brain research and neuroscience, along with the development and acceptance of evidence-based medicine, created a level of intellectual stimulation unlike any period before.
And now, with us all facing rapid change once again, it has never been more important to understand ourselves and the world around us, with the aid of critical thinking and high quality information ... gained, where possible, from the right kind of high quality studies.
And finishing this post in the most professional of terms, there's never been a better time to think like a badass.
Thinking like a badass is a skill I've also learned from another mentor, Dr Michael Hewitt-Gleeson, a cognitive scientist who did the first PhD in lateral thinking under the supervision of Professor Edward de Bono of Cambridge University ... but that's an entirely different story.
I am so grateful for the mentorship I have received from such a world class group of thinkers ...