Monday, April 30, 2012

You Are Not "Your Brain"

Best-selling mysterymonger Deepak Chopra announces Good News: You Are Not Your Brain.
We are not our brains. We are "conscious agents"... It's very good news that you are not your brain, because when your mind finds its true power, the result is healing, inspiration, insight, self-awareness, discovery, curiosity, and quantum leaps in personal growth. The brain is totally incapable of such things. After all, if it is a hard-wired machine, there is no room for sudden leaps and renewed inspiration...
Chopra is saying that you are a conscious agent, with the power of self-awareness and curiosity etc. Which is true. He then says that "The brain is totally incapable of such things", but you are, therefore, you are not your brain.

The problem is that Chopra has a concept of "the brain" which is essentially a passive, "hard-wired machine". He's right that we are not such a machine; his mistake is to call that machine "the brain", because brains aren't like that either.

But this is not a mistake unique to Chopra. As I wrote previously, the concept of "the brain" is inherently misleading -
What do we mean when we talk about "the brain"? Easy, right? It's this (picture of a human brain). But this is not an image of a brain. It's an image of a dead brain. In a living brain, all kinds of interesting things are happening. Things we literally can't begin to imagine. Because these are hard to visualize, they can't enter the mental picture.

To picture the living brain as just a yellowy lump is like picturing Wikipedia as a disc. It's accurate as far as it goes, but it misses the whole point. You could download Wikipedia onto a BluRay disc, and then you could describe that disc as "Wikipedia" and you wouldn't be wrong, but Wikipedia is much more than a silver circle.
"The brain" brings to mind an inert squishy lump of a certain size and color. This mental image corresponds perfectly well to a dead brain - which is all the proof needed, I think, that it fails to capture the essence of a living one.

"The brain", in other words, is a mere simplified caricature of the brain.

So when Chopra says "You are not your brain", he is right, in the sense that you are not what Chopra (or anyone else) understands by "your brain", but that doesn't mean you're not your brain.

This mistake also crops up in more serious discussions. There are philosophical arguments that go something like this: the human mind can do things that it is inconceivable for a brain to do. Therefore, the mind is not the brain. But couldn't it be that it is the brain, itself, which is inconceivable?

Friday, April 27, 2012

Who Invented Autism?

The concept of "autism" is widely believed to have been first proposed by Leo Kanner in his 1943 article, Autistic Disturbances Of Affective Contact.

But did Kanner steal the idea? That's the question raised in a provocative paper by Nick Chown: ‘History and First Descriptions’ of Autism: A response to Michael Fitzgerald. The piece stems from a debate between Chown and Irish autism expert Michael Fitzgerald, who first made the accusation in a book chapter.

On the evidence presented, I don't think there's good reason to believe that Kanner did "steal" autism, and Chown doesn't seem convinced either. But there's an interesting story here anyway.

Fitzgerald says that in 1938, Hans Asperger - of Asperger's Syndrome fame - gave a series of lectures in Vienna. These were published in the journal Wiener Klinische Wochenschrift as an article called "Das psychisch abnorme Kind" ("The mentally abnormal child").

In this article, Asperger put forward the concept of autism. The term was coined by Eugen Bleuler in 1911 in reference to symptoms seen in 'schizophrenia' (he came up with that word too), but that had nothing to do with children.

In 1943, Kanner published his landmark paper, in which he did not mention Asperger. Asperger published his first major description of 'autistic psychopathy' in 1944. The big question, then, is - had Kanner read or heard of Asperger's ideas before 1943?

Asperger was working in Austria while Kanner, although Austrian-born, was in the USA. WW2 would have made it impossible for them to communicate directly - however, word of Asperger's ideas could have reached Kanner via one of the many European doctors who fled to America over that period.

There is however no direct evidence that this happened. Fitzgerald makes much of the fact that Kanner opened his 1943 paper by saying "Since 1938, there have come to our attention a number of children..." This could be a reference to Asperger's 1938 work - but Kanner said it referred to his first "diagnosis" of autism, Donald T.

This leaves us with a fluke: two Austrian-born psychiatrists independently discovered the syndrome we now call childhood autism, decided to borrow Bleuler's term "autism" for it, made their first observations in 1938 and first published properly in 1943-1944.

Personally, I think that while that is a remarkable coincidence, such things are not uncommon in science. I see no reason to think that Kanner plagiarized Asperger, although it remains possible. If someone were to discover a copy of Asperger's 1938 article tucked away in one of Kanner's old notebooks, then I'd change my mind, but not before...

Chown, N. (2012). 'History and First Descriptions' of Autism: A response to Michael Fitzgerald. Journal of Autism and Developmental Disorders. DOI: 10.1007/s10803-012-1529-5

Wednesday, April 25, 2012

Discover The Crux Of Neuroskeptic

I'm very pleased to announce that I've got a post over on Discover Magazine's The Crux blog: Does Brain Scanning Show Just the Tip of the Iceberg?

Regular readers should be able to guess which Neuroskeptic post that's based on... but it's been rewritten and, I think, makes a much clearer case than the original. Comments are off here; if you want to comment, follow the link.

Tuesday, April 24, 2012

Bias in Studies of Antidepressants In Autism

There's little evidence that antidepressants are useful in reducing repetitive behaviors in autism - but there is evidence of bias in the published literature. That's according to Carrasco, Volkmar and Bloch in an important report just out in Pediatrics: Pharmacologic Treatment of Repetitive Behaviors in Autism Spectrum Disorders: Evidence of Publication Bias

They looked at all of the published trials examining whether antidepressant drugs (mostly SSRIs, like Prozac) were better than placebo at reducing repetitive behaviors in children or adults with an autism spectrum disorder (ASD).

A meta-analysis showed that there was a statistically significant benefit of the drugs overall, but it was marginal, with a small effect size d=0.22, and it was driven mainly by two old, very small studies that found big benefits. One of them only had 12 subjects. By far the largest study, King et al (2009) with 149 people, showed zero effect.

This plot shows all of the studies, with the red line being no benefit of drug vs placebo. The further to the right of the line, the bigger the benefit, but the grey horizontal lines show the uncertainty. As you can see, two small, messy studies found big effects; the others didn't.
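The arithmetic behind that pooled estimate is worth seeing. Here's a minimal fixed-effect (inverse-variance) meta-analysis sketch in Python; the effect sizes and standard errors are invented to mimic the pattern described (two tiny positive trials plus one large null trial), and are not the actual study data.

```python
import math

# Hypothetical (made-up) per-study effect sizes d and standard errors,
# loosely mimicking the pattern described: two tiny positive trials plus
# one large null trial. These are NOT the actual study data.
studies = [
    ("small trial A", 1.0, 0.60),   # (label, d, SE)
    ("small trial B", 0.9, 0.55),
    ("large trial",   0.0, 0.17),
]

# Fixed-effect (inverse-variance) pooling: each study is weighted by
# 1/SE^2, so the large, precise trial dominates the pooled estimate.
weights = [1 / se ** 2 for _, _, se in studies]
pooled_d = sum(w * d for (_, d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled d = {pooled_d:.2f} (SE {pooled_se:.2f})")
```

Because precise studies get large weights, the big null trial pulls the pooled effect down to a small value even though two studies reported large benefits; strip out the small messy trials and little remains.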

Worse yet, although there were 5 published studies, the authors also found 5 more studies that had been completed but never published. Carrasco, Volkmar and Bloch wrote to the people in charge of those studies and asked for data; only one out of 5 replied. That study's data showed no benefit.

We don't know what the other 4 unpublished studies found, but the way science works means they probably came out negative. If we assume that they did, then even the small benefit seen in the published studies disappears.

This paper, incidentally, is a great example of why trial registration matters. Without mandatory pre-registration, no-one would know about the 5 unpublished trials at all. It would have been even better if the researchers had been forced to make public the results, as well as the existence, of the unpublished trials; but it's a lot better than nothing.

Finally, the authors of this paper stress that this doesn't mean antidepressants don't help at all in autism - just that they probably don't help with repetitive behaviors.

Carrasco, M., Volkmar, F., and Bloch, M. (2012). Pharmacologic Treatment of Repetitive Behaviors in Autism Spectrum Disorders: Evidence of Publication Bias. Pediatrics, 129 (5). DOI: 10.1542/peds.2011-3285

Monday, April 23, 2012

Are Psychologists All Mad?

A fun little study from 2008 looked at rates of self-reported mental illness in mental health professionals: Psychologists' And Social Workers' Self-Descriptions Using DSM-IV Psychopathology

The authors did an anonymous survey of clinical psychologists and social workers in Israel. They found that
The sample of 128 professionals included 63 psychologists and 65 social workers. The presence of Axis I traits (i.e. mental illness) was reported by 81.2%, the three most frequent traits being mood, obsessive-compulsive disorder, and eating disorder. Axis II traits (personality disorders) were reported by 73.4% of subjects, the three most frequent conditions being narcissistic, avoidant, and obsessive-compulsive personality traits.
Take a look:
There were few differences between the two professions, although for what it's worth, social workers were more likely to report psychosis and substance abuse problems, while clinical psychologists were more narcissistic, with a full 40% of them admitting to having narcissistic traits. On that note the authors (perhaps unwisely) comment:
While speculative, it may be suggested that narcissistic traits include some important factors in motivating individuals to choose to enter the mental health care profession. In a psychotherapeutic relationship, the ability to influence and understand another person's psyche may include features of "narcissistic gratification".

The problem with all this, though, is that it's not clear what reporting "DSM-IV psychopathology" means; people rated their symptoms on a 5 point scale where 1 = "no evidence of the disorder" and 5 is "greatest severity". Most of the reported symptoms were low in severity, but we don't know what "low" is relative to.

If you said that your narcissism was rated "2" out of 5, you might just mean that you have some narcissistic traits sometimes. That's how I'd interpret the question, anyway.

But in that case, what exactly are you saying? Not much. You're not saying you're very narcissistic. You're not saying you're more narcissistic than average: you might well think that most other people would also score a "2", or even higher. You're not actually saying "I am narcissistic" at all, just admitting that you're not wholly un-narcissistic... and who of us can really say that?

The same goes for all the questions. I think this study would have been much more interesting if they'd just asked people whether, in their clinical judgement, they meet criteria for the disorder. Or: if they had to assess a patient who was just like them, what would they diagnose? Because criteria, not 5 point scales, are what professionals use on their patients.

Nachshoni, T., Abramovitch, Y., Lerner, V., Assael-Amir, M., Kotler, M., and Strous, R. (2008). Psychologists' And Social Workers' Self-Descriptions Using DSM-IV Psychopathology. Psychological Reports, 103 (1), 173-188. DOI: 10.2466/pr0.103.1.173-188

Sunday, April 22, 2012

The Amazing Financial Robot Scam

The BBC reports on an interesting example of a very modern scam: US charges British twins over $1.2m 'stock robot' fraud.

The scam had two parts. For investors, there was the "stock picking robot" called Marl, which supposedly told you which stocks to buy. You could buy a copy of Marl for $28,000 - or get a newsletter featuring Marl's wisdom, for just $47.

In reality, Marl didn't pick anything. The stock tips were provided by the teenage scammers, the Hunters, themselves. Not because they thought they were good stocks, but because the companies behind the stocks paid the Hunters fees for their promotional services.

What's interesting about the scheme is that everything "worked", just not the way it was meant to. Investors paid to get tips as to what stocks would rise; they did rise, just not for the reasons they thought.

So Marl was a lot like one of those quack treatments in medicine that claim to treat a certain disease, and do indeed make the people who take them feel better, but, contrary to what they claim, only through the placebo effect.

There are other similarities too, as you can find out on the rather fascinating site which helped sell Marl. Like many quack treatments it had:
  • An elaborate 'mechanism of action' that blinds with science - Marl uses an "evolutionary framework" to "Develop what professional traders call a 'sixth sense'" and can "process 1,986,832 mathematical calculations per second."
  • Lots of amazing success stories and testimonials from satisfied customers
  • An attractive creation myth - Marl was invented by "Two Uber Geeks" who both had a record of success in more conventional stock trading, but unlike their conventional colleagues, were able to invent Marl by thinking outside the box; this is reminiscent of the many quacks who simultaneously flaunt their medical or academic qualifications while accusing medicine and academia of ignoring them.
Overall this is a fascinating story of greed and lies, and if you like that sort of thing you'll enjoy surveying the electronic ruins of a classic scam, e.g. here and here...

    Thursday, April 19, 2012

    Facial Expressions of Emotion Still Culturally Universal

    Do people from different cultures express emotions differently?

    A new paper says yes: Facial expressions of emotion are not culturally universal. But as far as I can see the data show that at least some of them very much are universal.

    First some background. The authors, Rachael Jack and colleagues of Glasgow, have published before on this theme. Back in 2009 I blogged about one of their previous papers, which showed that East Asians were less accurate than Westerners at categorizing certain emotions.

    But although there were cultural differences in ability to classify some emotions, East Asians still did much better than just guessing. To me, this said that there are fundamental universal emotional expressions, albeit culture can subtly modify them.

    That's my verdict on this study as well.

    The authors adopted a new (and very clever) method this time. Rather than just showing people photos of actors posing expressions and asking subjects to label them with an emotion, they generated virtual faces using 3D modelling software, and made the faces display "expressions" with their 41 virtual facial muscle groups.

    Subjects (either white Westerners, or recent East Asian immigrants) saw 4,800 random assortments and had to label each one; the authors could then work backwards to calculate each subject's "mental model" of each emotion, based on the set of facial movements that best fit it (it reminds me of this method).

    What happened? The Westerners' mental models clustered into the classic 6 "basic emotions" of happy, sad, disgust, fear, anger, and surprise. The Asians' models, however, didn't: although they were pretty much the same on happy and sad, they were less clear about the other 4.

    But how much less? Take a look:

    Cluster analysis and dissimilarity matrices of the Caucasian and Asian models of facial expressions. In each panel, vertical color coded bars show the k means (k = 6) cluster membership of each model. Each 41-dimensional model (n = 180 per culture) corresponds to the emotion category labelled above (30 models per emotion). The underlying grayscale dissimilarity matrices represent the Euclidean distances between each pair of models, used as inputs to k-means clustering. Note that, in the Caucasian group, the lighter squares along the diagonal indicate higher model similarity within each of the six emotion categories compared with the East Asian models. Correspondingly, k-means cluster analysis shows that the Western Caucasian models form six emotionally homogenous clusters... In contrast, the Asian models show considerable model dissimilarity within each emotion category and overlap between categories.

    This shows that yes, Asians "confused" some emotions more than Westerners, but the basic emotional distinctions seemed to be intact, with Happy and Sad especially solid.
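The dissimilarity-matrix logic in the caption is easy to simulate. Here's a hedged sketch in pure Python: synthetic 41-dimensional "models", 30 per emotion category, compared by Euclidean distance. Only the dimensions and counts follow the paper; the data are random, so this illustrates the analysis, not the result.

```python
import math
import random

random.seed(0)

# Synthetic stand-ins for the paper's "mental models": 41-dimensional
# vectors (one weight per facial action), 30 models per emotion category.
# Dimensions and counts follow the figure caption; the data are simulated.
def make_models(n_categories=6, per_cat=30, dims=41, noise=0.3):
    centers = [[random.random() for _ in range(dims)] for _ in range(n_categories)]
    models, labels = [], []
    for c, center in enumerate(centers):
        for _ in range(per_cat):
            models.append([x + random.gauss(0, noise) for x in center])
            labels.append(c)
    return models, labels

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

models, labels = make_models()

# Mean pairwise distance within vs between categories - the quantity the
# grayscale dissimilarity matrices visualize (lighter diagonal squares =
# tighter within-category clusters, as in the Caucasian panel).
within, between = [], []
for i in range(len(models)):
    for j in range(i + 1, len(models)):
        d = euclid(models[i], models[j])
        (within if labels[i] == labels[j] else between).append(d)

print(sum(within) / len(within), sum(between) / len(between))
```

A "Western-like" result is a clearly smaller within-category mean; in the paper's East Asian data, the two means were closer together for the four contested emotions, which is what the overlapping clusters reflect.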

    And look at these examples of the "mental model" for one subject of each group: yes they're different, but not very.

    These are fine results, and I think there are real questions over whether the Ekman 6 emotions model really captures the essence of human emotions (especially negative ones).

    But, especially in the context of previous work from the same authors, I don't think these data really justify the paper's title ("Facial expressions of emotion are not culturally universal"), or the statement that
    Our data directly show that across cultures, emotions are expressed using culture-specific facial signals. Although some basic facial expressions such as fear and disgust originally served as an adaptive function... facial expression signals have since evolved and diversified to serve the primary role of emotion communication during social interaction. As a result, these once biologically hardwired and universal signals have been molded by the diverse social ideologies and practices of the cultural groups who use them for social communication.
    Overall, (ahem) I'm happy to admit that these data show some surprising cultural differences, but I'm afraid that the authors' overblown rhetoric makes me disgusted, sad and angry.

    Jack, R., Garrod, O., Yu, H., Caldara, R., and Schyns, P. (2012). Facial expressions of emotion are not culturally universal. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1200155109

    Wednesday, April 18, 2012

    Preventing Psychosis?

    Can we prevent psychosis?

    In a major study just published, Early detection and intervention evaluation for people at risk of psychosis, 288 young British adults who were deemed to be 'at risk of psychosis' were randomized to get cognitive therapy (CT) or a control condition. The hope was that it could prevent transition to serious psychotic illness.

    The primary outcome measure was how many of them later went on to get diagnosed with full-blown psychosis. Two years later, 7% of the CT group and 9% of the controls had done so: no significant benefit of treatment. CT slightly reduced the level of mild psychotic-like symptoms, but not how much distress they caused.
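For a rough sense of why 7% vs 9% isn't significant, here's a back-of-envelope two-proportion z-test in Python. It assumes an even 144/144 split of the 288 participants and rounds the percentages to whole patients, so it's an approximation, not the paper's own analysis.

```python
import math

# Assumed even split of the 288 randomized participants (approximation).
n_ct, n_control = 144, 144
transitions_ct = round(0.07 * n_ct)            # ~10 patients
transitions_control = round(0.09 * n_control)  # ~13 patients

# Standard two-proportion z-test with a pooled proportion.
p1, p2 = transitions_ct / n_ct, transitions_control / n_control
p_pool = (transitions_ct + transitions_control) / (n_ct + n_control)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_ct + 1 / n_control))
z = (p2 - p1) / se

print(f"z = {z:.2f}")
```

The z statistic comes out around 0.65, nowhere near the 1.96 needed for significance at p < 0.05, which matches the reported null result.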

    So, in other words, no we can't prevent psychosis, not with CT alone at any rate. But there's lots more interesting stuff here...

    Now a transition rate of some 8% over 2 years is lower than in previous studies and might suggest that the concept of the 'psychosis risk syndrome'  or 'at-risk mental state' (under consideration for inclusion in DSM-5) is a bit dodgy. The venerable Prof. Allen Frances thinks so. But he misses the fact that the rate was 18% when you also count the people who went psychotic during the baseline assessments (to be fair to Frances, the authors buried that bombshell quite deep in the Discussion).

    Even so, that's 82% false positives. Is that too high?

    We can't tell, from a study like this. As in any disease screening program, we need to know the relative costs and benefits of true and false 'hits', as well as the percentages of them.
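To make the cost-benefit point concrete, here's a toy calculation. The 18% true-positive figure comes from the study; the payoff numbers are entirely made up, and that's the point: the same false-positive rate can be acceptable or harmful depending on them.

```python
# Toy screening arithmetic (illustrative payoffs only): whether an 82%
# false-positive rate is "too high" depends on the costs and benefits,
# not just the rate itself.
ppv = 0.18  # fraction of flagged patients who truly transition (from the study)

def net_benefit(benefit_true_hit, cost_false_alarm, ppv=ppv):
    """Expected value per flagged patient, in arbitrary units."""
    return ppv * benefit_true_hit - (1 - ppv) * cost_false_alarm

# Same 82% false positives, opposite verdicts once the payoffs differ:
print(net_benefit(benefit_true_hit=10, cost_false_alarm=1))  # positive: screening pays
print(net_benefit(benefit_true_hit=10, cost_false_alarm=5))  # negative: screening harms
```

If flagging a false positive is nearly harmless, 82% of them is tolerable; if (as the CT model itself suggests below) the label can be actively damaging, the same 82% becomes a serious problem.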

    Here's some food for thought on that note. One of the key tenets of the CT model of psychosis is that 'psychotic' symptoms are a more or less normal response to stress, and that psychosis is maintained by a cycle of thoughts and feelings in which these experiences are themselves a source of concern, because they're felt to be abnormal, pathological, or otherwise threatening, thus leading to more stress, and more symptoms, and hence more concern... and so on. CT aims to break that cycle.

    Check it out (image from here, coauthored by Graham Dunn, senior author of the present work.)

    If you accept that, then it seems that literally the worst possible thing you could say to someone in the 'at risk mental state' is "Watch out! You're at risk of going psychotic!" According to CT, exactly that line of thinking is the root of the whole problem.

    The authors of this paper indeed write that "Key ingredients of the approach [include] a focus on normalising psychotic-like experience". But who deemed them abnormal in the first place? The patient, all by themselves... or some well-meaning professional? It's not clear.

    We are told that the patients were "seeking help for symptoms", but why? Of their own accord, or after someone else raised concerns? 45 people were referred to the study but excluded because they said that they didn't want help. So there was at least some degree of professional 'railroading', driven by the idea that people with such symptoms ought to seek help.

    If you accept the CT account of psychosis, then I'd say you ought to think very seriously about whether this whole thing isn't equivalent to giving everyone an X-ray to detect cancers. The X-rays might end up causing more tumours than they find.

    I wonder if the authors of this study considered this.

    Anyway. Keith Laws of LawsNeuroBlog has a good post about the study and the rather overexcited way it's been received in the press (even, er, the BMJ...)
    Despite the authors not being able to make any claims about CT positively affecting transition rates... and the lack of any medication analysis (in fact all patients were unmedicated as an entry requirement) they conclude:
    "On the basis of low transition rates, high responsiveness to simple interventions such as monitoring, a specific effect of cognitive therapy on the severity of psychotic symptoms, and the toxicity associated with antipsychotic drugs, we would suggest that antipsychotics are not delivered as a first line treatment to people meeting the criteria for being in an at risk mental state"
    So the article in the UK Guardian entitled Drugs not best option for people at risk of psychosis, study warns is not simply misunderstanding by a journalist, but what looks like author spinning.... The BMJ press release itself is headlined Cognitive therapy helps reduce severity of distress among psychotic patients - even though the paper (and the press release itself!) clearly states:
    "Cognitive therapy did not significantly affect distress related to these psychotic experiences...nor levels of depression, social anxiety, or satisfaction with life..."

    Morrison, A., French, P., Stewart, S., Birchwood, M., Fowler, D., Gumley, A., Jones, P., Bentall, R., Lewis, S., Murray, G., Patterson, P., Brunet, K., Conroy, J., Parker, S., Reilly, T., Byrne, R., Davies, L., and Dunn, G. (2012). Early detection and intervention evaluation for people at risk of psychosis: multisite randomised controlled trial. BMJ, 344. DOI: 10.1136/bmj.e2233

    Sunday, April 15, 2012

    How A Stroke Changed Katherine Sherwood's Art

    In 1997, American artist Katherine Sherwood was 44 when she suffered a major stroke. She writes about her experience and how it changed her work in a fascinating article just out, How a Cerebral Hemorrhage Altered My Art

    All of the images below are examples of her work, taken from the paper.

    Sherwood writes that she had long been interested in the brain. She incorporated neuroscience themes into her work even before the stroke. Here's a 1990 piece:

    Then, out of the blue, her life was changed:
    The next May I experienced a cerebral hemorrhage affecting the parietal lobe of the dominant hemisphere [i.e. the left side of the brain, which controls the right side of the body]. I lost my ability to walk, talk, read, and think as my right side became paralyzed within the course of 2 min. It happened during a graduate student’s critique... I do not recall saying this but one of my colleagues reported that the last thing I said was “Oh no, not again.” I was referring to the death of my father at age 33 from an aneurysm. This was when my life caught up to my art...
    Six months later after my brain had absorbed my spilled blood I had a cerebral angiogram. Relieved that it was over and the possible second stroke had not occurred, I sat up on the gurney and looked at the computer screen in the corner of the room. The images of the arterial system of my brain both stunned and reminded me of the Southern Song Dynasty Chinese landscape paintings that I had deeply admired. I immediately said without thinking, “I need those images.” The room broke out in laughter which I still do not understand. I repeated, “No, I am an artist and I really need those images.”
    Sherwood never regained the use of her right hand. She had previously relied on her right hand to paint with, and she was forced to learn to use her left, and this led to changes in her style.

    To compensate for the loss of fine dexterity from using her off hand, she started to paint on larger canvases, using different materials and a "freer" approach.

    As a neuroscientist, the main question at the back of my mind reading this was, did damage to her left parietal lobe have a "direct" effect on her mind and personality which altered her artistic process, beyond making her use her left hand etc? Sherwood writes that she's just not sure:
    [some writers] proposed that my new success came from changes in my brain, particularly in the disruption of “the interpreter.” My artist friends vehemently disagreed with this assessment, preferring to believe it had something to do with the 20-years of painting I had done before my cerebral hemorrhage and my ample time to paint while I was recovering. I leave it up to mystery, a category that drives my doctors crazy.

    Link: On a slightly different note see the Neurocritic's Suffering For Art Is Still Suffering

    Sherwood, K. (2012). How a Cerebral Hemorrhage Altered My Art. Frontiers in Human Neuroscience, 6. DOI: 10.3389/fnhum.2012.00055

    Saturday, April 14, 2012

    Fixing Science - Systems and Politics

    There is increasing concern that the structure of modern science is flawed and that most published research findings may be false.
    Commonly cited problems with how science works today include:
    • Publication bias and the file drawer problem.
    • "Result fishing", data dredging etc. - analyzing data in different ways to "get a finding"
    • The privileging of "positive" results over "negative" ones.
    I have previously argued that, to solve these problems, we need a way to ensure that scientists publicly announce which studies they are going to run, what methods they will use, and how they will analyze the data, before running their studies.

    We already have such a registration system in place for clinical trials. It's a good system. It's not perfect but it's helped. I propose we extend it to all science. But how would that work in practice?

    I'm not sure. So what follows is a series of ideas. These are intended to spark debate.

    Here are some options for systems:
      1. There could be a central registry, free and open to the public, where protocols are pre-registered, just as we already have for clinical trials. This registry could also serve as a repository of results and raw data, but it wouldn't have to.
      2. Academic journals could require studies to be pre-registered in order to be considered for publication: you submit the Introduction and Methods, these are peer reviewed, and if accepted, the journal is bound to publish the results when they arrive; the authors for their part are bound to follow their protocol (secondary analyses could take place, but they would be explicitly flagged as such.) and submit the results.
      3. Scientific funding bodies could make all successful scientific grant applications public via an open database. These applications already contain pre-specified methods, hypotheses, and statistical analyses, in most cases; part of this plan could be to make these more detailed.
      4. Authors could have the individual responsibility to publicly announce their methods, hypotheses and plans before starting studies on their own websites.
      How can we actually make this happen? That's a question of politics:
      1. Governments could introduce legislation to force this. This is the most extreme option. It is probably unviable, because it would place researchers in different jurisdictions under different rules. Science is a global enterprise, and we don't have a global legislature. (The USA did this for clinical trials, but for various reasons these are a special case and more 'international' than others.)
      2. A consortium of major scientific journal editors could announce that they'll only publish research that complies with the system. Notably, this was how clinical trial registration started.
      3. A consortium of major funding bodies could refuse to finance research that doesn't adhere to the system.
      4. Individual scientists, journals, and funding bodies could unilaterally adopt the system. This would, at least at first, place these adopters at an objective disadvantage. However, by voluntarily accepting such a disadvantage, it might be hoped that such actors would gain acclaim as more trustworthy than non-adopters.
      My own preference would be for System 1 via a combination of Politics 2 and 3. Yet any combination of these options would be better than the current system.

      Some possible objections:
      1. Pre-registration of all science would be impractical. What about pilot studies and 'tinkering'? - I'm only proposing that any research which might be published, should be publicly registered. This leaves anyone free to tinker away all they like - in private. We just need to be clear, from the outset, whether we're tinkering or doing 'proper' publishable research, a line which is currently very murky.
      2. Many interesting results are unexpected. Post-hoc analysis or interpretation of data is important. - There's nothing wrong with post-hoc analysis or interpretation, so long as everyone knows it was post-hoc. The problem is when it is passed off as being a priori. Registration doesn't seem to have discouraged legitimate post-hoc analyses in the case of clinical trials: there are lots of excellent post-hoc analyses coming out, clearly labelled as such.
      3. It would be unfair to scientists to make them 'tip off' their rivals about what they're working on in advance. It would penalize originality. - My gut instinct here is that this is not a big problem; everyone would be in the same boat so it would be a fair system. However, if this were felt to be a concern, there's an easy solution - just build in a delay to the publication of registered protocols. Put them in a 'sealed envelope' to be opened after a 12- or 24- month 'grace period', and that would give people a head start while ensuring that their original protocol was eventually revealed.
      4. This wouldn't solve all of the other problems with science. - No, it wouldn't, and it's not intended to. However, I do feel that we'll struggle to make progress in other areas without something like this happening. The current system of post-results publication is not the only problem, but it is a large part of it.
      On that note, here's a sketch of how I see this relating (or not) to some other issues in science today:

      Replication - there's been much discussion of late around ensuring the replicability of results in certain fields e.g. neuroimaging studies and psychology too. My view is that most published false (i.e. unreplicable) findings are a product of publication bias and positive result fishing. Solving those problems, as outlined here, would increase the replicability of science. It wouldn't be a panacea. There will always be dodgy results due to fraud, incompetence, and bad luck, but the current system too often rewards scientists for fiddling around until they get a positive one.

      Careers - There is a widespread complaint that the current system of science is unsatisfactory. Our jobs, promotions, funding and tenure depend on our ability to generate high impact papers - which means, in effect, novel and interesting positive results. Pre-registration of science would change the game. Scientists would be judged on their ability to design and run interesting experiments, rather than on their ability to generate 'good papers'.

      Open Access - The issue of free open access to scientific papers is an important one. It's a separate question to the one I've discussed here, but I see a spiritual overlap. In both cases, the fundamental question is: who owns science? At the moment, scientists own their work until and unless they decide to publish parts of it. When they do, they sell it to a publisher, who sells it to the world. In my view, the world should be told about science, from the beginning.


      Fundamentally, this will only happen if a critical mass of scientists want it to happen. It will not be easy, but whereas four years ago I was, deep down, skeptical that it would ever be possible, today I really think it might.

      Already we're seeing signs of hope, from informal pre-registration to calls for pre-registration in particular fields in major journals. 10 years ago, the idea was being written off as impractical, and with the technology available at the time, it probably was. I do not think that is true today.

      Change can happen. All it needs is will.

      Wednesday, April 11, 2012

      Psychology vs Astrology

      Are personality tests any more accurate than astrology?

      A lovely study I just came across examined this question: Science Versus the Stars. The researchers took 52 college students and got them to complete a standard NEO personality questionnaire. They also had to state the date, time and place of their birth.

      Three weeks later, the participants were then given two personality summaries - one based on the personality tests, and one on their astrological chart generated with a computer program.

      The trick was that everyone also got a pair of bogus summaries, one of each kind. These were simply someone else's results, picked at random from the other 51 volunteers. They weren't told which were the fakes and which were real - they had to work it out, based on which one matched them best.

      The results showed that the subjects were no better than guessing when trying to tell which of the two astrology charts was theirs. They were able to pick their own personality scores better than chance, although only 80% of them got it right, and guesswork gets you to 50% - so this is not all that impressive. Psychology beat astrology, but hardly by a landslide.
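To get a sense of how far 80% really is above chance with 52 participants, here's a quick one-tailed binomial calculation. The paper reports the result as a percentage, so the count of ~42 correct out of 52 below is an inference, not a figure from the study:

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of doing at least
    this well by guessing between the two summaries."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 52              # participants
k = round(0.8 * n)  # ~42 correct, assuming the 80% applies to the full sample

p_value = binom_tail(k, n)
print(p_value)  # tiny - well above chance statistically, even if far from perfect
```

So the personality test clears the statistical bar comfortably; the "hardly a landslide" point is about the 20% of people who failed to recognize their own profile, not about significance.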

      This study is a modern update of Shawn Carlson's classic 1985 Nature paper, A double-blind test of astrology. In Carlson's experiment, though, people weren't even able to accurately pick out their own personality scores.

      When asked to say which of the four reports was the best overall match to their personality, 55% of the participants picked their own real personality summary - but no fewer than 35% preferred one of the astrology charts, and 10% went for someone else's personality scores. Hmm.

      The authors say
      the present results represent less of an endorsement of psychological measures than a further indictment of astrology.
      but I think it's interesting that even under very favorable conditions (only one fake personality test), people were well short of perfect accuracy at spotting their own psychological scores - which they had themselves produced by filling out a questionnaire, just weeks before. Whether that tells us more about the NEO test, the participants' memory, or the fact that all the students at Connecticut College are pretty much the same, I'll leave it for you to judge...

      Wyman, A., & Vyse, S. (2008). Science Versus the Stars: A Double-Blind Test of the Validity of the NEO Five-Factor Inventory and Computer-Generated Astrological Natal Charts. The Journal of General Psychology, 135(3), 287-300. DOI: 10.3200/GENP.135.3.287-300

      Tuesday, April 10, 2012

      Homosexuals Are Smart?

      Evolutionary psychologist Satoshi Kanazawa has never been far from controversy. When he's not having his blog cancelled for saying black women are unattractive, he's arguing that some nations just aren't smart enough to be monogamous.

      Given which, his latest work, saying that gay people are smarter on average, is probably his most politically correct paper in years, strange as that may sound.

      In three large population surveys (USA's AddHealth and GSS, UK's NCDS), Kanazawa found a small positive correlation between estimated IQ and self-reported homosexual behaviour or identity.

      Now I'm not sure what to make of this. He controlled for confounds such as race, religion and political orientation (and those correlations are interesting in themselves), but you can never measure and correct for everything in a study like this.

      Kanazawa interprets all this in terms of the Savanna hypothesis, essentially the idea that intelligence allows us to transcend our evolutionary programming (according to which we ought to all be straight, amongst many other things) -
      The Savanna-IQ Interaction Hypothesis (Kanazawa, 2010a), implies that the human brain’s difficulty with evolutionarily novel stimuli may interact with general intelligence, such that more intelligent individuals have less difficulty with [evolutionarily novel] stimuli than less intelligent individuals...
      Evolutionarily novel entities that more intelligent individuals are better able to comprehend and deal with may include ideas and lifestyles that form the basis of their preferences and values; it would be difficult for individuals to prefer or value something that they cannot truly comprehend...
      However, it could be that in America and the UK today, smarter people tend to end up in the kind of social circles where being gay is (for whatever reason) more acceptable.

      My main problem with this is that the effects are very small. For example, in the AddHealth study, IQ in childhood predicted later adult sexual identity with a coefficient of just 0.013... while the association of homosexuality with political attitude (liberalism) was 0.613, nearly 50 times as high. (Edit: But these are unstandardized regression coefficients, so they cannot be directly compared. The coefficients represent the change in homosexuality per 'point' change in the other variable, and IQ spans far more points than political orientation: political attitudes were measured on a 5-point scale in AddHealth, whereas IQ is scored on a scale with a mean of 100. Multiplying each coefficient by the SD of the variable in question, which is one way to correct for this, gives 0.202 for IQ and 0.468 for political attitude, so intelligence is not as far behind politics as I thought. Thanks to a reader for pointing that out, and apologies for the error.)
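The correction in the edit above can be sanity-checked by back-solving for the standard deviations implied by the quoted numbers. A sketch - the SDs below are inferred from the blog's figures, not taken from the paper:

```python
# Unstandardized coefficients quoted in the text
b_iq = 0.013    # change in homosexuality per IQ point
b_pol = 0.613   # change per point of political attitude (5-point scale)

# Standardizing on the predictor: b_std = b * SD(x).
# Back-solving from the corrected figures gives the implied SDs:
sd_iq = 0.202 / b_iq     # ~15.5: plausible, since IQ scales have SD ~ 15
sd_pol = 0.468 / b_pol   # ~0.76: plausible for a 5-point attitude scale

print(sd_iq, sd_pol)
print(b_iq * sd_iq, b_pol * sd_pol)  # recovers 0.202 and 0.468
```

The implied IQ standard deviation of about 15.5 matches the conventional IQ metric, which is reassuring: the reader's correction hangs together arithmetically.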

      The Savanna hypothesis is all very well, but does it predict such small effects? Isn't there a point where very weak evidence in favor of a theory actually becomes evidence against it...?

      Kanazawa, S. (2012). Intelligence and Homosexuality. Journal of Biosocial Science, 1-29. DOI: 10.1017/S0021932011000769

      Sunday, April 8, 2012

      Bigender - Boy Today, Girl Tomorrow?

      An interesting report in (believe it or not) Medical Hypotheses - Alternating gender incongruity: A new neuropsychiatric syndrome providing insight into the dynamic plasticity of brain-sex.

      Bigender individuals report alternating between male, female, and (sometimes) mixed gender states. Case and Ramachandran - that's V.S. Ramachandran of phantom limb fame - write:
      Under the transgender umbrella, a distinct subset of "Bigender" individuals report blending or alternating gender states. It came to our attention that many (perhaps most) bigender individuals experience involuntary alternation between male and female states, or between male, female, and additional androgynous or othergendered identities ("Multigender")...
      But almost no-one's studied the bigender phenomenon -
      A survey of the transgender community by the San Francisco Department of Public Health found that about 3% of genetic males and 8% of genetically female transgendered individuals identified as bigender. To our knowledge, however, no scientific literature has attempted to explain or even describe bigenderism; a search of PsychInfo and PubMed databases returned zero results... the study of this condition could prove illuminating to scientific understanding of gender, body representation, and the nature of self.
      No scholarly paper would be complete without some elaborate new jargon, of course -
      For the purposes of our research we are calling this condition "alternating gender incongruity" (AGI). We seek to establish AGI as a nosological entity based in an understanding of dynamic brain representations of gender and sex.
      So they designed a survey (details in the paper) and sent it to members of a bigender internet forum. The forum had 600 members, although many were lurkers; they got a total of 39 replies. So it's a highly self-selected sample, then, but that's inevitable I think. Here's what they had to say -
      Of the 32 alternating bigender respondents included [some were excluded for diagnoses of DID etc], 11 were anatomically female (identified as female at birth)... One respondent identified as intersex, but only for reasons of androgynous facial appearance...

      10/32 respondents agreed that their gender switches were "predictable." The period of gender switches was highly variable, ranging from multiple times per day to several times per year. A majority (23/32) of respondents, however, reported that their gender switched at least weekly [with 14 saying it switched at least once per day].
      What are the switches like? Some respondents are quoted -
      "I still have the same values and beliefs, but a change in gender is really a change in the filter through which I interact with the world and through which it interacts with me."

      "My voice usually ends up being higher than other times, I’ll be more emotional, my views on things like politics tend not to change, but how I react to certain things does. Like if I’m in male mode and I see someone crying I’ll think more along the lines of, 'Man up...' while if I’m in girl mode I’ll think more along the lines of ‘Oh sweety!’"
      This being Ramachandran, the paper also touches on left handedness, brain hemispheres, phantom genitals and more, but it's fair to say that all this is pretty speculative -
      In myth, art, and tradition throughout the world the left side of the body (and hand) – and therefore the right hemisphere – is regarded as more "feminine" – intuitive and artistic. One wonders therefore whether gender alternation may reflect alternation of control of the two hemispheres. Such alternation is seen to a limited extent even in normal individuals but may be exaggerated (and more directly involve the gender aspect) in AGI...
      Personally, what I find most interesting about this is the question of what would have happened to 'bigender' people before the term 'bigender' came along; it seems to be newer, and certainly less widely used, than 'transgender'/'transsexual'.

      Would they have been identified as transgender? Maybe... but maybe not. Would they have had any label at all?

      Case, L., & Ramachandran, V. (2012). Alternating gender incongruity: A new neuropsychiatric syndrome providing insight into the dynamic plasticity of brain-sex. Medical Hypotheses, 78(5), 626-631. DOI: 10.1016/j.mehy.2012.01.041

      Friday, April 6, 2012

      Neurostimulation - The Genius Machine?

      Do you wish you were smarter? Are you often baffled by puzzles?

      According to Australian neuroscientists Chi and Snyder, all you need is a bit of electric assistance: Brain stimulation enables the solution of an inherently difficult problem.

      In their study, 22 volunteers were faced with the 9 dots problem, a notoriously difficult puzzle. The goal is to draw exactly four straight lines connecting all nine dots of a 3x3 grid, without retracing any line, or lifting your pen from the page.

      Can you do it?
      If not, don't worry; not many people can. None of Chi and Snyder's 22 subjects did it in the 3 minutes before the stimulation was turned on.

      But 5 of the 11 volunteers in the active stimulation group later managed to do it, after 5 minutes of transcranial direct current stimulation (tDCS), a simple form of neurostimulation in which a weak electric current is passed through the head via electrodes attached to either side. The "L− R+" current was designed to boost the right temporal lobe while inhibiting the left, on the hypothesis that the right side of the brain helps us "think outside the box" (literally).

      None of the 11 volunteers in the placebo control group succeeded. They were given tDCS but after 30 seconds it was gradually turned off; this is intended to produce the same tingling sensations as real tDCS, but without affecting the brain. The difference (5/11 vs 0/11) is statistically significant (p=0.018, one-tailed Fisher's exact test), although the numbers are small.
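The quoted p-value can be reproduced from the 2x2 table (5 of 11 solvers with real tDCS vs 0 of 11 with sham) using a one-tailed Fisher's exact test; here's a stdlib sketch of the hypergeometric tail it sums:

```python
from math import comb

def fisher_exact_one_tailed(a, b, c, d):
    """One-tailed Fisher's exact test for the table [[a, b], [c, d]]:
    probability of >= a successes in group 1, with margins fixed
    (the hypergeometric upper tail)."""
    n = a + b + c + d
    row1 = a + b   # size of group 1
    col1 = a + c   # total successes
    a_max = min(row1, col1)
    return sum(
        comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
        for x in range(a, a_max + 1)
    )

# Real tDCS: 5 solved, 6 didn't; sham: 0 solved, 11 didn't
p = fisher_exact_one_tailed(5, 6, 0, 11)
print(round(p, 3))  # 0.018, matching the paper
```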

      The authors also refer to other unpublished data:
      We would like to emphasize the robustness of our finding. The finding that tDCS enabled more than 40% of participants to solve the ‘unsolvable’ nine-dot problem is consistent with our pilot study (see Section 2), which shows that whereas no one solved the nine-dot problem in the sham stimulation condition, 3 out of 7 participants in the L−R+ stimulation condition did so after stimulation. It is also strongly supported by subsequent studies where we, for curiosity, included the nine-dot problem at the end of an unrelated experiment.
      In fact, of all the data we have ever collected by 2 different experimenters over eight months, we found that 0 out of 29 participants in the sham stimulation condition solved the nine-dots problem, whereas 14 out of 33 participants (naïve to the problem) in the L−R+ stimulation condition did so. The probability that by chance 14 out of 33 participants solved the problem is less than 1 in a billion, according to analysis using binominal distribution (assuming that the expected solution rate without stimulation is 5%).
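The "less than 1 in a billion" figure in that quote does check out under the paper's stated assumption of a 5% baseline solution rate - though that's a check of the arithmetic, not a defense of the assumption:

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 14 of 33 stimulated participants solved it; assume a 5% chance rate,
# as the authors do
p = binom_tail(14, 33, 0.05)
print(p)  # well under 1e-9, as claimed
```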
      Hmm. When these guys published similar tDCS results with a different puzzle last year, not everyone was convinced. Critics said that 'Thinking caps' are pseudoscience masquerading as neuroscience:
      Chi and Snyder's participants solved maths puzzles that the researchers claim required "insight", yet crucially the subjects did not perform any other tasks to show that only puzzles requiring "insight" were influenced by the brain stimulation...
      Rather than encouraging novel thinking, maybe brain stimulation made participants less cautious in reaching a decision, or maybe it helped them recall a similar problem seen a few minutes earlier, or maybe it made them temporarily less distractable (or even dulled their hearing), or maybe it boosted general alertness.

      The point is that without appropriate experimental controls, the results are virtually meaningless...
      Personally I don't think this is all that concerning, because the 9 dots problem is really hard and I find it implausible that general alertness would help much; in this study, Chi and Snyder did give people a mental arithmetic task as well, to try to control for such non-specific effects. Everyone did it three times - before, during, and after stimulation. Curiously, however, while they present mental arithmetic data for before and after (showing no effect of tDCS), they don't mention what happened during stimulation. Hmm.

      More worrying is that although the study was placebo controlled, the authors don't say whether subjects were randomly assigned to active or placebo tDCS; if not, that's a major flaw. And although the subjects weren't told which group they were in, at least one of the experimenters must have known because someone was manually controlling the tDCS current switch. Were they in the same room as the subject? Could they have, unconsciously, been influencing them?

      I'll be skeptical until we get some independent replication.

      Here's the 9 dots solution, for those of you who didn't have a tDCS machine handy...

      Chi, R., & Snyder, A. (2012). Brain stimulation enables the solution of an inherently difficult problem. Neuroscience Letters. DOI: 10.1016/j.neulet.2012.03.012

      Wednesday, April 4, 2012

      Co-Vary Or Die

      I've just come across a striking example of why correcting for confounding variables in statistics might not sound exciting, but can be a matter of life and death.

      Imagine you're a doctor or researcher working with HIV/AIDS. You're taking a sample of blood from a HIV+ patient when you slip and, to your horror, jab yourself with a bloodied needle. What do you do?

      In a 1997 study, researchers Cardo et al studied hundreds of cases of this kind of accidental HIV exposure ("needlestick injuries") in medical and scientific workers. They wanted to find differences between the people who contracted the virus, and the ones who didn't.

      One factor they considered was post-exposure prophylaxis - taking HIV drugs as soon as possible after a suspected exposure. Now these drugs were still pretty new in 1997, and it wasn't clear how well they prevented infection, as opposed to just delaying symptoms. Many people with needlestick injuries were offered a course of drugs - but did they work?

      Cardo et al's raw data found no significant benefit:
      By univariate analysis, there was no significant difference between case patients and controls in the use of zidovudine [AZT, the first HIV drug] after exposure.
      But it turned out that this was due to confounding variables. When they corrected for other factors...
      Infected case patients were significantly less likely to have taken zidovudine than uninfected controls (odds ratio 0.19, P=0.003). This is a classic example of confounding, since the adjusted odds ratio differed from the crude odds ratio (0.7) because zidovudine use was more likely among both case patients and controls after exposure characterized by one or more of the four risk factors in the model.
      So while people who took zidovudine were just as likely to catch HIV as those who didn't, they were also more severely exposed to the virus, e.g. by being exposed to a greater quantity of blood, or through a deeper wound. People were more likely to decide to take the drug after severe exposures. Zidovudine actually dramatically reduced the risk.
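Here's a toy illustration of the mechanism - the numbers below are made up for the example, not Cardo et al's data - showing how a protective drug can look useless or even harmful in the crude comparison when treatment tracks exposure severity:

```python
# Per-stratum 2x2 counts: (infected, not infected) for treated / untreated.
# Severely exposed people were more likely to take the drug.
strata = {
    "severe": {"treated": (20, 60), "untreated": (10, 10)},
    "mild":   {"treated": (1, 19),  "untreated": (8, 72)},
}

def odds_ratio(treated, untreated):
    a, b = treated    # infected, not infected with drug
    c, d = untreated  # infected, not infected without drug
    return (a * d) / (b * c)

# Within each stratum the drug is clearly protective (OR well below 1)...
for name, s in strata.items():
    print(name, round(odds_ratio(s["treated"], s["untreated"]), 2))

# ...but pooling across strata (the "crude" analysis) reverses this,
# because treated patients are concentrated in the high-risk stratum.
crude_t = tuple(sum(s["treated"][i] for s in strata.values()) for i in (0, 1))
crude_u = tuple(sum(s["untreated"][i] for s in strata.values()) for i in (0, 1))
print("crude", round(odds_ratio(crude_t, crude_u), 2))  # above 1: looks harmful
```

Stratifying (or, equivalently, adjusting for exposure severity in a regression model) recovers the true protective effect that the pooled table hides.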

      Post-exposure prophylaxis has since become standard procedure, and it has undoubtedly saved many lives. Without statistical correction, it might have taken longer for people to see the benefits.

      In summary, I guess what I'm saying is, remember to correct for confounds - or die.

      Cardo DM, Culver DH, Ciesielski CA, Srivastava PU, Marcus R, Abiteboul D, Heptonstall J, Ippolito G, Lot F, McKibben PS, & Bell DM (1997). A case-control study of HIV seroconversion in health care workers after percutaneous exposure. Centers for Disease Control and Prevention Needlestick Surveillance Group. The New England Journal of Medicine, 337(21), 1485-90. PMID: 9366579

      Monday, April 2, 2012

      When Prophecy Failed

      I've just been reading the classic psychology book When Prophecy Fails.

      Published in 1956, it tells the inside story of a group that believed the world was about to end - and what they did when it didn't. Here's a good summary over at Providentia.

      The investigators, led by social psychologist Leon Festinger, infiltrated a small group (too amateurish to be called a 'cult' - see below) surrounding a Chicago woman called Dorothy Martin, or "Marian Keech" as they dubbed her to protect her identity.

      Martin, a classic 50s housewife, had a long-standing interest in the occult and dianetics. One day, she woke up with a strange sensation in her arm, and soon decided that she was receiving messages from spiritually advanced extraterrestrials by 'automatic writing'.

      After several months of rather generic religious guidance, the aliens informed her that a flood would destroy Chicago, and much of the US, on the 21st December 1954. This was part of a cosmic plan to "cleanse" the earth. She, and a number of other believers, would be evacuated by UFOs shortly before the calamity.

      Festinger and co learned of the group through a newspaper ad warning of impending doom; spying a chance to field-test his ideas, Festinger assembled a crack team of sociology and psychology students to go undercover. Considering that the group only had perhaps 10 real core members, plus another 20 or so less committed sympathizers, the fact that no fewer than 4 investigators became involved is rather remarkable.

      When the 21st dawned and Chicago remained, the core members of the group were upset, but rationalized the failure: the spacemen had called off the flood, because of the positivity shown by the group. The media had picked up the story a few days before the 21st, but at that point the group refused interviews and actively avoided trying to convert people. In the days following the non-event, all that changed, and the previously secretive group became eager to spread the word - though it broke up shortly afterwards.

      Festinger et al's slant on this was that it supported their cognitive dissonance theory; essentially, having to face up to the fact that they'd been wrong would have been painful, so instead they chose to believe that they'd been fundamentally right all along, and sought confirmation for this by trying to get more members. They make much of the fact that those individuals who'd made more concrete commitments to the group (e.g. by selling their possessions or losing their jobs) were subsequently more faithful.

      I wasn't convinced by this, though. Apart from the fact that it's just an isolated case, the group did, after all, break up, just a few weeks after the prophecy failed. While Martin herself seemed genuinely unfazed (and went on to lead a long life in much the same paranormal vein), there's little evidence that the rest remained believers for more than a few days, even the most committed.

      When Prophecy Fails is an amazing human interest story, though. The group is just adorably naive and homely. It's all charmingly 1950s and about as far from the deadly fanaticism of the 1990s Heaven's Gate group as you can imagine.

      It's full of details like the spirit of Jesus solemnly telling the group to take a break for coffee; the declaration that some new mountains formed following the rearrangement of North America would be called the "Argone range" (in honour of the fact that the Rockies etc. "are gone"); and the high school pranksters who phoned the group and announced that they have "a flood in their bathroom, do you want to come over and see it?" - they did.

      Indeed, I couldn't help feeling that the least savory thing about this group was the investigators themselves. Festinger et al notably don't discuss the ethics of their study at all, unlike Stanley Milgram in his classic work from the same era.

      Was it ethical? At least some of the investigators actively lied to gain entrance to the group, by making up stories of their own 'paranormal' experiences. Other than that, the observers seemed scrupulously careful not to encourage the group in their beliefs - but the very fact that they were there, going along with it, was surely in itself a kind of tacit encouragement. Martin herself sounds like her head was far enough in the clouds that she was impervious to any such social influences but I'm not sure about the other members.

      There's also the issue of whether it was unethical to publish the inner secrets of the group just two years after the event; they did disguise the names, but remember, this was all national news when it happened. It would have been easy to work out people's real identities with a bit of digging.

      Overall, I found the book's story fascinating - but I'm not sure I agree with its argument.