Tuesday, January 31, 2012

Voodoo Neuroscience Revisited

Two years ago, neuroscientists were shaken by the appearance of a draft paper showing that half of the published work in a particular field had fallen prey to a major statistical error.

Originally called "Voodoo Correlations in Social Neuroscience", it ended up with the less snappy name of "Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition". I prefer the old title.

The error in question is now known variously as the "circular analysis problem", "non-independence problem" or "double-dipping" although I still call it the "voodoo problem". In a nutshell it arises whenever you take a large set of data, search for data points which are statistically significantly different from some baseline (null hypothesis), and then go on to perform further statistics only on those significant data points.

The problem is that when you picked out the statistically significant observations, you selected the data points that were especially "good", so if you then do some more analyses only on those data, you are almost guaranteed to find something "good". To avoid this you need to make sure that your second analysis is truly independent of your first one.
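To make the circularity concrete, here's a minimal Python simulation - my own toy example, not any paper's actual analysis. Every "voxel" below is pure noise, yet if you first select the significant ones and then summarize only those, you get impressively large correlations out of nothing:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 5000

# Pure noise: no voxel truly correlates with "behaviour".
voxels = rng.standard_normal((n_voxels, n_subjects))
behaviour = rng.standard_normal(n_subjects)

# Step 1: correlate every voxel with behaviour and keep the "significant" ones.
r = np.array([np.corrcoef(v, behaviour)[0, 1] for v in voxels])
threshold = 0.5  # roughly p < .05 for n = 20; an illustrative cutoff
selected = r[np.abs(r) > threshold]

# Step 2 (the circular step): report the mean correlation of only those voxels.
print(f"voxels passing threshold: {len(selected)}")
print(f"mean |r| among selected:  {np.abs(selected).mean():.2f}")
# The mean |r| comes out well above 0.5, even though every voxel is noise.
```

The fix, as above, is independence: estimate the correlation in data that played no part in selecting the voxels (e.g. a held-out half of the trials).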

Anyway, Vul and Pashler, the main authors of the original voodoo article, have just written a short piece in NeuroImage offering some reflections on the paper and the aftermath. They don't make any major new arguments but it's a good read. Particularly fun is their explanation of what inspired them to look into the voodoo problem:
In early 2005 a speaker in our department reported that BOLD activity in a small region of the brain can account for the great majority of the variance in speed with which subjects walk out of the experiment several hours later (this finding was never published as far as we know). The implications of this result struck us as puzzling, to say the least: Are walking speeds really so reliable that most of their variability can be predicted? Does a focal cortical region determine walking speeds? Are walking speeds largely predetermined hours in advance? These implications all struck us as far-fetched...
But they reveal that it was one paper in particular that set them off voodoo-hunting:
Our interest in probing the matter was further whetted by an episode occurring a short while later: Grill-Spector et al. (2006) reported that individual voxels in face selective regions have a variety of stable stimulus preferences; in a critical commentary, Baker et al. (2007) found that the analysis used to ascertain this fact implicitly built these conclusions into the method, such that the same analysis applied to noise data (voxels from the nasal cavity) revealed a similar variety of stable preferences. It occurred to us that a similar circularity might underlie the puzzlingly high correlations.

To their credit, Grill-Spector et al quickly accepted Baker et al's criticism and admitted that some of their original conclusions had been wrong.

Vul, E., and Pashler, H. (2012). Voodoo and circularity errors. NeuroImage. DOI: 10.1016/j.neuroimage.2012.01.027

Saturday, January 28, 2012

The Wriggling Brain

What do we mean when we talk about "the brain"?

Easy, right? It's this:

Certainly, this is the image that comes to my mind.

But this is not an image of a brain. It's an image of a dead brain.

In a living brain, all kinds of interesting things are happening - things we can scarcely begin to imagine. Because they are so hard to visualize, they never enter the mental picture.

To picture the living brain as just a yellowy lump is like picturing Wikipedia as a disc. It's accurate as far as it goes, but it misses the whole point. You could download Wikipedia onto a Blu-ray disc, and then you could describe that disc as "Wikipedia" and you wouldn't be wrong, but Wikipedia is much more than a silver circle.

It doesn't help much that we know that there's more to the living brain than a yellowy lump. Yes, most of us know that the living brain is somehow responsible for thought, feeling, perception, and consciousness.

But we have no idea how it does so; we have no feel for this relationship. We agree with the idea that brain = mind, but that's just an abstract equation. Just as most of us know that E=mc², but only physicists understand it.

All this leads to philosophical problems. Wittgenstein wrote:

Look at a stone and imagine it having sensations. - One says to oneself: How could one so much as get the idea of ascribing a sensation to a thing? One might as well ascribe it to a number! - And now look at a wriggling fly and at once these difficulties vanish and pain seems able to get a foothold here.
What he meant is that we only feel that we can ascribe pain (or any "internal" mental state or event) to something which is behaving "externally".

Now in most cases, that's fine. Most inanimate objects really don't have mental states. But brains do. The brain, we feel, is inanimate; it's just a yellowy lump. By itself, the brain seems like Wittgenstein's stone.

So, we feel, the brain itself can't really have mental states, only walking, talking, behaving people can, like wriggling flies. Except we know on an abstract level that brains do have mental states; so we tie ourselves into philosophical knots about "brains" and "persons", asking whether a person is more or less or the same as a brain, and so on.

The whole problem could be removed, I think, if instead of a yellowy lump, we could picture the living brain in all its active complexity; if we could talk about "the brain", not as an inanimate object, but as the most animate thing in the world.

In the brain there are hundreds of billions of cells, and each one is a hive of movement - not visible to the naked eye or even to a microscope, but the movement of ions and neurotransmitters and ultimately information.

I think many philosophical puzzles would lose their edge if we could somehow get a feel for all that; if we could replace the accurate, but misleading, yellowy lump picture of "the brain" with one that captures the complexity and dynamism of the thing: a city, a hive of insects, a vast machine.

Thursday, January 26, 2012

Lung Cancer and Drug Abuse

Lung cancer and drug abuse have something in common – smoking. The fact is, a lot of people are dying because of cigarettes fired up on a lazy afternoon or after a sumptuous meal.

Smoking is a bad habit evident in both men and women. The nicotine taken in from a cigarette is reason enough to gradually tear a person's respiratory system apart. Aside from smoking, there are other risk factors for lung cancer, including exposure to certain chemicals and heavy alcohol use. Heredity can also be a factor, though that lies in the genes rather than in drug abuse.

Smoking – a silent killer

Many medical experts, doctors included, say that smoking is one of the main reasons why lung cancer develops in the first place. Through smoking, normal lung cells are damaged by the chemicals contained in each cigarette.

Once a person inhales cigarette smoke, it releases carcinogens, which are considered the main contributory factor in the production of immature lung cancer cells. Once inside the lungs, these carcinogens change the composition of lung tissue into abnormal forms.

In the early stages, the body can still repair slightly damaged tissue, but with repeated exposure to the drug or chemical, normal cells are damaged in ever greater numbers. Eventually the damage to the tissue and the organ itself may develop into cancer.

Since the lungs are rich in blood vessels and are surrounded by lymph nodes, cancer cells tend to metastasize easily to other parts of the body, such as the heart and the adjacent lung. Because of this, besides respiratory symptoms, a patient may also show signs in other parts of the body.

Lung cancer is seldom diagnosed in its early stages, so it is often too late to reverse the damage done by the disease; appropriate measures should then be taken to lengthen the person's life or otherwise halt the growth of the cancer cells.

Drug abuse leading to cancer

There are many reasons why people abuse drugs. First is familial tendency: a person may be more likely to acquire the problem than to avoid it, since addiction can run in families. Some claims suggest that the genes of addicted persons may carry factors predisposing them to substance abuse and dependence.

Peer pressure may also be one of the main reasons why people abuse drugs. If you have chosen a bunch of friends who are always living the life of a celebrity – drinking, smoking, and wasting money – you have a 90 percent chance of falling into their lifestyle.

Personality shapes the choices you make. If you are in an environment that contributes more negative energy than positive, you may be drawn to misuse drugs just to escape the depression of the world.

The moment one gives in to drug abuse, there is a bigger chance of developing the disease. Lung cancer and drug abuse are therefore interrelated.

Take Your Placebos, Or Die

People who take their medication as directed are less likely to die - even when that "medication" is just a sugar pill.

This is the surprising finding of a paper just published, Adherence to placebo and mortality in the Beta Blocker Evaluation of Survival Trial (BEST).

BEST was a clinical trial of beta blockers, drugs used in certain kinds of heart disease. The patients were aged about 60 and they all suffered from heart failure. Everyone was randomly assigned to get a beta blocker or placebo, then followed up for 3 years to see how they did.

Here's the big finding: in the placebo group of 1174 patients, the people who took all of their placebo pills on time (the good adherers) were significantly less likely to die than the patients who missed lots of doses. People who took over 75% as directed were 40% less likely to die than those with less than 75% adherence.

That's pretty interesting. The pills were placebos - they can't have had any benefit. So what's going on?

It gets even better. You might be tempted to write off these results as obvious: "Clearly, people who follow the study instructions are just 'healthy' people in other ways - maybe they take more exercise, eat better, etc. and that's what protects them."

Certainly, that's what I'd have said.

But what's remarkable is that when the authors corrected the statistics for all the confounding variables they measured - including things like age, gender, ethnicity, smoking, body mass index and blood pressure - it barely changed the effect. Some of the factors did correlate with adherence, but not in a way that it could explain the adherence effect on mortality.

This isn't the first study to find this effect. The authors themselves have already reported it, as have other researchers going back decades (many of whom also tried, and failed, to explain it through confounding factors). They say that it's unlikely to be a case of publication bias.

So what we have is a large effect, which cannot be causal, yet which can't be explained by any obvious confounds. Logically then, it must be the result of one or more confounds that aren't obvious.
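A toy simulation can show how this works. In the Python sketch below (entirely made-up numbers and a deliberately crude "adjustment", not the BEST data), a single unmeasured trait drives both pill-taking and survival; adjusting for a measured variable that isn't the real confounder barely dents the spurious "protective effect" of placebo adherence:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# A hypothetical unmeasured trait drives both adherence and survival.
hidden = rng.standard_normal(n)

# A measured confounder (call it "age") is unrelated to the hidden trait.
age = rng.standard_normal(n)

# Adherence and death both depend on the hidden trait; the pill does nothing.
adherent = (hidden + rng.standard_normal(n)) > 0
p_death = 1 / (1 + np.exp(-(-2 + 0.3 * age - 0.8 * hidden)))
died = rng.random(n) < p_death

# Crude mortality ratio, adherers vs non-adherers: a large gap appears...
crude = died[adherent].mean() / died[~adherent].mean()

# ...and "adjusting" for age (here, crudely: stratifying on its sign)
# barely moves it, because age isn't the confounder doing the work.
young = age < 0
adj = died[adherent & young].mean() / died[~adherent & young].mean()
print(f"risk ratio, crude:      {crude:.2f}")
print(f"risk ratio, young only: {adj:.2f}")
```

Both ratios come out far below 1, a big apparent survival benefit of taking sugar pills, manufactured entirely by a variable nobody measured.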

This is an important lesson. It's common for someone to do a study and find an interesting / scary / controversial correlation between two things. Often one is some kind of lifestyle factor, diet, environmental exposure, or whatever, and the other is some nasty disease. "And it wasn't explained by confounds!", such studies often conclude.

What the placebo adherence effect demonstrates is that there may be confounds no-one has thought of. They might even be impossible to measure. And if these mystery confounds can literally kill you, they can probably cause all kinds of other effects too.

In other words this illustrates the truism that correlation is not causation - not even when you're really sure it is...

Pressman, A., Avins, A., Neuhaus, J., Ackerson, L., and Rudd, P. (2012). Adherence to placebo and mortality in the Beta Blocker Evaluation of Survival Trial (BEST). Contemporary Clinical Trials. DOI: 10.1016/j.cct.2011.12.003

Wednesday, January 25, 2012

The Hidden Face Within

One of these two images contains a hidden picture of a face. Which one?

This was the question faced by participants in a remarkable psychology experiment just published, Measuring Internal Representations from Behavioral and Brain Data.

Five healthy volunteers were presented with a series of random black and white grid patterns. Each grid square was either black or white, and this was randomly determined on each trial.

There was no pattern to the images; they were completely random. But the subjects were told that half of the patterns contained a hidden face, and that their job was to work out which ones did. Each subject saw over 10,000 random images and they took about 1 second to judge each one.

The volunteers "detected" a face in 44% of the images. Somehow, all five of them convinced themselves that they were seeing faces in many of the grids. The authors say that
Upon completion of the experiment we debriefed observers, and all expressed shock that no face was ever presented.
That's strange enough in itself, but here's the really clever bit. The authors compared the patterns which were declared to contain a face, to the ones that were reported as empty. The image below shows the average "face" grid, minus the average "non face" grid, for each individual subject:

As you can see, this reveals...a face! Kind of. The top half shows the raw average; the bottom half shows the statistically significant differences from random noise.

In Subjects 1 and 2, the face is pretty clear, with eyes, a nose and a mouth. For 3 and 4, it's less coherent, but you might be able to see it if you look hard enough. For Subject 5, not really.

What this means is that people (at least, most of them) were not just seeing faces in any noise. They tended to see faces when the random patterns happened to resemble a kind of primitive face, but it was a different face for each person. The authors say that these strange faces correspond to the individual's internal representations, or models, of "a face", that each subject was "seeing" in the noise.
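The analysis behind that image is a form of reverse correlation, and it's simple enough to sketch in Python. This is a hypothetical miniature version (an 8×8 grid and an invented observer with a built-in template, far cruder than the real experiment): averaging the "face" grids minus the "non-face" grids recovers the observer's hidden internal face.

```python
import numpy as np

rng = np.random.default_rng(1)
size, n_trials = 8, 10_000

# A hypothetical internal "face" template (two eyes and a mouth) on an 8x8 grid.
template = np.zeros((size, size))
template[2, 2] = template[2, 5] = 1   # eyes
template[5, 2:6] = 1                  # mouth
template -= template.mean()

# Random black/white grids; the observer says "face" when a grid happens
# to match the template well enough (plus some decision noise).
stimuli = rng.choice([-1.0, 1.0], size=(n_trials, size, size))
evidence = (stimuli * template).sum(axis=(1, 2)) + rng.standard_normal(n_trials)
said_face = evidence > 0

# Classification image: mean "face" grid minus mean "non-face" grid.
class_image = stimuli[said_face].mean(axis=0) - stimuli[~said_face].mean(axis=0)

# The recovered image correlates strongly with the hidden template.
r = np.corrcoef(class_image.ravel(), template.ravel())[0, 1]
print(f"correlation with template: {r:.2f}")
```

The stimuli carry no signal at all; the structure in the classification image comes purely from which random grids the observer chose to call a face.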

Finally, the whole experiment was conducted while EEG data was being recorded from the participants' brains. The EEG results revealed that there was a clear difference in the neural activity associated with "face" compared to "nonface" stimuli - except in Subject 5, who you'll remember had the least coherent "internal face".

What's exciting about this approach is that it investigates perception in a purely "top down" way. Normally, when we look at anything, what we end up perceiving is a product of "bottom up" influences - the raw data - and "top down" ones - what we expect to see. In this experiment, there was no real "bottom up" data; it was all "top down".

This is a form of pareidolia - perceiving familiar things in random stimuli. Seeing the face of Jesus in your sock, that kind of thing. It works for sounds too: in the famous White Christmas Experiment, people report "hearing" music in pure white noise - when told to expect it. Real-life examples of this include the "Islam Is The Light" doll, and my personal favorite, the singing paedophile Christmas mouse.

I also wonder what embodied cognition theorists make of this paper, because it claims to be "Measuring Internal Representations from Behavioral and Brain Data", while embodied cognition (at least the radical kind) is the theory that "internal representations" either don't exist, or at least don't explain anything about human cognition.

Smith, M., Gosselin, F., and Schyns, P. (2012). Measuring Internal Representations from Behavioral and Brain Data. Current Biology. DOI: 10.1016/j.cub.2011.11.061

Saturday, January 21, 2012

The Trojan Horses of Medicine

Dodgy science is being smuggled into medical journals thanks to a loophole in the regulations, say Italian psychiatrists Barbui and Cipriani in an important article.

They focus on agomelatine, a recently-approved antidepressant. But their point applies to all of medicine, not just psychiatry.

Here's the problem. Nowadays, major medical journals have rules governing systematic reviews and meta-analyses of clinical trial data. If you want to review the evidence about how well a certain drug works, or its safety, you've got to do it properly. You have to consider all of the data, not just focus on the results that suit you. And so on.

However, these rules don't apply to "narrative" review papers, which is a broad term meaning any kind of article meant to give a discussion of the pharmacology, history, chemistry etc. behind a particular drug. For a narrative review, there are no rules.

In particular, you can write about the clinical trial data in such articles with no restrictions. Unlike in a proper systematic review, you can cherry-pick trials and so on to your heart's content. Some narrative reviews have so much clinical data in them that they end up being, in effect, a bad systematic review. One that would never have been deemed acceptable as a systematic review.

Barbui and Cipriani argue that narrative reviews are often used in this way, namely to paint drugs in a positive light. In the case of agomelatine, they mention a number of recent narrative reviews which were supposedly about the drug's mechanism of action, but which actually contained extensive (but biased) reviews of the clinical trial data.

It's not hard to see how pharmaceutical companies might take advantage of this process.

However, the problem is surely not limited to agomelatine. It's a loophole that affects every branch of medicine:
Most medical journals require adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). It is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. Adherence to PRISMA is not required in review articles dealing with basic science issues as these articles are not focused on clinical trials.

In practice, however, the agomelatine case indicates that clinical data are regularly included and reviewed with no reference to the rigorous requirements of the PRISMA approach. These articles have this way became a modern Trojan horse for reintroducing the brave old world of narrative-based medicine into medical journals.
How do we stop this? It's simple, the authors say: just make all references to clinical data subject to PRISMA, or other accepted regulations, whatever the supposed 'primary focus' of the paper:
We argue that medical journals should urgently apply this higher standard of reporting, which is already available, easy to implement and inexpensive, to any form of clinical data presentation.
Of course, there are plenty of good narrative reviews that really do cover the pharmacology or other science in a useful way. The problem is not narrative reviews as such, but the way they're used.

Barbui, C., and Cipriani, A. (2012). Agomelatine and the brave old world of narrative-based medicine. Evidence-Based Mental Health, 15 (1), 2-3. DOI: 10.1136/ebmh.2011.100485

Friday, January 20, 2012

The Age (Cohort) of Autism

New data shed light on the recent mysterious rise in the number of kids being diagnosed with autism.

The new research doesn't explain the increase, but it tells us more about it. It shows that the rise in Californian autism diagnoses (reported to the state DDS) over the period 1996 to 2005 was a cohort effect, meaning that the rates of diagnosis have got higher, the later a child was born.

A child who's 10 today (born 2002) has double the chance of having a recorded diagnosis compared to a 14-year-old born just four years earlier, in 1998.

"That doesn't tell us anything new!" you might object (I did at first). "All that means is that rates have risen, and we knew that already". But actually it does tell us something important. Because the data could have turned out differently; rates could have risen without a cohort effect, if, in recent years, lots of diagnoses were being handed to children regardless of their age.

That didn't happen. Almost all children in California who get a diagnosis, get it at age 3 or 4. In more recent years, the average age at diagnosis actually fell slightly. The peak used to be age 4, it's now 3.

So it's not that children in general have been getting diagnosed with autism more. It's that young children are getting diagnosed more; children aren't being diagnosed "retrospectively", as it were.

Another interesting finding is that the rise in rates of 'high-functioning' autism has been much bigger than the rise in low-functioning autism (i.e. autism alongside intellectual disability), although that has risen as well. Edit: but note that their definition of 'functioning' is rather unusual; see the comments.

So what does this mean?

These data are consistent with various interpretations. It could be that rates of autism have really risen in California over this time period. But it could also be that people are getting more likely to detect and diagnose it - in young children.

Keyes, K., Susser, E., Cheslack-Postava, K., Fountain, C., Liu, K., and Bearman, P. (2011). Cohort effects explain the increase in autism diagnosis among children born from 1992 to 2003 in California. International Journal of Epidemiology. DOI: 10.1093/ije/dyr193

Thursday, January 19, 2012

Challenging the Antidepressant Severity Dogma?

Regular readers will be familiar with the idea that "antidepressants only work in severe depression".

A number of recent studies have shown this. I've noted some important questions over how we ought to define "severe" in this context, and see the comments here for some other caveats, but I'm not aware of any studies that directly contradict this idea.

Until now. A new paper has just come out which seeks to challenge this dogma - not the author's term, but I think it's fair to say that the severity theory is becoming a dogma, even if it's an evidence-based one (but then, all dogmas start out seeming reasonable).

However, while the new paper is interesting, I think the dogma survives intact.

The authors went through the archives of all of the trials of antidepressants for depressive disorders conducted at the famous New York State Psychiatric Institute over the past 30 years. They excluded any patients who were severely depressed, and just looked at the milder cases. The drugs were mostly the older tricyclic antidepressants.

With a mean HAMD17 score of about 14, the patients they looked at were certainly mild. By comparison, most trials today have a mean of well over 20, and according to the main studies supporting the severity dogma, you need a score of about 25ish to benefit substantially.

So what happened? They reanalyzed 6 trials with over 800 patients. Overall there was a highly significant effect of antidepressants over placebo in mild depression, with an effect size d=0.52, or about 3.5 HAMD points. This is actually better than most other studies have found in "severe" depression. If valid, these results would torpedo the severity theory.

This seems very interesting... but. There's a big but (I cannot lie). Although the authors say they wanted to include all the relevant trials from the NYSPI, they only had access to the data from 6. There were another 6 projects, but they were "pharmaceutical company studies from which data were not released to the investigators."

This pretty much wrecks the whole deal. If those 6 studies all found no benefit of the drug, the overall average results would be much less impressive. We have no way of knowing what those studies found, but I'd wager that most of them were negative, because of publication bias - we know that drug companies tend to publish positive studies and bury negative ones. Or at least they did, at the time these studies took place (there are better regulations now).
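A back-of-the-envelope calculation shows just how much those missing trials could matter. The numbers below are purely illustrative - we don't know what the withheld studies found - but if the 6 available trials averaged d = 0.52 and the 6 unreleased ones were null, the honest pooled effect would be about half the reported one:

```python
import numpy as np

# Toy numbers, not the actual trial data: suppose the 6 released trials
# each found d = 0.52, and the 6 withheld trials each found d = 0.0.
# (For simplicity, treat all 12 trials as equally sized.)
published = np.full(6, 0.52)
unpublished = np.full(6, 0.0)

pooled_published = published.mean()
pooled_all = np.concatenate([published, unpublished]).mean()

print(f"pooled d, published only: {pooled_published:.2f}")
print(f"pooled d, all 12 trials:  {pooled_all:.2f}")
# Half the trials missing can halve the apparent effect size.
```

On that pessimistic (but, given publication bias, hardly implausible) assumption, the effect in mild depression drops from "better than severe depression" to distinctly modest.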

By contrast, severity dogma classic Kirsch et al (2008) avoided publication bias by looking at unpublished data. Fournier et al (2010), the other major severity study, didn't, but the data were very similar to Kirsch et al so it's not hard to believe them.

So in my view, until we know what happened in the other 6 trials, we can't really interpret these results, and the severity theory stands.

Stewart, J., Deliyannides, D., Hellerstein, D., McGrath, P., and Stewart, J. (2011). Can People With Nonsevere Major Depression Benefit From Antidepressant Medication? The Journal of Clinical Psychiatry. DOI: 10.4088/JCP.10m06760

Wednesday, January 18, 2012

Neuroskeptic In The Papers

Two more academic papers have appeared that refer to this blog:

The Openness of Illusions is a philosophy piece about the epistemological implications of optical illusions. It cites my post about a paper dealing with the spooky Hollow Face Illusion. Long-time readers will remember this, but most of you probably won't, so here it is again; it truly is weird:

In my view, an even better demonstration of the same effect is the incredible magic dragon:

You can make your own dragon by printing it out from this helpful page. It takes like 5 minutes to make and it'll provide hours of philosophical fun.

Meanwhile, Stereotypes and stereotyping: What's the brain got to do with it? takes a neuro-skeptical look at the psychology and neuroscience of prejudice. It flatters me with a mention:
It should go without saying that activity in the brain does not indicate in any way whether a mental act is hard-wired (Beck, 2010). It is equally absurd to argue that the amygdala is on anyone’s team or feels occasionally upset. Alas, non-experts should not be expected to spot such fundamental flaws in reasoning without help.
Thus scientists using neuroscientific methods to study phenomena of social relevance are not only expected to be particularly critical towards over-interpreting their own findings, but also to monitor the ways in which their and other researchers’ data are reported in the media. Courageous attempts to counter overblown neuroscience-based claims in non-scientific outlets have so far resulted in numerous critical blogs (e.g., http://www.talkingbrains.org; http://neuroskeptic.blogspot.com) as well as in the publication of counterstatements in popular magazines (e.g., Aron et al., 2007).

Monday, January 16, 2012

Rewarding drug rehab facilities

The primary objective of all drug rehab centers is to empower you to fight off the dreaded scourge of the century: drugs. The end result should be a decisive win over the monster and a return to sober living. It is certainly unfortunate that some money-hungry businessmen have entered the rehabilitation world without any focus on social benevolence. That is the sole reason why an addict may not get the right outcome even after spending massively.

Luxury Drug Rehab has emerged as a leading force in the realm of drug rehabilitation. They are geared up to extend support in various ways, including finding the right drug rehabilitation center exclusively for you.

When your objective is to bid adieu to a disastrous drug addiction, you should never take any risk.

In fact, the objective of Luxury Drug Rehab blends perfectly with your ultimate goal, which is to get rid of drugs for good. It is quite simple: find the best-suited drug rehabilitation facilities through them and choose the one that suits you most. They can even recommend the right one, along with the right drug treatment program, without which you can never ensure a proper recovery. So why wait any further? Perhaps you have close associates who have made a mockery of your struggle. Now it is your turn to prove them wrong.

About the Author:

This article was written by Dr. Naina.

Friday, January 13, 2012

Dolphins who Dream of Whales

Once in a while you come across a paper that can only be described as lovely. This is one: Do dolphins rehearse show-stimuli when at rest?

Five dolphins lived in a certain aquarium in France. Every day, they put on shows for people - jumping around, that kind of thing. One day the aquarium started playing a 20-minute clip of "intro music" for the show. This consisted of various oceanic sounds including sea birds, dolphin noises and some whale-song.

What happened next was amazing. About a month after they brought in the intro sounds, the researchers noticed some odd sounds coming from the dolphins, late at night. It turned out that the dolphins had started making whale noises.

They only did this at night, mostly between 1 am and 3 am, when they were resting, possibly even sleeping. No-one trained them to do this. The "atypical vocalizations" were much lower than the dolphins' normal whistles, and also lasted longer.

Unfortunately, it wasn't possible to tell how many of the dolphins did this.

The authors recorded the dolphins' whale impressions with an underwater microphone, and played them back to a sample of 20 biologists, who weren't told the hypothesis of the study. Many of them thought they were whale-song, especially when the clips were slowed down to half-speed, dolphins' voices being "higher" than whales'.

Why the dolphins did this is a mystery. All of them had been born in captivity, so they'd never encountered a real whale. One theory is that they were mentally rehearsing the events of the day to come. Maybe they were even dreaming about them and "talking in their sleep" - although this is unclear, because it's not known whether dolphins dream; they don't sleep in the same way we do.

The paper's open access and it even comes with some audio clips of the dolphins, although unless you're familiar with what they sound like normally these aren't very meaningful.

Kremers, D., Jaramillo, M.B., Böye, M., Lemasson, A., and Hausberger, M. (2011). Do dolphins rehearse show-stimuli when at rest? Delayed matching of auditory memory. Frontiers in Psychology, 2. PMID: 22232611

Wednesday, January 11, 2012

Do Brain Scans Sway Juries?

Does seeing a criminal's brain affect jury decisions?

Edith Greene and Brian Cahill ask this question in a new study which put volunteers in the position of jurors in a murder trial. The 'defendant' was guilty, but the question was: should they get life in prison, or death?

It turned out that seeing brain scans didn't have much of an effect - but it's not clear how far the results would generalize.

208 mock-jurors were randomly assigned to get different kinds of mitigation information about the accused. Sometimes, all they were told was that he had been diagnosed with schizophrenia, depression and a substance misuse disorder. Others were also given neuropsychological test scores showing that he did poorly on various tests of reasoning and cognition. Finally, some were shown brain scans on top of all that, scans which were described as showing left frontal lobe damage.

All these materials were based on a real 2007 court case.

What happened? When the defendant was said to have been assessed as probably "dangerous" in future, people who were only told his diagnosis of schizophrenia usually sent him to the chair. But when they were given his psychological test scores - showing that he suffered from cognitive impairments - they were far more lenient. Seeing the neuroimages had no effect on top of that.

If the guy was described as posing a low risk of future violence, the verdicts were lenient, no matter what else they were told about him. In the real case, by the way, he got life.

This suggests that brain scans don't exert a seductive allure on jury decisions, at least not over-and-above psych test scores. But I'm not sure how representative the results are. The 'jurors' were all psychology undergrads. Most were Hispanic (63%) females (67%). Are psychology students especially resistant to the allure of brain scans - and/or especially vulnerable to the allure of psychological test scores? No-one knows, but it's surely plausible.

On some level, neuroimaging evidence clearly can influence people's decisions, like any other evidence; lawyers wouldn't bother presenting it otherwise. The question is how much of an impact it has, but that is surely going to depend on the details of the case as well as the juror's background; I'm not sure how much a study like this one, focussing on one example, will be able to tell us.

ResearchBlogging.orgGreene E, and Cahill BS (2011). Effects of Neuroimaging Evidence on Mock Juror Decision Making. Behavioral Sciences and the Law PMID: 22213023

Tuesday, January 10, 2012

The Plight of Psychoanalysis?

A New York psychoanalyst reveals her concerns about the profession in A Letter to Freud: On the Plight of Psychoanalysis

Dinah M. Mendes's letter covers several topics, but I was struck by the sections that deal with the contemporary challenges facing American analysts. She paints a rather sad picture of analysts who spend years in training, only to find a shortage of people out there who want their treatment:
At psychoanalytic training institutes it is often difficult for candidates to secure control or training cases—prospective analysands who sign on with analysts-in-training, usually at a low rate (sometimes as low as $10 a session). Here the issue is not the cost of the analysis but the low valuation of the opportunity offered—what might be regarded as the gift of self-knowledge.

The gratifications of instantaneous communication—texting, Facebook, and blogging—are immediate and obvious and erode the value of the slow and arduous route to communication and understanding offered by psychoanalysis. We seem to be transfixed in our culture by the allure of performance and public presentation, and a climate in which the exterior signifies the interior, where what you see and hear is what is true and real (no matter how often this fantasy is belied) is not receptive to the ideals of psychoanalysis.
She goes on to examine the increasing popularity of psychodynamic psychotherapy, an approach which draws on Freud's ideas but is much shorter (and hence cheaper) than classical psychoanalysis, which involves hourly sessions, three times per week, over a period of years -
To judge from the mushrooming of new institutes of psychotherapy and shorter training programs within established psychoanalytic institutes, many people are interested in becoming psychotherapists, while there are fewer candidates for traditional psychoanalytic training and for psychoanalysis as a treatment choice.

For those who elect full-scale psychoanalytic training, the supply of certified psychoanalysts exceeds the demand in the population, and as psychotherapists they compete with psychotherapists of all stripes and denominations. The analytic institute can feel like a sequestered haven in which psychoanalysis is an “in house” specialty, tendered by training analysts (who have to earn their institutional stripes) to analytic candidates...

In my years of training, the contemporary challenges facing the would-be practitioner of psychoanalysis were rarely if ever openly addressed, although many recent graduates find themselves with few and sometimes no analytic cases...
All this, she says, can be seen in the context of
A zeitgeist in which the intrinsic and often intangible value of knowledge and education, and of self-knowledge and self-examination, has been supplanted by the appeal of material and pragmatic goals.
Of course this is all anecdotal. I wonder if any analysts amongst my readers have thoughts on this?

ResearchBlogging.orgMendes, D. (2011). Letter to Freud: On the Plight of Psychoanalysis The Psychoanalytic Review, 98 (6), 755-774 DOI: 10.1521/prev.2011.98.6.755

Monday, January 9, 2012

Men and Women - Alien Personalities?

How different are men and women? Are they from two different planets?

In the cleverly-titled The Distance Between Mars and Venus, the authors argue that personality-wise, the differences between men and women have been underestimated by previous studies because they used simplistic statistics.

Traditional studies of gender and personality have given some men and some women a personality quiz, and calculated the average male and female scores on the different aspects of personality.

When you do this you find that there are differences, but that the standardized effect sizes are fairly small, which means that there is a lot of overlap. Even on measures where men score above women on average, lots of men score below the female average, and vice versa.

Traditional studies of overall gender differences have looked to see the differences between the average man and woman on each personality aspect, and then averaged the differences on each scale to get an "overall difference" score. Which comes out as fairly small.
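To see how much overlap a "small" difference implies, here's a quick sketch - assuming normal distributions, and using an illustrative effect size of d = 0.5 (my number, larger than most single-trait gender differences in these studies, not a figure from the paper):

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

# Illustrative standardized mean difference (Cohen's d); not from the paper.
d = 0.5

# If men average d standard deviations above women on some trait, this is
# the proportion of men who nonetheless score below the female average.
overlap = normal_cdf(-d)
print(f"Proportion of higher-scoring group below the other group's mean: {overlap:.1%}")
```

So even at d = 0.5, roughly three in ten members of the higher-scoring group fall below the other group's average on that trait.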

The authors of the new paper say that this approach fails to capture the true difference and they give a helpful analogy of why:
Consider two fictional towns, Lowtown and Hightown. The distance between the two towns can be measured on three (orthogonal) dimensions: longitude, latitude, and altitude. Hightown is 3,000 feet higher than Lowtown, and they are located 3 miles apart in the north-south direction and 3 miles apart east-west.

What is the overall distance between Hightown and Lowtown? The average of the three measures is 2.2 miles, but it is easy to see that this is the wrong answer. The actual distance is the Euclidean distance, i.e. 4.3 miles – almost twice the "average" value.
The main novel argument of this paper is that if you calculate the distance (technically the Mahalanobis distance) in 'personality space' between men and women then you get a larger value than if you just average the differences on each measure.
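The town analogy, and the step from averaging differences to combining them as a distance, can be checked in a few lines. The 15-trait figures below are illustrative numbers of mine, not the paper's; note also that with uncorrelated traits the Mahalanobis distance reduces to the plain Euclidean distance, while correlated traits would require the inverse correlation matrix:

```python
import math

# --- Hightown vs Lowtown, from the paper's analogy ---
altitude_miles = 3000 / 5280            # 3,000 feet converted to miles
dims = [altitude_miles, 3.0, 3.0]       # altitude, north-south, east-west

average = sum(dims) / len(dims)
euclidean = math.sqrt(sum(x * x for x in dims))
print(f"average of the three measures: {average:.1f} miles")
print(f"Euclidean distance:            {euclidean:.1f} miles")

# --- The same logic in 'personality space' ---
# Illustrative: 15 traits, each showing a modest standardized difference of 0.3.
# Assuming uncorrelated traits, the Mahalanobis distance equals the Euclidean one.
ds = [0.3] * 15
naive_average = sum(ds) / len(ds)                    # stays at 0.3
global_distance = math.sqrt(sum(d * d for d in ds))  # much larger
print(f"average d: {naive_average:.2f}, multivariate D: {global_distance:.2f}")
```

With these made-up numbers the per-trait average stays small while the multivariate distance is several times bigger - which is the paper's point in miniature.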

The paper also uses a couple of other methods that increase the effect sizes, namely using 15 different personality measures instead of the more common Big 5, and adjusting the differences upwards to take account of the fact that quizzes only imperfectly measure underlying 'latent' personality traits.
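That upward adjustment is the standard correction for attenuation: you divide an observed standardized difference by the square root of the measure's reliability to estimate the difference on the underlying latent trait. A sketch with made-up numbers (both the d and the reliability here are mine, not the paper's):

```python
import math

def disattenuate(d_observed, reliability):
    """Estimate the latent-trait standardized difference from an observed one,
    given the measure's reliability (0 < reliability <= 1)."""
    return d_observed / math.sqrt(reliability)

# Illustrative values: a modest observed difference on a fairly noisy scale.
d_obs = 0.30
reliability = 0.70
print(f"latent d is roughly {disattenuate(d_obs, reliability):.2f}")
```

The noisier the questionnaire, the bigger the inflation - a perfectly reliable measure (reliability = 1) leaves the observed difference unchanged.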

I don't want to get into the debate over how valid the underlying data are (a 1993 sample of over 10,000 American adults, used to standardize the 16PF questionnaire). There are lots of technical comments here. I'm going to focus on the distance method.

It's a very interesting approach and certainly raises questions about the merits of the old approach, which, when you think about it, does seem a bit crude. But I'm not sure that the average person is talking about distance in a hypothetical space when they talk about "personality differences".

As an analogy, consider the dog breeds Labrador and Golden Retriever. These are regarded as being pretty similar kinds of dog. On any given feature, the average differences are small, at least compared to the diversity of other breeds. They're roughly the same size, much the same build, coat type etc.

They are distinct breeds. This surely means that when you take all of the differences together, they define distinct regions of "dog space" (which has dozens or hundreds of dimensions), with little or no overlap.

Yet they are still regarded as similar. "Similar" and "distinct" are not mutually exclusive. In fact, isn't the definition of 'distinct yet similar' that two things separate in some kind of feature-space, but don't differ much on any one measure?

So I would say that these data show that, while men and women may be distinguishable in personality, they could still be similar. This is something of a semantic point but not "merely" semantic: it changes the interpretation of the numbers.

J. S. Hyde, who is most associated with the view that gender differences are small, makes a similar (or do I mean distinct?) point in her comment on the paper:
The gender difference found is along a dimension in multivariate space that is a linear combination of the original variables transformed into latent variables...[but] the resulting dimension here is uninterpretible. What does it mean to say that there are large gender differences on this undefined dimension in 15-dimensional space created from latent variables? The authors call it global personality, but what does that mean?
Her questioning of what the direction along which men and women differ means, is (I think) the same question I'm asking about whether it disproves the idea of "similarity", in the ordinary sense of the term.

Finally, take a step back and the whole debate seems a bit circular because, by definition, "personality" means "things that differ between individual people". Things we all have in common aren't even in the picture. Two groups could differ in personality space but still be very close in the much larger space of "possible creatures". There's no personality trait for 'being human'.

ResearchBlogging.orgDel Giudice, M., Booth, T., and Irwing, P. (2012). The Distance Between Mars and Venus: Measuring Global Sex Differences in Personality PLoS ONE, 7 (1) DOI: 10.1371/journal.pone.0029265


Saturday, January 7, 2012

The Real Story On That "Antidepressant Surge"

Remember last week's story about how depression rates are soaring in Britain? It was all triggered by "new data" about an increase in antidepressant prescriptions.

At the time I was skeptical, not least because the data wasn't actually new, but I've done a bit more digging and it turns out the media coverage was even more misleading than I thought.

Here's some pretty graphs from the NHS Information Centre. I reiterate that all of these are freely available and have been for ages. Here's the one for antidepressants:

They've been rising strongly! In total, prescription rates are about 60% higher now compared to 2006. Oh dear.

What the papers didn't tell you is that pretty much every other class of drug has also increased over that period, by even more in some cases. Here's ADHD drugs, and dementia pills, which have increased by about 75% and 100% respectively:

There have also been steady increases in anticonvulsants and a 40% increase in meds for Parkinson's disease. All of the graphs are here.

So this suggests that there's been a general increase in prescriptions for brain drugs. But in fact it's even wider than that because if we look at the same data for cardiovascular system drugs, we find the same picture for most (although not all) kinds of these medications.

And for painkillers, we find over 50% increases in prescriptions of the stronger opioid drugs, a 20% increase in migraine drugs etc etc. I swear I'm not just copying and pasting the same graph.

Now clearly, all of these increased prescriptions don't mean that there are simultaneous explosions in rates of dementia, heart disease, pain, migraine, Parkinson's and ADHD, all in the past 5 years. We would have noticed if that were the case.

What's happened, clearly, is that doctors are just writing more prescriptions nowadays.

So it's misleading to say that there's been a spike in antidepressant prescriptions. Yes it's technically true but it ignores the context. The truth is that we seem to be experiencing a cultural shift in our relationship to medications - perhaps evidence of the creeping medicalization of life (although there are more prosaic explanations that need to be ruled out before we conclude that; this could be a bureaucratic change in the way prescriptions are counted.)

However, "Escalating Depression Crisis - Antidepressant Use Soars" is a better headline than "Possible Medicalization Gradually Continues For Sixth Year In Row".

The truth, sadly, has an inherent disadvantage in the battle for news coverage. If we find the truth boring, it's easy for someone to come along and make up something attention grabbing. But the only easy way to make the truth more interesting is to make it, well, less true.

Friday, January 6, 2012

Do You Have Free Will?

I mean you, specifically.

I'm not asking whether people have free will. I think they do - except you. You're the one person on earth who doesn't have it.

If you disagree - how would you convince me that you do have it?

Alternatively, if you're one of the people who doesn't believe in free will - I agree with you, people don't have it... except me. I'm special.


This might seem like one of those thought experiments that only philosophers could care about, but it's of more everyday importance than the general question of 'whether we have free will'. That's an interesting debate, but really it doesn't change anything. If we have it, we always have, and if we don't, we never will. Either way, here we are, and we'd better get on with our lives.

On the other hand, the question of whether an individual has free will has real consequences. It can even be a matter of life or death. It comes up in court cases. Lawyers and psychiatrists don't use the words "free will", they'll talk about responsibility or capacity or sound minds, but what they mean, in essence, is what the rest of us mean by the term free will.

In some of these cases, free will is what you want - if, say, psychiatrists want to confine you to a hospital, and you say you don't want to be there. Other times it's the reverse - if you've committed a crime, and your defense is that some kind of mental or neurological illness made you do it, then you're arguing that you don't have it (or didn't, at the crucial moment.)

But while lots of people have opinions on the abstract "Free Will" question, I don't think many people pay attention to the issue of their own free will or lack of it - until they end up in court. We just assume that if everyone else has it, so do we, and vice versa.

Yet how can we be so sure? Everyone accepts that people differ in how much free will they have. When people say "I believe people have free will", they don't really mean all people - they surely make an exception for babies, people in a coma, people having a seizure, and probably children, people with dementia, people with severe mental illness... Likewise, people who don't believe in free will recognize that there's a difference between a normal adult and one of those people.

But where do we draw the line, and how do you know which side you're on? These seem to me the most important questions to be asking about free will.

Wednesday, January 4, 2012

Hot Sex Prevents Breast Cancer

Breast cancer is caused by sexual frustration. Women should ditch their unsexy husbands and find a real man to satisfy them if they want to reduce the risk of the disease.

That's according to An Essay on Sexual Frustration as the Cause of Breast Cancer in Women: How Correlations and Cultural Blind Spots Conceal Causal Effects, a piece that was published today in The Breast Journal.

Wtf. Really -
Endocrinological processes are important targets in breast cancer research. These processes are also important in human sexual behaviors. I hypothesize that these processes are capable of adjusting or distorting biological active forms of specific sex hormones depending on experienced sexual stimuli. These aberrantly metabolized sex hormones will ultimately lead to breast cancer.

...My thesis is that breast cancer is essentially caused by sexual frustration. The focus of this hypothesis is aimed at the (un)consciously experienced tension and sexual dissatisfaction between the chosen mate based on socio-economic, intellectual, ethnic or cultural motives and the nonchosen potential mate who has more appealing sexual incentive properties.

In most western societies the improved economic independence of women has not changed to such a degree that long-term partners are chosen entirely according to sexual incentive properties. If the selected partner has no or weak sexual incentive properties for the other member of the couple, it is likely that sexual frustration will follow in the long run (6), which ultimately will cause breast cancer in some women...

...higher socio-economic group of women pay more than average attention to the assets or status of the potential partner(7)....The chances of some women from higher socio-economic classes to find a sexually compatible mate are considerably reduced. This is due to an often self-imposed very limited range of potential partners. In this group of women, high status of the potential partner compensates for the acceptance of physically less attractive men (9)...

...These women have a disadvantage because they have a smaller pool to choose from if they want a man they will not tower over. This increases the chances to settle for a sexually incompatible partner...
There are 15 references, but they're all about sex, not cancer. Thus we get a citation to support the statement that "If the selected partner has no or weak sexual incentive properties for the other member of the couple, it is likely that sexual frustration will follow in the long run (6)", but not for the rather more controversial idea that disappointment in the bedroom somehow leads to malignant mutations in the DNA of cells of the mammary epithelium.

Well, except the line that "aberrantly metabolized sex hormones" are responsible, which is the scientific equivalent of waving your hands and saying "woo".

How did this happen?

The Breast Journal, so far as I can see, publishes lots of sensible research. It may not be a major journal but it's MEDLINE indexed and ranked 143/184 for impact in the field of oncology, which means there are 41 cancer journals in the world that have less impact than it.

If I had published there, I'd be a bit miffed that my work was appearing in the same pages. Thankfully I haven't but as a scientist I'm still insulted that this has been published in a scientific journal, and will be appearing on the shelves of libraries around the world under the heading "science".

ResearchBlogging.orgStuger, J. (2011). An Essay on Sexual Frustration as the Cause of Breast Cancer in Women: How Correlations and Cultural Blind Spots Conceal Causal Effects The Breast Journal DOI: 10.1111/j.1524-4741.2011.01206.x

Tuesday, January 3, 2012

Antidepressants: Bad Drugs... Or Bad Patients?

Why is it that modern trials of antidepressant drugs increasingly show no benefit of the drugs over placebo? This is the question asked by Cornell psychiatrists Brody et al in an American Journal of Psychiatry opinion piece.

They suggest that maybe it's the patients' fault:
Participation that is induced by cash payments may lead subjects to exaggerate their symptoms [i.e. in order to get included into the trial]... Another contributing factor to high placebo response rates may be the extent to which the volunteers in antidepressant trials are really generalizable to patients in clinical practice.
Since the initial antidepressant trials in the 1960s, participants have gone from being patients who were recruited primarily from inpatient psychiatric populations to outpatient volunteers who are often recruited by advertisements. At times, these symptomatic volunteers have participated in other trials. When we contact potential participants to schedule screening, they often ask to be reminded which trial we are screening for or mistake our research trial for a different protocol in which they recently participated.
They then recount the tale of two "professional subjects" who claimed to be depressed and enrolled in two antidepressant trials simultaneously, without telling the researchers; it only came to light when someone involved in both studies spotted the duplicate names.

I've been the victim of such nonsense myself, as have many of my colleagues - it's a perennial watercooler topic. A few years ago I was running a study recruiting people who'd recovered from psychiatric illness. The main source of volunteers was online adverts.

That study was a learning experience. What I learned is that House was right. We recruited about 20 people. No fewer than 3 turned out to have enrolled in other studies and lied about it. After I realized this I Googled the offenders' names and two of them turned up in the court pages of the local newspaper pleading guilty to various petty crimes.

Another volunteer was left handed and, upon realizing that I was only recruiting right-handed people, discreetly switched his pen to his right hand and then took 5 minutes trying to fill out a form with his off hand. He didn't make it in, but if I hadn't been paying attention he would have.

So yes, it is a problem. However, it would have to be taking place on a massive scale to have a significant effect on antidepressant trial results, and that really seems pretty unlikely.

In my view, the authors miss out on the real problem with recruiting depressed people through adverts:  depressed people don't tend to respond to adverts, because depressed people don't do anything. That's why they call it depression.

Getting recruited into a modern clinical trial is actually quite a challenge. There are many pieces of paper to fill in, calls to return, appointments to attend. Turn up late to the screening visit, or otherwise make life difficult for the study staff, and you'll be marked down as "unreliable" and they'll find someone who plays by the rules. Modern trials are very expensive. The last thing a study sponsor wants is a volunteer who will end up forgetting to take their pills on time.

Depression, unfortunately, makes you bad at doing things. You procrastinate, you forget, you put things off until too late, you have a change of heart and decide not to, you get cold feet, you can't be bothered... That goes for things as simple as cooking dinner in severe cases, let alone something as complicated as taking part in a trial.

So while you wouldn't go looking for aquaphobic people in a swimming pool, I'm not sure we should be looking for depressed people through adverts.

ResearchBlogging.orgBrody B, Leon AC, and Kocsis JH (2011). Antidepressant clinical trials and subject recruitment: just who are symptomatic volunteers? The American journal of psychiatry, 168 (12), 1245-7 PMID: 22193668