Saturday, March 31, 2012

DSM-5: A Little Mix Up

Proposals in the upcoming DSM-5 psychiatric manual for diagnosing "mixed" mood states may be muddled, according to a new paper.


The mixed state - the name alluding to a mix between depression and mania - has traditionally been viewed (more or less) as combining the dysphoria of depression with the energy of mania. Anger, agitation, restlessness and so forth.

I've been depressed and I know only too well the difference between that "active" depression and the "inactive" kind; if I had to choose, I'd always go for the latter, because at least you're in less danger of doing or saying something you later regret.

However, in the proposals for DSM-5, "mixed" episodes as such will be abolished. Instead, a depressive episode will have "mixed features" if it is associated with at least 3 of 7 symptoms normally seen in (hypo)mania. But - and here's the key novelty - those 7 are only the "good" symptoms of mania. Not things like anger, irritability, insomnia or 'aimless' hyperactivity. (Edit: There are also separate criteria for "mixed" manic and hypomanic episodes).

What will this mean? In a new paper, psychiatrists Perlis, Cusin, and Fava tried to find out. The large STAR*D antidepressant trial recruited people with depression, and it gave everyone the Psychiatric Diagnostic Screening Questionnaire (PDSQ), amongst many other measures. This helpfully included six items on "mania symptoms", which correspond pretty closely to the proposed DSM-5 "mixed" features.

Perlis et al found that depressed patients who reported experiencing these "mixed" items had a better response to antidepressant treatment. The more mixed symptoms, the more likely they were to get better on the common SSRI citalopram, even adjusting for other variables.


That's the exact opposite of what you'd expect from a measure of "mixed states", as these are thought to be less responsive to antidepressants - maybe even caused by them. There was no placebo group, so it's unclear why they got better, but either way, it's unexpected; the authors declare themselves "surprised". Hmm. What a mystery...

Or maybe not. These manic symptoms are all things that you're not when you're depressed. The 6 items actually make a good summary of what depression - even agitated depression - isn't (with the possible exception of #6).

So, one interpretation of these results is that people who endorsed these items just weren't depressed at some point in the 6 months prior to doing the PDSQ. Assuming they were depressed at other points, that means their mood was variable over time.

People whose depression is variable might well be more likely to recover than the ones whose depression was unrelenting.

Now Perlis et al do consider this -
further models were fit incorporating the IDS-C30 pleasure and reactivity items; results were essentially unchanged indicating that they are unlikely to be confounded by mood variability per se...
But this assumes that the IDS-C30 questionnaire is a good measure of mood variability in this sample. Maybe it's not, and these data are telling us so. I'd have said that's more likely than the idea that these people were actually both cheerful and depressed at the same time, which seems like a contradiction in terms.

Maybe I'm wrong, and these people did feel that, but the problem is, we can't tell, because no-one actually sat down and asked these people what was going on, or heard their account of what they meant by ticking both the "depressed" and "manic" boxes.

Did they experience a strange mixed emotional state in which they were simultaneously depressed and happy? Did their mood see-saw from one day to the next? Or weekly, monthly? Were they depressed in the day and happier in the evening? Were they depressed, then back to normal, leading them to see the normal as a 'high', by comparison with the lows? Were they depressed when sober and happy when drunk? Vice versa? Were they experiencing normal ups and downs and interpreting them as 'mood swings' because they'd become convinced, for whatever reason, that they have a mood disorder? Did they just have a poor command of English, and weren't really trying to say what the highly-educated investigators assume they were?

Who knows? No-one, because no-one asked. Rely on questionnaire 'measures' (as if emotions can be measured) as a replacement for understanding, and you'll end up where this paper does - with a 'result' that's impossible to understand.

Don't seek, and ye shan't find.

It's not great news for the DSM-5 proposals, either way, although defenders could hold out hope that the differences between those criteria and the PDSQ measure might mean the DSM-5 will perform better...

Perlis, R., Cusin, C., and Fava, M. (2012). Proposed DSM-5 mixed features are associated with greater likelihood of remission in out-patients with major depressive disorder. Psychological Medicine, 1-7. DOI: 10.1017/S0033291712000281

Friday, March 30, 2012


The Geography of Faces

How much can you tell about where someone comes from, just from their face?

The other day I was in London and came across a group of young people in Muslim attire who were waving (or in some cases wearing) a particular flag. I thought it was the Iranian flag, but, I thought, they didn't look Iranian. They looked more like Somalis, but it certainly wasn't the blue and white Somali flag. I decided that maybe they were some kind of pro-Iranian demonstrators, but I later worked out that it was the flag of the unrecognised state of Somaliland.

This got me thinking about how reliable these "they look like they're from..." judgements are.

Clearly on a basic level, we can usually tell which continent someone's ancestors were from, in terms of the familiar "races" of Europeans, Africans, East Asians etc. But what about shorter distances?


Could you tell, just from looking at them (and setting aside dress, hairstyle, jewellery etc.) whether someone was from Spain as opposed to France? Korea or Japan? Russia or Germany?

I can only speak for England, but there's certainly a vague but widespread belief that every part of Europe has a distinct 'look'. In the past, people were very fond of talking about that kind of thing; today, we're rather embarrassed by the idea, but the belief lives on.

I don't know, but I'd be very surprised if there weren't analogous beliefs in other countries.

But how accurate are these folk beliefs, really?

Supposing you were the world expert on human faces - or suppose you were a supercomputer with face-recognition software and access to Facebook's entire dataset. How accurately could you place someone's origins on the map, on average? To within 1000 km? 100? In an ideal world, could the ultimate face-placer judge someone as French vs German 75% of the time? 90%? Or only slightly better than chance?

I suspect that if you researched this, you'd find that a supercomputer could do very well, in most parts of the world, but that the majority of actual people are less accurate than they think they are.

Thursday, March 29, 2012

3D fMRI Promises Deeper Neuroscience

A new approach to fMRI scanning offers a three-dimensional look at brain activation.

fMRI is already a 3D technique, of course, but in the case of the cerebral cortex - which is what the great majority of neuroscientists are most interested in - the 3D data are effectively just 2D images folded up in space.

The cortex can be thought of as a big sheet crumpled up into the shape of a brain, and it's possible to use software to 'unfold' the cortex into a 2D map for the purposes of fMRI data visualization. It's more informative because it shows you which areas are closest to each other.

But the cortex isn't really a sheet. It's more like six sheets stacked up - the cortex is formed of six layers, each with distinct cell types, connections, and functions. The difference between Layer III and Layer V of a particular cortical area is, in some ways, as important as the difference between two adjacent areas, but fMRI can't distinguish them because they're too close together.

Until now. In a new paper, Minnesota neuroscientists Olman et al say that they've given fMRI a third dimension - Layer-specific fMRI reflects different neuronal computations at different depths in human V1.

They used a powerful 7 Tesla MRI scanner and a T2-weighted 3D GRASE pulse sequence that provides extremely high spatial resolution (0.7 mm - whereas 3 mm is the fMRI standard). The trade-off was that they were only able to scan a small chunk of the brain, namely the primary visual cortex. However, this is a good place to start, because it has a very well-understood layering system.

Does it work?

Probably, although the data they present are a little messy. By showing volunteers various kinds of pictures, they tried to find evidence of layer-specific visual cortex activation. However, most of the stimuli they used activated all layers equally. In my view the best evidence for layer-specific results was this, from two people -

It showed that the upper layers of the cortex were more activated by colourful stimuli that activate "P cells" than by rapidly changing stimuli that act on "M cells".

We'll need more data to be sure that this technique works, but if it does, it promises some awesome science in the future. Still, it's not all good news for us neuroscientists. We'll have to relearn all the facts about cortical layers that most of us studied in Neuroscience 101 and then promptly forgot about.

Someone remind me, is Layer I or VI the top one...?

Olman CA, Harel N, Feinberg DA, He S, Zhang P, Ugurbil K, and Yacoub E (2012). Layer-specific fMRI reflects different neuronal computations at different depths in human V1. PLoS ONE, 7 (3). PMID: 22448223

Tuesday, March 27, 2012

Broken Hearts and Broken Livers

In a new paper, Beyond the Blues, German psychologists Postert et al discuss how the Hmong people of South East Asia talk about sadness - or rather, how they don't, because they don't really have a word for it.



This article, based on anthropological fieldwork in a number of Hmong communities in Laos, focuses on the Hmong term tu siab, literally "broken liver". This is usually translated as "sadness" in the dictionaries, but the authors say that, although it is certainly the closest thing the Hmong have to a word meaning sadness, it is not the same because:
The instance of becoming ‘sad’ in Western contexts is that ‘something bad happened’... This may involve disappointment in personal relationships, but also other afflictions beyond the social realm. At the core of the emotional experience of ‘sadness’ are basic violations of values deeply embedded in Western conceptions of the individual. The afflicted individual feels resigned, passive, out of control...
In Hmong language, the concept of ‘broken liver’ has a strong emphasis on kin relations. It pertains very often to social situations of isolation and neglect from one’s kin including consanguines, patrilineal ancestors, or affines or their unmarried daughters in cases of romantic love. Persons with a ‘broken liver’ may have been voluntarily offended, excluded, or separated from these persons by bad fortune...

One of the highest Hmong values, a person’s vital social integration, is at stake here. However, a state of ‘broken liver’ is usually far from resignation. Contrary to the assumed passivity of a ‘sad’ individual, a ‘broken liver’ is an affective marker highly mobilising social relations and interdependencies ... having a member in one’s group whose liver is ‘broken’ appeals to the collective commitment of all relatives...

[like the English "guilt" and "shame", but unlike "sadness"] ‘broken liver’ demonstrates characteristics of a socio-moral emotion in everyday pragmatics of Laotian Hmong villages. It is typically evoked in a – conscious or unconscious – transgression of an important sociocultural rule. When a man is assumed not to accord to basic principles of social reciprocity by keeping substantial gains from an opium sale for himself, close relatives may develop signs of a ‘broken liver’ signalling their disapproval...
In short, the argument is that the Hmong "broken liver" differs from our "sadness" in being an active response rather than a passive reaction, a social statement rather than an individual feeling, in having a moral dimension, and so on.

But isn't that much like our "broken heart"?

"Breaking someone's heart" is a moral issue. It's not good to be a heartbreaker. If someone broke your friend's heart, you'd be angry. It's a specifically social emotion in the sense that it generally results from betrayal, abandonment, or disrepect. It's true that while the Hmong's "liverbreak" seems to extend to all close relationships, "heartbreak" often has a romantic connotation; but we do talk about "breaking your mother's or father's heart", so even that's not an absolute rule.

Consider this recently broken heart. Doesn't Tulisa's heartbreak fit the tu siab bill?

If so, then the main difference between Hmong and English terminology is that they only have a concept of 'heartbreak', and lack a general concept of 'sadness'. That's quite interesting, but I'm not sure how much to read into it. English doesn't have a word for 'déjà vu'; we had to borrow it from the French, but surely that doesn't mean that no English people ever felt it until the French explained it to them.

I'm no expert, but judging by this paper, Hmong emotion terminology is really very similar to ours. The big difference is that the Hmong (in common with other East Asian cultures) link emotions to the liver, which to Westerners sounds silly, but it's no more silly than our talk about emotions being in the heart. They're both just metaphors.

Replace "liver" with "heart", and the following Hmong terms (listed in the paper) look very familar -
  • zoo siab  ‘pleased, happy’ (lit. ‘good liver’)
  • siab npau  ‘angry’ (lit. ‘liver boiling’)
  • chob siab  ‘inwardly offended’ (lit. ‘pierced liver’) etc.
Don't those all make sense? We talk about being "hearty" or "heartened", "taking heart" or having our "hearts lifted" when we're happy or confident; our "blood boils" when we're angry; we talk about feeling "cut", "stung", "pierced" by criticism or insults, and we suffer "heart-rending" traumas.

This is certainly a very interesting paper, but it didn't leave me feeling that the Hmong's emotional life is all that different to ours.

Postert, C., Dannlowski, U., Müller, J., and Konrad, C. (2012). Beyond the Blues: Towards a Cross-Cultural Phenomenology of Depressed Mood. Psychopathology, 45 (3), 185-192. DOI: 10.1159/000330944

Saturday, March 24, 2012

Obesity: Are We Food Obsessed?



According to Professor Greg Whyte, writing in the Independent, when it comes to obesity, we've got an unhealthy obsession with diet. There is -
an incessant diatribe of diet propaganda purporting to possess the panacea for health... [but] the focus on diet linked to the volume and make-up of calories we consume has overshadowed the importance of the critical half of the energy balance equation: physical activity.
Clearly weight is, to a first approximation, a matter of calories in (diet) vs. calories out (physical activity). For any given diet, whether you lose or gain weight is determined by how much exercise you do, and vice versa. There's no such thing as "overeating" as such; there's just eating out of proportion to your level of exercise.
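To put rough numbers on that, here's a back-of-envelope sketch. The figures are mine, not Whyte's, and the ~7700 kcal per kg of body fat is just the familiar rule of thumb:

    # Back-of-envelope energy balance. The 7700 kcal/kg figure is a common
    # rule of thumb for body fat, not a number from Whyte's article.
    def weekly_weight_change_kg(kcal_in_per_day, kcal_out_per_day):
        """Approximate weekly weight change from a daily surplus or deficit."""
        return 7 * (kcal_in_per_day - kcal_out_per_day) / 7700.0

    # The same 2,500 kcal/day diet gains or loses weight depending on exercise:
    print(weekly_weight_change_kg(2500, 2200))  # sedentary: about +0.27 kg/week
    print(weekly_weight_change_kg(2500, 2800))  # active:    about -0.27 kg/week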

But have we forgotten that? Do we talk about the diet side of the equation more? I ran a few searches on PubMed and Google for "obesity" plus various other terms to try and find out, and it looks like Whyte is right.

See the graph above.

There does seem to be an imbalance, with "food" and "diet" being much more popular than "exercise" and "physical activity", both in the scientific literature (PubMed) and more generally (Google). This is just a quick analysis, of course, but it does suggest that when it comes to weight and obesity, we are more interested in calories in than calories out.
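For anyone who wants to replicate the PubMed half of this, here's roughly how you could script it, using NCBI's public E-utilities. A minimal sketch; the query terms are my guesses, not necessarily the exact ones behind the graph:

    # Count PubMed hits for "obesity" combined with various other terms,
    # via the NCBI E-utilities esearch endpoint (returns JSON with a count).
    import json
    import urllib.parse
    import urllib.request

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(term):
        """Return the number of PubMed records matching a query."""
        params = urllib.parse.urlencode(
            {"db": "pubmed", "term": term, "retmode": "json", "retmax": 0}
        )
        with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
            return int(json.load(resp)["esearchresult"]["count"])

    for other in ("diet", "food", "exercise", "physical activity"):
        print(other, pubmed_count(f'obesity AND "{other}"'))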

I wonder why?

Friday, March 23, 2012

The Mystery of Trephination

Why did ancient peoples cut holes in their heads?


The Woman of Pritschoena died around 4,500 years ago in what's now Saxony-Anhalt, Germany. Her skeleton was discovered in 1913 by a local archaeologist. Thanks to being buried in a gravel pit, her remains are exceptionally well preserved.

The Woman's skull is a fine example of trephination - the practice of deliberately cutting holes in the skull. She was trephined not once but twice, as you can see in the images above, taken from a paper just out. In both cases, the skull around the hole shows clear evidence of healing, which means that the Woman must have survived the procedures.

Trephination is a historical mystery. Stone-age peoples around the world were fond of doing it - trephinations have been found on skulls from Europe, the Americas and Asia. The authors of this paper say that there are records of at least 800 trephined skulls.

In some parts of Europe, it seems that the survival rate for the operation was over 90%. It was a delicate procedure, with stone tools used to carefully scrape away and remove the bone without damaging the tissue underneath. But no-one knows why they did it. Some argue that it may have been used as a treatment for epilepsy or mental illness, but it's impossible to really know what it was meant to achieve.

Alfieri, A., Strauss, C., Meller, H., Stoll-Tucker, B., Tacik, P., and Brandt, S. (2012). The Woman of Pritschoena: An Example of the German Neolithic Neurosurgery in Saxony-Anhalt. Journal of the History of the Neurosciences, 21 (2), 139-146. DOI: 10.1080/0964704X.2011.575117

Wednesday, March 21, 2012

Brain Scanning - Just the Tip of the Iceberg?

Neuroimaging studies may be giving us a misleading picture of the brain, according to two big papers just out.


By big, I don't just mean important. Both studies made use of a much larger set of data than is usual in neuroimaging studies. Thyreau et al scanned 1,326 people. For comparison, a lot of fMRI studies have more like n=13. Gonzalez-Castillo et al, on the other hand, only had 3 people - but each one was scanned while performing the same task 500 times over.

Both studies found that pretty much the whole brain "lit up" when people were doing simple tasks. In one case it was seeing videos of people's faces; in the other it was deciding whether stimuli on the screen were letters or numbers.

With all that data, the authors could detect effects too small to be noticed in most fMRI experiments, and it turned out that pretty much everywhere was activated. The signal was stronger in some areas than others, but it wasn't limited to particular "blobs".

So conventional fMRI experiments may just be showing us the tip of the iceberg of brain activity. In a small study, only the strongest activations pass the statistical threshold to show up as blobs, but that doesn't mean the rest of the brain is inactive. It just means it's less active. The idea that only small parts of the brain are 'involved' in any particular task may be a statistical artefact.

In fact, I wonder if the whole idea of treating statistically significant blobs as different from nearly-significant areas is itself a form of the error of interacting effects?
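To see how that artefact could arise, here's a toy simulation - mine, not from either paper - in which every "voxel" has a small but real effect, and the only thing that changes is the sample size:

    # If every voxel is truly (weakly) active, the number of voxels passing
    # a fixed threshold mostly reflects statistical power, not biology.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_voxels = 1000
    true_effects = rng.uniform(0.05, 0.5, n_voxels)  # every voxel truly active

    for n_subjects in (13, 1326):
        data = true_effects + rng.standard_normal((n_subjects, n_voxels))
        t, p = stats.ttest_1samp(data, 0.0, axis=0)
        print(f"n={n_subjects}: {np.sum(p < 0.001)}/{n_voxels} voxels pass p<0.001")

With n=13 only a scattering of the strongest voxels survive, forming "blobs"; with n=1326 nearly the whole (simulated) brain lights up.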

As if that wasn't enough, Gonzalez-Castillo et al further show that there are lots of activations in the brain - even to very simple stimuli - that might go undetected in conventional studies, because they don't follow the time-course predicted by the usual models.

Have a look -


This shows the average neural activation from various regions of the brain during a letter-number task. The two areas I've highlighted in red are the primary visual cortex, and they do follow the expected 'boxcar' pattern - the brain is active when the stimuli are on the screen, inactive when they're not. But you can see that all kinds of other brain areas are also responding to the stimuli - just in different ways.

For example, the left primary motor cortex was activated during the task. That area controls the right hand, and that makes sense, as people responded by pressing buttons with the right hand. But interestingly, the same area on the other side of the brain was deactivated at exactly the same time, even though people weren't doing anything with their left hand.
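Here's a little sketch of why such responses get missed - my own toy example, not the paper's analysis. A standard GLM looks for signals that correlate with the boxcar-shaped regressor, so responses with a different shape score poorly:

    # Correlate three simulated response shapes against the standard regressor.
    import numpy as np

    def hrf(t):
        """Crude gamma-shaped haemodynamic response function (peaks ~5 s)."""
        return t ** 5 * np.exp(-t) / 120.0

    tr, n_scans = 2.0, 150
    times = np.arange(n_scans) * tr
    boxcar = ((times % 60) < 30).astype(float)     # 30 s on / 30 s off blocks
    kernel = hrf(np.arange(0, 30, tr))
    model = np.convolve(boxcar, kernel)[:n_scans]  # conventional regressor

    onsets = np.diff(boxcar, prepend=0).clip(min=0)
    offsets = (-np.diff(boxcar, prepend=0)).clip(min=0)
    shapes = {
        "sustained (fits the model)": model,
        "transient at block onset": np.convolve(onsets, kernel)[:n_scans],
        "transient at block offset": np.convolve(offsets, kernel)[:n_scans],
    }
    for name, y in shapes.items():
        print(f"{name}: r = {np.corrcoef(model, y)[0, 1]:+.2f}")

The sustained response correlates perfectly, but the transient ones correlate far less well (or negatively), so a real, reliable response can fall below threshold simply because the model wasn't expecting its shape.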

These papers illustrate the fact that conventional fMRI is a blunt instrument that often only tells us about the most straightforward events that happen in the brain. A bit like how we only hear the shouts and screams through our neighbours' walls, not the normal conversations, which aren't loud enough to reach our ears.

That's the bad news, but every blob has a silver lining. fMRI is clearly more powerful than most neuroscientists have realized, and this holds out hope for cracking some of the trickiest questions. As Gonzalez-Castillo et al put it
This result helps narrow the gap between thousands of fMRI manuscripts showing limited activation in response to tasks and cognition theories that defend that cognition—understood as the process of “configuring the way in which sensory information becomes linked to adaptive responses and meaningful experiences”—can only result from the distributed collaboration of primary sensory, upstream and downstream unimodal, heteromodal, paralimbic, and limbic regions... [we were able to] switch from a regime where activity detection relates primarily to sensory processing to a more sensitive regime, where activity detection includes also cognitive processes with subtler BOLD signatures.
Link: See also the interesting discussion here: Surely, God loves the .06 (blob) nearly as much as the .05.


Thyreau, B., Schwartz, Y., Thirion, B., Frouin, V., Loth, E., Vollstädt-Klein, S., Paus, T., Artiges, E., Conrod, P., Schumann, G., Whelan, R., and Poline, J. (2012). Very large fMRI study using the IMAGEN database: Sensitivity-specificity and population effect modeling in relation to the underlying anatomy. NeuroImage. DOI: 10.1016/j.neuroimage.2012.02.083

Gonzalez-Castillo, J., Saad, Z., Handwerker, D., Inati, S., Brenowitz, N., and Bandettini, P. (2012). Whole-brain, time-locked activation with simple tasks revealed using massive averaging and model-free analysis. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1121049109

Saturday, March 17, 2012

Personality Without Genes?


According to a paper just published (but available online since 2010), we haven't found any genes for personality.

The study was a big meta-analysis of a total of 20,000 people of European descent. In a nutshell, they found no single nucleotide polymorphisms (SNPs) associated with any of the "Big 5" personality traits of Neuroticism, Extraversion, Openness to Experience, Agreeableness and Conscientiousness. There were a couple of very tenuous hits, but they didn't replicate.

Obviously, this is bad news for people interested in the genetics of personality. But I wonder if the implications are even wider -

We know that there are SNPs associated with physical traits like height, weight, hair colour, eye colour, and the risk of various diseases. If none of those SNPs are associated with personality, then none of those traits are causally associated with personality - because if, say, height influenced personality, the SNPs that influence height would have shown up as (indirect) personality SNPs.

"Short man syndrome"? A myth. Rod Stewart was wrong about blondes. There's no such thing as a "fat personality". And so on. Maybe that's not surprising, but more generally, the implication would be that the genes we inherit have no direct or even indirect influence on our personality, which is a pretty radical conclusion when you think through it.

I'm making some assumptions here. Maybe some genes are correlated with personality, but the currently popular "Big 5" approach is just a poor way of measuring of personality. It could also be that there are so many interacting genetic and environmental effects on personality that any given effect is tiny by itself, and even bigger sample sizes, or multivariate data analysis, would be needed to detect such effects.
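On the sample size point, a standard back-of-envelope power calculation (mine, not the paper's) shows how quickly the required numbers grow as effects shrink:

    # Approximate sample size to detect a SNP explaining a given fraction of
    # trait variance at genome-wide significance (p = 5e-8, 80% power),
    # using n ~ (z_alpha + z_power)^2 / r^2 for small effects.
    from scipy import stats

    z_alpha = stats.norm.ppf(1 - 5e-8 / 2)  # two-sided genome-wide threshold
    z_power = stats.norm.ppf(0.80)

    for r2 in (0.01, 0.001, 0.0005):  # variance explained: 1%, 0.1%, 0.05%
        n = (z_alpha + z_power) ** 2 / r2
        print(f"SNP explaining {r2:.2%} of variance: n ~ {n:,.0f}")

By this arithmetic, a sample of 20,000 can only reliably detect SNPs explaining roughly 0.2% or more of the variance in a trait; anything smaller needs far bigger samples.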

de Moor, M., et al. (2010). Meta-analysis of genome-wide association studies for personality. Molecular Psychiatry, 17 (3), 337-349. DOI: 10.1038/mp.2010.128

Thursday, March 15, 2012

The Blinking Brain - A Problem For fMRI?

Every time we blink, a wave of activity sweeps through our brain - and this could be a serious problem for some fMRI researchers.


French neuroscientists Hupé et al report on A BOLD signature of eyeblinks in the visual cortex. They found that spontaneous blinks are associated with a neural activation pattern over the occipital cortex areas responsible for processing vision.

In many ways this is not surprising - when you blink, everything goes dark and then lights up again, all within a fraction of a second, which means that blinks are a kind of very dramatic visual stimulus, equivalent to a big black object suddenly appearing and then vanishing again. However, it's long been believed that blink suppression mechanisms in the eye and brain somehow block out the responses that would otherwise happen during a blink.

Don't be so sure, say Hupé et al. In an elegant experiment, they showed volunteers a standard set of visual stimuli during fMRI scanning, while recording blinks using an eye tracking camera. Then they simply treated the blinks as events, and used standard analysis methods to find neural activation associated with them.
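The logic is simple enough to sketch in a few lines - this is my own illustration, not the authors' code. You turn the recorded blink times into an event regressor, convolve it with a haemodynamic response function, and fit it alongside the task regressor:

    # Fit a GLM with both a task regressor and a blink-event regressor.
    import numpy as np

    def hrf(t):
        """Crude gamma-shaped haemodynamic response function."""
        return t ** 5 * np.exp(-t) / 120.0

    tr, n_scans = 2.0, 200
    times = np.arange(n_scans) * tr
    kernel = hrf(np.arange(0, 30, tr))

    blink_times = np.array([3.1, 9.4, 22.0, 47.5, 61.2, 88.8])  # hypothetical
    blinks = np.zeros(n_scans)
    blinks[(blink_times / tr).astype(int)] = 1.0
    blink_reg = np.convolve(blinks, kernel)[:n_scans]

    task = ((times % 40) < 20).astype(float)  # 20 s on / 20 s off blocks
    task_reg = np.convolve(task, kernel)[:n_scans]

    # Simulated voxel that responds to both task and blinks, plus noise.
    rng = np.random.default_rng(1)
    voxel = 0.5 * task_reg + 0.8 * blink_reg + rng.standard_normal(n_scans)

    X = np.column_stack([task_reg, blink_reg, np.ones(n_scans)])
    betas, *_ = np.linalg.lstsq(X, voxel, rcond=None)
    print(f"task beta = {betas[0]:.2f}, blink beta = {betas[1]:.2f}")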

Blinks caused a significant BOLD response over a number of "visual" areas.

Compared to the "real" visual stimuli in the task, the blink signal was less extensive, but no less strong.

So what? The great majority of fMRI experiments don't use eyetracking to measure blinks, so this study raises the scary possibility that blinks could lie behind some of the "stimulus-related" activations that we all know and love. It would be a problem if subject blinks were correlated with the stimuli or tasks, which they might be, because blink rate may vary with our psychological state.

I don't think we should be too worried yet. The blink blobs were essentially confined to parts of the visual cortex. So any study that's not focussed on vision is probably in the clear (although that's just the average response: in some individual subjects, the activations were a lot wider.)

However, as the authors point out, there is a risk that alterations in blink rate, caused, perhaps, by emotional or cognitive stress, might be wrongly "found" to be causing visual cortex activation, which might call into question claims of "top-down" influences on early visual cortex... oh dear.

Hupé, J., Bordier, C., and Dojat, M. (2012). A BOLD signature of eyeblinks in the visual cortex. NeuroImage. DOI: 10.1016/j.neuroimage.2012.03.001

Tuesday, March 13, 2012

The Age of ADHD

Diagnosed rates of ADHD in American children have skyrocketed in the past 20 years, and use of medication such as Ritalin and Adderall has increased by an even greater amount.


So says a report just out in Clinical Pediatrics, using data from the major US National Ambulatory Medical Care Survey (NAMCS). The rate of office-based visits (i.e. visits when a doctor saw or treated a patient, outside of a hospital) was the main outcome measure. The authors looked at the number of visits reporting a diagnosis of ADHD, and also the number of ADHD visits involving psychostimulant medication, for kids aged 5 to 18.

See above - that's a big increase, and a lot of visits (remember, the Y axis is visits per 1000 children per year). One thing to remember is that the increase might not mean that there are more patients with ADHD - it could reflect more visits per patient, but that seems unlikely to account for all of it.

A few thoughts -

The rise of ADHD parallels the recent increase in autism diagnoses. Yet people don't seem to be talking about it to the same extent. We're always hearing about "the autism epidemic", the "Age of Autism". Why aren't we equally concerned about the ADHD 'epidemic'? Why don't we have minor celebs railing about vaccine-damaged ADHD children?

Next - like autism - it seems likely that much or all of the increase is due to changes in awareness and willingness to diagnose the disorder. If so, logically, ADHD must either be seriously overdiagnosed now, or have been seriously underdiagnosed previously. Or both.

This is especially true of boys. Rates in girls rose pretty much steadily for 15 years, but in boys there have been swings up and down, although the overall trend is still upward. It's always possible that this is a quirk of the NAMCS dataset, but if not, it suggests that ADHD diagnosis in boys is especially prone to changes in diagnostic fashion.

It's tempting, actually, to see the recent fall in boys with ADHD as a consequence of the rise of autism diagnoses over the same period. Autism is predominantly diagnosed in boys and the two disorders are often comorbid.

Maybe, boys are now getting autism diagnoses which are then felt to explain their behaviour, meaning that they don't "need" an ADHD diagnosis, which previously they would have got. But that's just my speculation, and it's probably reading too much into the data, because there was also a peak in 1994 which I can't see any explanation for.

Sclar DA, Robison LM, Bowen KA, Schmidt JM, Castillo LV, and Oganov AM (2012). Attention-Deficit/Hyperactivity Disorder Among Children and Adolescents in the United States: Trend in Diagnosis and Use of Pharmacotherapy by Gender. Clinical Pediatrics. PMID: 22399571

Saturday, March 10, 2012

The Case of the Phantom Phantom Finger

A "phantom limb" is the sensation that an amputated limb (or other body part) is still present.

They can be distressing, especially when they're accompanied by pain in the "limb", which is not uncommon. The leading theory of why they happen is that the brain areas that used to receive sensations from the lost appendage respond to input "spilling over" from nearby brain regions.

Anyway, a phantom limb is bad enough, but a paper just out reports on the case of a phantom finger that was never there in the first place.

A woman, RN, was born with an abnormally short right arm; her right hand was also malformed, with a shortened thumb, no index finger, and immobile ring and middle fingers. Only the little finger was present and correct.

At the age of 18, she had the misfortune to suffer a car crash; the injuries meant that her right hand had to be amputated. She soon found herself experiencing a phantom hand - with all five fingers. Three of them felt like they were normal length; the "thumb" and "index finger" felt shorter than normal, but remember that the original hand had no index finger at all.

RN also suffered from phantom pains and was distressed by the fact that the "hand" felt like it was bent into an impossible posture. Fortunately, the mirror box technique was able to set things right; while the phantom was still there, it was no longer painful, and all the fingers were the right length.

This is a remarkable case. The authors of the paper, Paul McGeoch and V. S. Ramachandran (perhaps the best known phantom-limb expert) say that it could mean that we're born with an innate, hard-wired "body plan" in the brain, regardless of the way our body actually develops -

While RN’s phocomelic [abnormal] hand was present she did not experience any phantom sensations. Thus, although severely deformed, the mere presence of the hand was sufficient to inhibit the innate representation of her normal hand and prevent any phantom sensations from emerging, presumably from tactile, proprioceptive and visual feedback... the amputation of her hand appears to have disinhibited these suppressed finger representations in her sensory cortex and allowed the emergence of phantom fingers that had never existed in her actual hand.
They do consider alternative explanations though -
Clearly it is beholden on us to consider whether RN’s descriptions do not describe a genuine sensory experience, but rather are confabulatory in origin. We do not believe this to be the case, since if she were confabulating then it would seem unlikely that she should report that her phantom hand had five fingers, but that they were not all of normal length; if this were simply ‘wishful thinking’ then she would likely claim to have five normal length fingers. This appears a persuasive, although not definitive argument, against confabulation.
Seems like a fair assessment.
I don't even know what you'd call the phantom "index finger". A pseudo-phantom? A phantom phantom?

McGeoch, P., and Ramachandran, V. (2012). The appearance of new phantom fingers post-amputation in a phocomelus. Neurocase, 18 (2), 95-97. DOI: 10.1080/13554794.2011.556128


Wednesday, March 7, 2012

Ketamine - Magic Antidepressant, or Expensive Illusion?

Not one but two new papers have appeared from the Carlos Zarate group at NIMH reporting that a single injection of the drug ketamine has rapid, powerful antidepressant effects.

One placebo-controlled study found a benefit in depressed bipolar patients who were already on mood stabilizers. The other found benefits in treatment-resistant major depression, though ketamine wasn't compared to placebo that time. Here's the bipolar trial:


There have now been several studies finding dramatic antidepressant effects of ketamine, a compound that all journalists seem contractually bound to call either a "club drug" or a "horse-tranquilizer". Great news?

If you believe it. But hold your, er, horses... there's a problem. As I said almost 3 years ago about one of the earlier ketamine trials:
In theory, the trial was double blind - neither the patients nor the doctors knew whether they were getting ketamine or placebo. But you'll know when you've been injected with 0.5mg/kg ketamine. You get high. That's why people take it [recreationally]. The study can't really be called double blind.
To their credit, Zarate et al did acknowledge this, and suggested that in future ketamine could be compared to another drug which produces noticeable effects. But they really should have done that to begin with.
It's now 2012, and there have still not been any published studies comparing ketamine to an active comparator, i.e. a different drug that produces noticeable psychoactive effects, to avoid unblinding. This means it's 12 years since the initial pilot report on ketamine in depression, and 6 years since the first large trial appeared.

The authors of the 2006 paper themselves wrote that "limitations in preserving study blind may have biased patient reporting... One potential study design in future studies with ketamine might be to include an active comparator", and suggested amphetamine for the role.

Good idea. But six years later, we're still waiting. Which is really a bit silly. There have been dozens of papers written about the possible antidepressant effects of ketamine, from human trials to mouse work. That's a lot of research dollars (and dead mice) on something that might just be an active placebo.

Looking at the registered ketamine research on clinicaltrials.gov, I found that four active-comparator ketamine trials are in the pipeline (1,2,3,4), plus one cancelled (5). Only one is for depression, though; the others are for OCD, cocaine dependence and suicidal ideation.

In all of these trials a benzodiazepine is the active comparator. Is that a good idea? Well, it's certainly better than nothing, but I wonder.

An active comparator has to "make an impression" on the patient equal to that produced by the real drug.  The null hypothesis, remember, is that ketamine has no specific antidepressant effect. That means it produces improvement through a combination of a) the placebo effect (expectation) and b) non-specific psychoactive changes.

More on that second one: any psychoactive drug might relieve depression by "taking your mind off it", and a change in mental state, as provided by a drug, also provides a demonstration that "I won't always feel this way". By showing that states of consciousness are products of brain chemistry, almost any drug could therefore offer a "glimmer of hope" to the depressed. If all this sounds very subjective, it is, but that's the point. Psychiatry is.

Would a benzo make as big an impression as 0.5 mg/kg ketamine IV? It's impossible to predict, really; so we'd need to ask people about the subjective strength of the drug effect. Personally, I worry that a lot of people just get sleepy on benzos and don't really feel much, so I'd prefer they used something a bit more hard-hitting like amphetamine, but maybe that's just me.

There's a deeper problem though. Suppose our ketamine-benzo trial finds no difference between ketamine and benzo. A critic could say, ah, but maybe it was just a "failed trial", so it doesn't overturn the positive studies. The patients weren't properly diagnosed, or weren't depressed enough, or were too depressed, etc.

Nitpicking such differences between studies is a well-practiced art.

Critics could complain in other ways if the study did find a benefit of ketamine. As I see it, the only way to settle this once and for all is to do a three-way randomized controlled trial - inactive placebo vs. active comparator vs. ketamine.

That way, if it's a failed trial, we'd know: there'd be no difference between ketamine and the inactive placebo. If there was a difference, but the active comparator was just as good as ketamine, that would mean it was all about nonspecific effects. Finally, if ketamine was better than the other two conditions, we could be pretty confident it was really working.
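Just to encode that decision logic, here's a toy simulation with made-up numbers - in this case a "nonspecific effect" world, where both drugs beat placebo but not each other:

    # Simulate a three-arm trial and read off which interpretation the
    # pattern of results supports. All numbers are hypothetical.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 30  # patients per arm (hypothetical)
    arms = {
        "placebo": rng.normal(2.0, 3.0, n),  # improvement scores
        "active comparator": rng.normal(5.0, 3.0, n),
        "ketamine": rng.normal(5.2, 3.0, n),
    }

    def beats(a, b, alpha=0.05):
        """True if arm a shows significantly greater improvement than arm b."""
        result = stats.ttest_ind(arms[a], arms[b])
        return result.pvalue < alpha and arms[a].mean() > arms[b].mean()

    if not beats("ketamine", "placebo"):
        print("No ketamine-placebo difference: a 'failed trial'.")
    elif beats("ketamine", "active comparator"):
        print("Ketamine beats both: evidence of a specific antidepressant effect.")
    else:
        print("Ketamine beats placebo but not the active drug: nonspecific effect.")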

Also important is the question of volunteer expertise; subjects shouldn't be able to tell what drug they're on, but people who'd taken ketamine and/or the comparator drug before might be able to do that, so you'd want naive volunteers.

In conclusion: It's possible that ketamine has no specific antidepressant effects. To find out we ideally need a three-way trial, with both active and inactive comparators, careful monitoring of subjective drug effects and patient knowledge and expectations. Until that happens, I will be skeptical of ketamine in depression.

This is not because I think it's impossible that ketamine works. Ketamine profoundly affects the brain in ways that we don't understand. I've suffered depression and I know it can come and go in a matter of minutes. So I think it's entirely possible that it works - but it's also possible that it's a nonspecific effect.

Look. I really want to know the answer to this. Both as a neuroscientist, and as a depression sufferer, this is very important to me. That's why we urgently need a good trial.

Link: See also the discussion and the comments over at The Neurocritic and this Scientific American piece which is pretty good except that it doesn't cover the active placebo issue.


Zarate CA Jr, Brutsche NE, Ibrahim L, Franco-Chaves J, Diazgranados N, Cravchik A, Selter J, Marquardt CA, Liberty V, and Luckenbaugh DA (2012). Replication of Ketamine's Antidepressant Efficacy in Bipolar Depression: A Randomized Controlled Add-On Trial. Biological Psychiatry. PMID: 22297150

Ibrahim, L., et al. (2012). Course of Improvement in Depressive Symptoms to a Single Intravenous Infusion of Ketamine vs Add-on Riluzole: Results from a 4-Week, Double-Blind, Placebo-Controlled Study. Neuropsychopharmacology. DOI: 10.1038/npp.2011.338

Tuesday, March 6, 2012

Free Will: A Dangerous Idea?

The British Journal of Social Psychology has published a fiery rebuke to psychologists who argue that belief in free will makes people more ethical.



Recent much-publicized studies have claimed that scepticism about free will makes people behave less morally. "Disbelief in Free Will Increases Aggression and Reduces Helpfulness", as the title of one of these papers puts it.

In his article (free pdf), British 'independent researcher' James B. Miles says that these experiments are flawed, because they didn't distinguish between determinism (lack of free choice) and fatalism (lack of the ability to change events).

More fundamentally, though, Miles says that free will is used to justify things, such as punishment and poverty, that would otherwise be seen as scandalous -
Western law recognizes that the penal system is so harmful to the existing life and future opportunities of persons that to convict requires evidence beyond a reasonable doubt. Yet libertarians provide no objective evidence whatsoever for the existence of free will, and therefore no apparent justification for the mass poverty and brutal punishments that belief in libertarian free will often brings with it. The leading legal theorist Stephen J. Morse freely admits that harsh prison conditions and execution are only morally tolerable where the presumption of free choice exists...
...In June 2009, the Joseph Rowntree Foundation published research showing that up to 83% of Britons think that ‘virtually everyone’ remains in poverty in Britain not as the result of social misfortune or biological handicap but through choice (Bamfield & Horton, 2009, p. 23; 69% of those surveyed agreed with the statement and an additional 14% were unsure but did not disagree.) Because of their belief in the fairness of ‘deserved inequalities’, such respondents were discovered to have become almost completely unconcerned with the idea of promoting greater equality while at the same time asserting that Britain was a beacon of fairness that offered opportunities for all...
...Free will may just be the primary excuse many use to legitimize a contempt for the poor that would exist independent of their professed belief in free will, but free will assertion nonetheless provides the ethical fig leaf for such contempt that would be far harder to rationalize (and therefore tolerate) without the myth of free will.
This is a polemical piece (remarkably so, for an academic journal), and clearly this is only one side of the story, but it's hard to deny that he has a point: there's a dark side to the belief in free will. If you doubt free will, and yet praise the myth of it, as some scientists seem to be doing, you need to accept that you're condemning some people (prisoners, most obviously) to suffer as a result "through no fault of their own".

Personally, I think the great majority of people do believe in free will and always will - the arguments against it have been around for millennia, they're as convincing as they'll ever be, and they haven't convinced most people, however irrational that might make most people. So I think the debate over belief in free will is academic; it's not going away.

Miles JB (2011). 'Irresponsible and a Disservice': The integrity of social psychology turns on the free will dilemma. British Journal of Social Psychology. PMID: 22074173

Saturday, March 3, 2012

The World Mental Health Missionaries?

Is research on the global distribution of mental health problems a kind of modern-day missionary work?

Maybe, says Australia's Dr Stephen Rosenman in a provocative paper: Cause for caution: culture, sensitivity and the World Mental Health Survey Initiative.

The World Mental Health Survey (WMHS) is a huge World Health Organization project that aims to measure the rates of various psychiatric disorders in countries around the world. The WMHS has produced a great deal of data, but Rosenman points out that this assumes that people all over the world suffer from the same psychiatric disorders (and display them in the same ways) as the Americans and Europeans about whom the diagnostic manual was originally written.

The surveys translated the diagnostic criteria into the local languages, of course, but that doesn't mean they were appropriate to the local cultures.

He suggests that all this is a bit like missionaries who went around translating the Bible and trying to convince people to read it -
Looked at with a less admiring eye, the [WMHS] resembles in some ways the missionary movements of the last two centuries. Like the missionaries, the organisers are committed, selfless people of extraordinary goodwill who have come to poor countries from cultures at the apogee of their wealth, prestige and intellectual power.
They bring an evolved and highly developed system of thought. They set about delivering the fruits of that to the people. The survey initiative has engaged the leaders of the profession in the countries and, in a sense, has converted them to this view of psychopathology.
It is difficult to know if their success is due to the power of the ideas they brought, or the power and prestige of the cultures they came from, or from their technique of taking over both the centre and the contours of the beliefs of a culture. Missionaries brought a ‘colonisation of consciousness’... etc.
He does go on to say, though, "I do not want to push the missionary analogy too far", which is wise I think; there are important differences, and other analogies are equally apt.

The paper's a good read though. It refers to Crazy Like Us, a book I'm fond of.

Rosenman doesn't cite another important source (cough, cough), but he points out that the WMHS national estimates of rates of depression don't correlate at all with national suicide rates, which is seriously odd -
According to the CIDI [the psychiatric interview used in the WMHS], Japan, for example, has one-third the rate of mood disorders (3.1%) seen in the USA (9.6%). At the same time, Japan’s suicide rate (20.3/100,000) is twice that of the USA (10.8/100,000). Suicide rates seem to have almost no relationship with CIDI diagnoses of affective disorder... Suicide, of course, is complexly shaped by the culture but are we to believe that answers to the CIDI are any less culturally determined and which is to be considered the better index of disorder?
I made the very same point using the very same datasets in 2009 (although I looked at 'all mental illness' rather than 'mood disorders').

Rosenman, S. (2012). Cause for caution: culture, sensitivity and the World Mental Health Survey Initiative. Australasian Psychiatry, 20 (1), 14-19. DOI: 10.1177/1039856211430149