Wednesday, May 30, 2012

ADHD: Unhappy Birthday?

Earlier this year a major study of almost one million Canadian children found that rates of diagnosed ADHD - as well as use of ADHD medications like Ritalin - were higher in kids born later in the year.

This is strong support for the "immaturity hypothesis" - the idea that some children get a diagnosis of ADHD because they're younger than their classmates at school, and their relative immaturity is wrongly ascribed to an illness. In British Columbia, where the study was conducted, the cut-off for school entry is December 31st, so those born in January will be almost a year older than their peers born in December.

Worrying stuff. Clearly this is very important, if true. The authors wrote:
These findings raise concerns about the potential harms of overdiagnosis and overprescribing.
Now a new paper published in the Journal of Attention Disorders rejects the immaturity hypothesis: Is the Diagnosis of ADHD Influenced by Time of Entry to School? The authors say no, it isn't - but their data don't actually tell us anything about that.

The esteemed Professor Joseph "Only God Can Judge Me" Biederman and colleagues from Massachusetts General Hospital looked at 562 kids born in August and 529 born in September; in the USA, the cut-off is in the summer, so September babies are the oldest in their school year and August babies the youngest.

Of all the children, 55% turned out to have ADHD. There was no difference in rates of ADHD, or the severity of ADHD, between the two groups. Phew! As J-Bo and his East Coast Posse write:
These findings do not support the developmental immaturity hypothesis of ADHD that states that children's developmental immaturity leads to the inappropriate diagnosis of ADHD. Instead, these findings suggest that children with clinical thresholds of symptoms of ADHD are afflicted with this disorder.
But this just doesn't follow. These results seem to undermine the Canadian data, but they don't. It's comparing apples and oranges.

The difference is that Biederman et al diagnosed all of the cases of ADHD themselves, and they are one of the world's leading ADHD clinical and research teams. All their data show is that being born in August doesn't fool them into inappropriately diagnosing ADHD.

Sadly, not every child has the privilege of being seen at Mass Gen and having "a comprehensive assessment battery that included structured diagnostic interviews; measures of cognitive, interpersonal, familial, and educational functioning; and examination of patterns of familiality of ADHD." These lucky kids got a veritable alphabet soup of psychiatric measures, from the K-SADS-E-IV to the SAICA.

The whole claim of the immaturity hypothesis is that children who are young for their year A) don't actually have more ADHD, but B) tend to get diagnoses when they shouldn't, i.e. after an inadequate assessment, say by a family doctor, on the advice of a parent or teacher. This study confirms A). It says nothing about B).

At best it shows that younger children aren't being referred to Mass Gen just because they're young. But we're not told who makes those referrals, what their qualifications are, or what systems are in place to filter out inappropriate referrals, so this tells us very little; it also seems that many of the kids included here weren't referred on suspicion of ADHD at all, but were enrolled in research studies.

But hey, the Journal of Attention Disorders has a Biostatistical and Methodology Editor, Stephen Faraone. Did he approve of this article? Probably - he's the senior author.

I should stress that none of this proves that "ADHD doesn't exist" or anything so radical. All it means is that ADHD is sometimes diagnosed badly. Which really shouldn't surprise anyone, because doctors are human and make mistakes sometimes.

Biederman, J., Petty, C., Fried, R., Woodworth, K., & Faraone, S. (2012). Is the Diagnosis of ADHD Influenced by Time of Entry to School? An Examination of Clinical, Familial, and Functional Correlates in Children at Early and Late Entry Points. Journal of Attention Disorders. DOI: 10.1177/1087054712445061

Morrow, R., Garland, E., Wright, J., Maclure, M., Taylor, S., & Dormuth, C. (2012). Influence of relative age on diagnosis and treatment of attention-deficit/hyperactivity disorder in children. Canadian Medical Association Journal, 184(7), 755-762. DOI: 10.1503/cmaj.111619

Tuesday, May 29, 2012

Science from WOW to FFS

I've mentioned before that I make notes on the scientific papers I read, which I've found really helps me to remember them. Sometimes these are quite extensive, but mostly it's just a one sentence summary of the main idea, along with a brief comment.

Sometimes a very brief one. After doing this for a while, I've arrived at a scale of three-letter comments, from awesome to awful:


Those of you who follow me on Twitter may find this helps to interpret some of my shorter posts...

Thursday, May 24, 2012

Finding the Best Learning Site for Treating Addiction

Finding the best treatment for an addiction problem is easier said than done. Of the many treatment programs out there, only a few are truly dependable. Your job, then, is to find a trustworthy program, as that will be your most effective weapon against even the worst kind of addiction problem. But that raises a question: how do you find the best treatment to effectively cure the addiction that you, or a member of your family, suffers from?

With that question in mind, it is advisable to look for an institution or rehab centre that offers dual diagnosis treatment programs. If that phrase only raises more questions, then it is worth seeking out a good source of information - one that covers everything from addiction itself to the methods of treating it (including post-treatment programs, if necessary), so that you can build an accurate understanding of addiction.

As for finding the best treatment, that site should also explain the dual diagnosis method in enough detail that you know what it means before deciding whether it is the kind of treatment you need. Just as important, the site should let you consult or discuss the treatments - or anything else concerning the addiction - with experts in the relevant field. Find such a site and you will gain much benefit from it: if you know the best place to find the best information, your chance of getting the best treatment is greater than ever.

Wednesday, May 23, 2012

Rich People May Not Be So Unethical

There was quite the stir a few weeks back about a psychology paper claiming that rich people aren't very nice: Higher social class predicts increased unethical behavior.

The article, in PNAS, reported that upper class individuals were more likely to lie, cheat, and break traffic laws.

However, these results have been branded "unbelievable" in a Letter to PNAS just published. Psychologist Gregory Francis notes that the paper contains the results of 7 separate experiments, and they all found statistically significant socioeconomic effects on unethical behaviour.

Those 7 replications of the effect "might appear to provide strong evidence for the claim" - one study good, 7 studies better, right? - but Francis says that actually, it's too good to be believed.

Each of the studies was fairly small, and the effects they found were modest, and only just significant. So the observed power of the studies - the probability that a study of that size would detect the effect that they did, in fact, find - was only about 50-88% in each case.

Think of it this way: if you took a pack of cards and discarded half of the black ones, then shuffled the remainder, a random card from the deck would most likely be red. But even so, it would be unlikely that you'd pick seven reds in a row.

The chance of all 7 studies finding a positive result - even assuming that the effect claimed in the paper was real - is just 2%, by Francis's calculations.
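The arithmetic behind both the card analogy and Francis's argument is easy to sketch. The power values below are numbers I've invented for illustration (seven values in the reported 0.50-0.88 range); only the logic of multiplying independent probabilities is the point:

```python
import math

# Card analogy: remove half the black cards, leaving 26 red and 13 black.
# A single draw is probably red, but seven reds in a row would be surprising.
p_red = 26 / 39
print(round(p_red ** 7, 3))          # 0.059 - under 6%

# Francis's argument in miniature: even if every study had a decent chance
# of reaching significance, all seven doing so at once is improbable.
# These powers are made up for illustration (the real ones were ~0.50-0.88).
powers = [0.50, 0.55, 0.57, 0.60, 0.62, 0.65, 0.70]
print(round(math.prod(powers), 3))   # ~0.027, the same ballpark as Francis's 2%
```

The exact figure depends on the per-study powers, which is one reason the calculation is open to dispute, as we'll see below.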


He concludes "The low probability of the experimental findings suggests that the data are contaminated with publication bias. Piff et al. may have (perhaps unwittingly) run, but not reported, additional experiments that failed to reject the null hypothesis (the file drawer problem), or they may have run the experiments in a way that improperly increased the rejection rate of the null hypothesis (4)".

What might have happened? Maybe there were more than 7 studies and only the positive ones were published. Maybe the authors peeked at the early data before settling on the sample size, or took other outcome measures that showed no effect and went unreported. See also the 9 Circles of Scientific Hell.

Or maybe not. Piff et al respond in their own Letter, firmly denying that they ran any other unpublished experiments, and saying that they "scrutinized our data collection procedures, coding protocols, experimental methods, and debriefing responses. In no case have we found anything untoward." They go on to criticize the method Francis used to get his magic 2% figure, which they point out relies on some debatable assumptions.

Even if you buy the 2% figure, it doesn't mean that the true effect is zero; it might be real, but exaggerated. Ultimately it all becomes rather murky and subjective, which is why I think we need preregistration of research, which would prevent any possibility of such data fiddling, and also remove the possibility of false accusations of it... but that's another story.

Francis, G. (2012). Evidence that publication bias contaminated studies relating social class and unethical behavior. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1203591109

Tuesday, May 22, 2012

Gaydar Works (A Bit, On Facebook)

The media are gleefully reporting a recent paper showing that "gaydar is real" - we can tell who's gay just by looking: The Roles of Featural and Configural Face Processing in Snap Judgments of Sexual Orientation

While it's a fine paper, I'm afraid that the results really aren't that exciting.

American undergraduate students were able to classify people as gay or straight with better than chance accuracy, based purely on photos of their face. For male photos, the hit rate was 0.57; for women it was better with an accuracy of 0.65.

However, that's on a scale where you get 0.50 by flipping a coin. So saying that gaydar is '65% accurate', as almost everyone has, is misleading. Still, the numbers seem solid. The sample sizes were large and the effect was replicated very convincingly in two experiments.
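To see why even a hit rate of 0.57 can be solidly above chance, here's a rough normal-approximation check; the figure of 2,000 judgments is my own hypothetical, not a number from the paper:

```python
import math

def z_vs_chance(hit_rate, n, chance=0.5):
    """How many standard errors a hit rate sits above coin-flipping,
    under the null hypothesis of pure guessing."""
    se = math.sqrt(chance * (1 - chance) / n)
    return (hit_rate - chance) / se

# With a large number of judgments, even modest accuracy is hard to
# explain as luck (z above ~1.96 roughly corresponds to p < 0.05).
print(round(z_vs_chance(0.65, 2000), 1))   # women's faces: 13.4
print(round(z_vs_chance(0.57, 2000), 1))   # men's faces: 6.3
```

So "barely better than a coin flip" and "statistically very solid" can both be true at once, which is the point being made here.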

However... this tells us very little about real world "gaydar", and it wasn't intended to. There are reasons to think it could underestimate the accuracy:
  • Most importantly - people only saw the pictures for 50 milliseconds each. 1/20th of a second. Followed by a backward mask. That's right on the threshold of conscious perception, almost 'subliminal' but not quite. With longer viewing times, they might have done better.
  • All the faces were black and white photos with the hair and ears cropped out (see above - and I think those two photos from the paper are the authors, although I may be wrong!). Anyone with facial hair, glasses, or any other 'accessories' wasn't used. In the real world, we have that extra information.
  • In real life, we get clues from facial expressions, body language, voice, clothes. You could argue that these are being used (consciously or not) specifically as signals of sexuality, so they don't count as 'gaydar' - but more on that later.
 But it could also overestimate gaydar's powers:
  • These were photos that people chose for their Facebook profiles. We all know how much effort some people put into that choice. We also know that different photos of the same person can often seem like two different people. Your Facebook pic is probably the most "selected" photo of you in existence. It would be better - but also much harder - to use passport photos.
  • All of the gays in the study were out of the closet: they broadcast their sexuality on Facebook. But lots of gay people don't do that. Those are probably the cases where 'gaydar' would be of most interest to most people, I think; and those people might be harder to spot.
As far as I can tell, this study wasn't intended to "prove that gaydar works". It was meant to examine how it works, by seeing whether it works very quickly (yes - in 50 ms in some cases). The authors also tested how accuracy was changed by flipping the photos upside down; this reduced accuracy but it was still well above chance.

Ultimately, we need to ask what "gaydar" means and why we find it so interesting.

On a superficial level, it just means being able to sense, from someone's appearance, if they're gay. That certainly does 'work' - if you see a guy coming out of a gay club in a tight pink Boy George t-shirt then yeah, he's probably gay. But he's (effectively) told you so, by being in that club and wearing those clothes, so that's not very interesting. That's an extreme case, but clearly people advertise their sexuality (and much else of course) all the time. Gaydar, in a weak sense, is just perception.

I think what makes "gaydar" intriguing is the stronger idea that it can go beneath such adverts. That we can see who's really gay, whether or not they admit it, even to themselves. If that were possible, then it would seem to mean that homosexuality is part of the essence of some people - in other words, that it's a biological trait.

So gaydar in a strong sense is risque. It calls to mind un-PC ideas such as physiognomy and would seem to validate various stereotypes which are the stuff of dirty jokes more than polite discussion.

Does gaydar in this strong, exciting sense exist? That's another question. This study doesn't tell us.

Tabak, J., & Zayas, V. (2012). The Roles of Featural and Configural Face Processing in Snap Judgments of Sexual Orientation. PLoS ONE, 7(5). DOI: 10.1371/journal.pone.0036671

Sunday, May 20, 2012

Plagiarism In Translation - A Dilemma

Last week I caught a plagiarist.

In a research paper published recently in a minor journal, I realized that the authors had directly copied several sentences from the Introduction to an earlier paper that I'd been reading recently. Even the references from the original were cloned. Given the context, it's extremely unlikely that they had permission.

This is an open and shut case of plagiarism. There's no dispute that plagiarism is bad. So I was about to write to the editor of the journal and the original authors, when I looked up who the culprits were: Polish.

This has happened before. A while back I detected plagiarism on a similar scale, again a very clear cut case, from some Brazilian authors.

I haven't reported either case. That's what this post is about.

Why not? It's nothing to do with the idea that in "other cultures", copying is regarded differently, so we need to make allowances for foreigners. That doesn't convince me. No, my concern is that as a native English speaker I have an unfair advantage over scientists from other parts of the world.

500 years ago Latin was the international language of science; a hundred years ago it was German, in many fields. Today, most scientific journals and all of the high-impact ones require papers to be written in English. That's the way it is, and there's no changing it, but I don't think it's fair.

Worse, many journals don't just demand English, they expect perfect English. Many editors and peer reviewers will throw out papers with clumsy phrasing or grammatical errors, regardless of where the authors are from, because "It's not my job to teach you English". In my own reviews I don't do this - if a paper is scientifically sound but the English is poor then I'll rewrite it. But I think I'm in a minority.

So you can see why non-Anglophones are tempted to copy and paste. Who knows exactly how common it is, but given that I've found two examples without specifically looking for it, it must be widespread.

Suppose you need a "boilerplate" summary of some well-worn but complicated issue for your Introduction. And nowadays you do need one, although such summaries add little to most papers. You could write it yourself, and the English might be bad, or you could copy it from a similar paper in a good journal and be sure...

Is that plagiarism? Yes. Is it illegal? Quite possibly, depending on the jurisdiction. But is it morally wrong? I don't think so.

You might say that it's stealing, and hence wrong. But is it so different to copying someone's paragraph, changing a few words and swapping some around, until it's "different" enough to pass a plagiarism check? All you've added in that case is your own linguistic coat of paint; you've still stolen the car. Yet even English speakers do that and get away with it all the time. Indeed, the banal, boilerplate "original" plagiarized text in my two examples could have started out that way.

I'd stress that I would have been writing to the journal in a flash if I thought there was any question of plagiarism of data, or of novel ideas, but there isn't.

I'm absolutely not encouraging this solution to the language problem. It really is a bad idea, because you'll probably get caught. The best scenario for you would be that the journal's plagiarism checker spots it and your paper won't get published. If you're unlucky, it will get through, and then one day down the line you'll end up on Retraction Watch. Don't do it, however tempting it may be.

But I can't feel any indignation at those who do it. It's hardly classy, but it's not malicious, selfish or damaging to science, and as such I struggle to accept that it's wrong. Which is why I have not reported my two cases.

I expect some people will disagree, so please feel free to comment. I may change my mind.

Thursday, May 17, 2012

Another Antidepressant Crashes & Burns

Yet another "promising" novel antidepressant has failed to actually treat depression.

That's not an uncommon occurrence these days, but this time, the paper reporting the findings is almost as rubbish as the drug: Translational evaluation of JNJ-18038683, a 5-HT7 receptor antagonist, on REM sleep and in major depressive disorder

So, Pharma giant Janssen invented JNJ-18038683. It's a selective antagonist at serotonin 5-HT7 receptors, making it pharmacologically rather unusual. They hoped it would work as an antidepressant. It didn't - in a multicentre randomized controlled trial of 230 depressed people, it had absolutely no benefits over placebo. A popular existing drug, citalopram, failed as well:

About the only thing JNJ-18038683 did do in humans was to reduce the amount of dreaming REM sleep per night. This REM suppressing effect is also seen with other antidepressants and this is evidence that the drug does do something - just not what it's meant to. Being charitable you could call this a failed trial.

Ouch! But it gets better. Unhappy that JNJ-18038683 bombed, Janssen reached for their copy of the Cherrypicker's Manifesto. This is a new statistical method, proposed by fellow Pharma company GSK in a 2010 paper, which consists of excluding data from study centres with a very high (or very low) placebo response rate.
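To make the worry concrete, here's a toy version of the "enrichment window" idea, with per-centre improvement scores I've invented purely for illustration. Excluding the centres with the strongest placebo response mechanically inflates the apparent drug effect:

```python
# Invented per-centre mean improvement scores - not Janssen's actual data.
centres = {
    "A": {"placebo": 9.0, "drug": 9.2},   # very high placebo response
    "B": {"placebo": 4.0, "drug": 6.5},
    "C": {"placebo": 5.0, "drug": 7.0},
    "D": {"placebo": 8.5, "drug": 8.4},   # very high placebo response
    "E": {"placebo": 4.5, "drug": 6.8},
}

def mean_effect(cs):
    """Average drug-minus-placebo improvement across centres."""
    diffs = [c["drug"] - c["placebo"] for c in cs.values()]
    return sum(diffs) / len(diffs)

# "Enrichment window": keep only centres with a moderate placebo response.
kept = {name: c for name, c in centres.items() if c["placebo"] <= 7.0}

print(round(mean_effect(centres), 2))   # all centres: 1.38
print(round(mean_effect(kept), 2))      # after filtering: 2.27 - looks bigger
```

Which is exactly why such exclusions need to be pre-planned, not decided after peeking at the results.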

Anyway, after applying this "filter" JNJ-18038683 seemed to do a bit better than placebo, but the benefit over placebo still wasn't statistically significant - with a p value of 0.057, the wrong side of the sacred p=0.05 line (on page 33).
Yet Page 33's "trend towards statistical significance" magically becomes "significant" - in the Abstract:
[with] a post hoc analyses (sic) using an enrichment window strategy... there was a clinically meaningful and statistically significant difference between JNJ-18038683 and placebo.
Well, no, there wasn't actually. It was only a trend. Look it up.

That aside, the problem with the whole filter idea is that it could end up biasing your analysis in favour of the drug, leading to misleading results. The original authors warned that "data enrichment is often perceived as a way of improperly introducing a source of bias... In conventional RCTs, to overcome the bias risk, the enrichment strategy should be accounted for and pre-planned in the study protocol." They should know, as they invented it, but Janssen rather oddly say the exact opposite: "This methodology cannot be included in a protocol prospectively as it will introduce operational bias in that scheme."


Anyway, even after the filter technique, citalopram didn't work either... bad news for citalopram, except, was it citalopram at all? This is really unbelievable: Janssen don't seem clear on whether they compared their drug to citalopram, or to escitalopram - a quite different drug.

They say "citalopram" in most cases, but they have "escitalopram" instead, in three places, including, mysteriously, in a "hidden" text box in that graph I showed earlier:

I'm not making this up: I stumbled upon a text box which is invisible, but if you select it with the cursor, you find it contains "escitalopram"! I have no idea what the story behind that is, but at best it is seriously sloppy.

Come on Janssen. Raise your game. In the glory days of dodgy antidepressant research, your rivals were (allegedly) concealing data on suicides and brushing whole studies under the carpet, to make their drugs look better. Despicable, but at least it had a certain grandeur to it.

Bonaventure, P., Dugovic, C., Kramer, M., De Boer, P., Singh, J., Wilson, S., Bertelsen, K., Di, J., Shelton, J., Aluisio, L., Dvorak, L., Fraser, I., Lord, B., Nepomuceno, D., Ahnaou, A., Drinkenburg, W., Chai, W., Dvorak, C., Carruthers, N., Sands, S., & Lovenberg, T. (2012). Translational evaluation of JNJ-18038683, a 5-HT7 receptor antagonist, on REM sleep and in major depressive disorder. Journal of Pharmacology and Experimental Therapeutics. DOI: 10.1124/jpet.112.193995

Wednesday, May 16, 2012

Why We Sleep, Revisited

I've got another guest post over at Discover magazine: Is the Purpose of Sleep to Let Our Brains “Defragment,” Like a Hard Drive?

It's an expanded version of two Neuroskeptic posts(1,2) about the theory that the job of slow-wave sleep is to prune connections in the brain, connections which tend to become stronger while we're awake and might become too strong without periodic resetting.

One of the commenters on the Discover post pointed out that this idea is a bit like a much older idea about sleep, from Francis Crick (of discovering-the-structure-of-DNA fame). Back in 1983, Crick and Graeme Mitchison proposed that dreaming sleep serves to help us "unlearn": The Function of Dream Sleep
Their idea was a bit different, but it was really very elegant.

The sleeping brain, they said, is cut off from real sensory input, and is subject only to essentially random activity variations. However, sometimes these meaningless inputs may be 'interpreted' as having meaning, activating representations (concepts, thoughts, memories) that we've learned to recognize when awake.

These "patterns" that the brain wrongly "sees" in noise are what we experience as dreams. Crick and Mitchison's point is that, ideally, a pattern recognition system (like the brain) shouldn't be picking up patterns from random noise because that would be a sign that it was biased in favor of those patterns - "obsessed" with them, as it were, and liable to see them everywhere. So it would be good if there were some way of identifying the brain's overlearned biases and (partially) unlearning them.

That's what dreams do, somehow, according to Crick. It's as if dreams were a self-administered Rorschach test that the brain uses to work out what's "weighing on its mind"! Incidentally, this is an idea I once suggested (much less clearly) myself.
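Crick and Mitchison's scheme can be caricatured in a few lines of code, in the spirit of the Hopfield-style network models of the same era. Everything here - network size, learning rule, the unlearning constant - is a toy of my own construction, not their actual model:

```python
import random

random.seed(1)
N = 32   # toy network of 32 binary (+1/-1) neurons

# Three "daytime memories", stored Hebbian-style: co-active units wire together.
patterns = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(3)]
W = [[0.0] * N for _ in range(N)]
for p in patterns:
    for i in range(N):
        for j in range(N):
            if i != j:
                W[i][j] += p[i] * p[j] / N

def settle(state, W, sweeps=10):
    """Update units one at a time until the network falls into an attractor."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(N):
            h = sum(W[i][j] * s[j] for j in range(N))
            s[i] = 1 if h >= 0 else -1
    return s

# "Dreaming": feed in pure noise and see which state the network settles into.
dream = settle([random.choice([-1, 1]) for _ in range(N)], W)

# "Unlearning": slightly weaken whatever the dream state was, so attractors
# strong enough to capture random input are gradually flattened.
eps = 0.05
W_after = [[W[i][j] - eps * dream[i] * dream[j] / N if i != j else 0.0
            for j in range(N)] for i in range(N)]
```

The claim being cartooned: the states a network reaches from random input reveal its over-strong biases, and anti-Hebbian "unlearning" of exactly those states counteracts them.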

It's a beautiful and ingenious theory, although as the authors admitted, it would be very hard to test and it leaves wide open the question of how dreams could cause memories to be "unlearned" or indeed whether unlearning is even possible in the brain. It's also not very similar to the modern "defrag" theory, because Crick was talking about the dreaming rapid eye movement stage of sleep, not slow-wave sleep.

Crick, F., & Mitchison, G. (1983). The Function of Dream Sleep. Nature, 304, 111-114.

Tuesday, May 15, 2012

The next four episodes of "Mann Ki Baat" to be telecast

The next four episodes of "Mann Ki Baat":
1. Psychological Aspects of Accidents, especially Road Accidents: 19th May 2012, repeat telecast on 21st May 2012 at 8.30 a.m., DD National
2. Coping with Disasters and Trauma: 26th May 2012, repeat telecast on 28th May 2012 at 8.30 a.m., DD National
3. Disabilities and Mental Health: 2nd June 2012, repeat telecast on 4th June 2012 at 8.30 a.m., DD National
4. Bereavement: 9th June 2012, repeat telecast on 11th June 2012 at 8.30 a.m., DD National

Mann Ki Baat is a television program telecast on the National Channel of Doordarshan.
The program focuses on issues related to the mind and mental health - of individuals and of society at large. Today, we are becoming increasingly aware of the psychological havoc being wreaked by the pressures in our lives due to the breakdown of social norms, increasing competitiveness, degradation of moral and ethical values and loss of faith in human relationships.

The results are there for all to see - stressed out individuals with chaotic lifestyles, struggling to cope, feeling helpless and alienated in their own homes, families and communities. Many times individuals do not understand their own inner turmoil and feelings of distress. Even if they do become aware of them, they are at a loss about who to share them with. They may wonder if others will listen to their problems or concerns, and they fear being dismissed as trivial or, worse still, being ridiculed!

 When feelings, emotions or thoughts are suppressed/bottled up inside a person over a period of time, they can lead to psychological or mental breakdown.  Sometimes, such severe or chronic stresses can become the triggering factors in the development of mental illness or psychiatric disorders that will eventually require specialized forms of treatment.

That is why it is crucial to recognize early signs of psychological distress and address them immediately, through self-help or by seeking help from outside. Even better is for a person to talk over, share and communicate his or her feelings, conflicts, problems or dilemmas with others - others who are sympathetic listeners and caring well-wishers. This way, an emerging psychological problem can be addressed in a timely manner and resolved before it becomes a major one.
Caring for our mind is our greatest responsibility to ourselves.  A mind that is lost, helpless, fearful and depleted is like a rudderless boat adrift with no direction and no goal in life to pursue.

Mann Ki Baat was born out of this need to create a platform, an awareness on mental health issues – both the ordinary ones that affect our daily lives and the not so ordinary issues that may not directly affect us but someone we know or care for.

We all have a story to tell, a problem that has been nagging us, feelings we have been struggling with, a conflict that has been consuming us.  We need to acknowledge them first and learn to address them in a meaningful manner, either by working with oneself or through help from outside.

Our mind is the most precious asset we possess, and so also is our Mann Ki Baat, for it is the MIND that really MATTERS.

Saturday, May 12, 2012

Shyness By Any Other Name

People think of "social anxiety disorder" as more serious than "social phobia" - even when they refer to exactly the same thing.

Laura C. Bruce et al. did a telephone survey of 806 residents of New York State. They gave people a brief description of someone who's uncomfortable in social situations and often avoids them. The question was: should they seek mental health treatment for this problem?

When the symptoms were labelled as "social anxiety disorder", 83% of people recommended treatment. But when the same description was deemed "social phobia", it dropped to 75%, a statistically significant difference.

OK, that's only an 8% gap. It's a small effect, but then the terminological difference was a small one. "Anxiety disorder" vs "phobia" is about as subtle a distinction as I can think of, actually. Imagine if one of the options had been a label that didn't imply anything pathological - "social anxiety" or "shyness". That would probably have had a much bigger impact.
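For the curious, the significance claim is easy to sanity-check with a two-proportion z-test, assuming (hypothetically - the paper will have the exact split) that the 806 respondents divided evenly between the two labels:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions,
    using the pooled estimate under the null hypothesis of no difference."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 83% vs 75% recommending treatment, ~403 respondents per label.
z = two_prop_z(0.83, 403, 0.75, 403)
print(round(z, 2))   # ~2.79, comfortably past the 1.96 cutoff for p < 0.05
```

So an 8-point gap in a sample this size is indeed more than noise, even if the effect itself is modest.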

This matters, especially with regard to current debates over the upcoming DSM-5 psychiatric diagnostic manual. Lots of terminological changes are planned. This study is a reminder that even small changes in wording can have an impact on how people think about mental illness. Last week I covered another recent piece of research showing that beliefs about other people's emotions affect how people rate their own mental health.

My point is: DSM-5 will not merely change how professionals talk about the mind. It will change how everyone thinks and behaves.

Bruce, L. (2012). Social Phobia and Social Anxiety Disorder: Effect of Disorder Name on Recommendation for Treatment. American Journal of Psychiatry, 169(5). DOI: 10.1176/appi.ajp.2012.11121808

Friday, May 11, 2012

Recovery Tips...Life-Enhancing Things

10 Life-Enhancing Things You Can Do in Ten Minutes or Less

Created Apr 17 2010 - 9:48am
It usually takes us much longer to change our moods than we’d like it to take. Here are ten things you can do in ten minutes or less that will have a positive emotional effect on you and those you love.
1.    Watch "The Last Lecture" by Randy Pausch. See it online. This is a deeply moving segment that may be the best ten minutes you've ever invested in front of a computer.
2.    Spend a little while watching the sunset with your mate. Nothing extra is necessary. Just sit and take in the natural beauty of the sky and appreciate being able to share it with the one you love.
3.    Sit quietly by yourself. It doesn't really matter where or when. Just let your feelings bubble up and then experience the thoughts flowing out of your mind. Clearing your head and heart will give you extra energy to get through the rest of the day.
4.    Write a thank you note to your mate. When was the last time you thanked your partner for just being who he or she is and being with you? Doing this in writing will give your partner something to cherish for the rest of his or her life.
5.    Take out your oldest family photo album and look through it. The experience will fill you with fond memories and perhaps make you a bit wistful for days gone by.
6.    Play with a child. Most kids have short attention spans; ten minutes of quality time from a loving adult can make their day. It will also help you stay in touch with the child inside of you.
7.    Visualize or imagine a positive outcome for any issue. Medical doctors recommend visualization to patients with chronic and potentially fatal illnesses. If it can help them, it can do the same for you. 
8.    Go to bed with the one you love ten minutes earlier than usual. Then spend that time just holding each other. Let the feeling of warmth from your mate move through you.
9.    Hang out by some water. Studies show that hospital patients who can see a natural body of water from their beds get better at a 30 percent faster rate. If you're not near the coast or a lake, try taking a bath. Doing so is also healing.
10.  Get your body moving. Shake, twist, and jump around. Let yourself feel the joy of moving to your favorite music, or just the sounds in your head. Run, walk, and bike to your heart's content. You will live longer and love it more.
Sadly, many people measure happiness by how long the experience lasts. The truth is that a few minutes of joy here and there can make a big difference in what you get out of life.

Published on Psychology Today.

Thursday, May 10, 2012

Scanning The Acidic Brain

According to University of Iowa researchers Vincent A. Magnotta and colleagues, any neuroscientist with an MRI scanner could soon be able to measure the acidity (pH) of the human brain in great detail: Detecting activity-evoked pH changes in human brain.

If it works out, it would open up a whole new dimension of neuroimaging - and might be able to answer some of the biggest questions in the field.

The method relies on measuring T1 relaxation in the rotating frame (T1ρ). Essentially, it's about the rate at which protons are swapped between water molecules and proteins. That rate is known to depend on pH.

Anyway. It certainly looks impressive. Using a standard 3 Tesla MRI scanner, they were able to image the whole brain once every 6.6 seconds - only slightly slower than conventional fMRI measurements of brain activity, where 2 or 3 seconds is more usual. The spatial resolution was comparable to fMRI.

Here's how it did on some bottles of jelly -

Then they moved on to mouse brains (the differences are smaller here)...

And finally they scanned some people. They were able to detect the (very small) pH changes caused by hyperventilation, which raises pH, and breathing air enriched in carbon dioxide, which lowers it.

Lovely pictures I'm sure you agree, and it's a very clever methodology from a technical point of view. But what will it mean for neuroscience?

Well, for one thing, it might be able to help resolve some of the debates over what conventional fMRI is actually measuring. For example, some neuroscientists believe that many (seemingly) interesting fMRI results may actually be (at least partially) reflections of subtle changes in breathing rate. Measuring acidity, an indirect proxy for breathing, could start to answer such questions.

The main question though is, what are we going to call the new method? "T1ρ MRI"... not a terribly catchy name.

Maybe MRalkalI?

Magnotta, V., Heo, H., Dlouhy, B., Dahdaleh, N., Follmer, R., Thedens, D., Welsh, M., and Wemmie, J. (2012). Detecting activity-evoked pH changes in human brain. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1205902109

Wednesday, May 9, 2012

The 70,000 Thoughts Per Day Myth?

Following on from a discussion on Twitter, I've been trying to find out the origin of the strange meme that the average person has "70,000 thoughts per day".

That's a lot of thoughts. It's about 3000 per hour or 50 per minute, just under one per second.
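Just to check the arithmetic (a quick sketch; the figures are the ones quoted above):

```python
thoughts_per_day = 70_000
per_hour = thoughts_per_day / 24    # about 2,917 - "about 3000 per hour"
per_minute = per_hour / 60          # about 48.6  - "50 per minute"
per_second = per_minute / 60        # about 0.81  - "just under one per second"
print(per_hour, per_minute, per_second)
```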

A lot of people believe this, according to Google. Even that esteemed neuroscientist and philosopher Dr Deepak Chopra agrees, although, being a rigorous, skeptical scientist, he acknowledged some error in his measurements and said "60,000 to 80,000".

But where does this number come from?

Searching for the source, I discovered that 70k is only one such estimate. Other popular figures include 15k, 60k, and "12k to 50k". This last one is the only number that ever seems to come with a citation: it's attributed to "The National Science Foundation (NSF)".

This claim was made at least as far back as 2003 by a certain Charlie Greer ("Helping Plumbing, HVAC, and Electrical service contractors Sell More at Higher Profits").

But the NSF is a funding organization. Their main job is to hand out US government money to all kinds of different researchers. They don't do research as such, or at least not much, so it seems unlikely that the NSF actually said this. Perhaps they funded the research that did. But whose research? I can't find any specific sources at all.

One suggestion made on Twitter was that it could derive from Daniel Kahneman's idea that the "psychological present" is a window of about 3 seconds - everything else is either past or future.

Kahneman has in fact used NSF funding, although so have most scientists in the USA.

Now Kahneman himself said in a talk recently that there are 600k of these "psychological presents" per month, i.e. 20k per day. If you divide a day into 3 second chunks you get about 29k a day, but I guess if you assume we're asleep for a third of the day that makes 20k.
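The chunk arithmetic, as a quick sketch:

```python
chunks_per_day = 24 * 60 * 60 / 3        # 28,800 three-second chunks - "about 29k"
kahneman_per_day = 600_000 / 30          # 20,000 per day from 600k per month
waking_chunks = chunks_per_day * 2 / 3   # 19,200 if we're asleep a third of the day
print(chunks_per_day, kahneman_per_day, waking_chunks)
```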

OK. I'm not sure life is really composed of neat equal chunks like that, and anyway, those are chunks of experience, not "thoughts"; but even if you ignore that, the weird thing is that very few people think we have 20k thoughts per day. 70k is far more common on Google.

Does anyone know where this number comes from?

Monday, May 7, 2012

Child Bipolar Disorder Still Rare

Bipolar disorder usually strikes between the ages of 15 and 25, and is extremely rare in preteens, according to a major study: Age at onset versus family history and clinical outcomes in 1,665 international bipolar-I disorder patients

The findings are old hat. It's long been known that manic-depression most often begins around the age of 20, give or take a few years. Onset in later life is less common while earlier onset is very unusual.

The main graph could have been lifted from any psychiatry textbook of the last century:

The red bars are the data. Ignore the black line; it just shows an imaginary 'even' distribution over the lifespan.

Why am I blogging about these remarkably unremarkable results? Because they undermine the theory, popular in certain quarters but highly controversial, that 'child bipolar' or 'pediatric bipolar' is a major health problem.

The study confirmed that early-onset bipolar I does exist, but just 5% of the bipolar I patients had an onset before the age of 15. Assuming a lifetime prevalence of 1% for bipolar I disorder, which is about right, that makes about 0.05%, 1 in 2000 kids, about the same prevalence as Down's Syndrome. Even that's an overestimate, though, because this sample was enriched for early-onset cases: some of the participating clinics were child and adolescent only.
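The prevalence estimate, spelled out (using the round figures quoted above):

```python
lifetime_prevalence = 0.01   # bipolar I affects roughly 1% of people over a lifetime
early_onset_share = 0.05     # 5% of the patients had onset before age 15
childhood_rate = lifetime_prevalence * early_onset_share  # about 0.05%
one_in = 1 / childhood_rate                               # about 1 in 2000 kids
print(childhood_rate, one_in)
```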

There are a few caveats. This was a retrospective study, which took adults diagnosed with bipolar and asked when their symptoms first appeared. It's possible that early onset cases were under-sampled, if they were less likely to survive to adulthood, or get treated. The generally milder bipolar II might also be different from the bipolar I studied here. But in general, these numbers support the traditional view that childhood bipolar is just not very prevalent.

Baldessarini, R., Tondo, L., Vázquez, G., Undurraga, J., Bolzani, L., Yildiz, A., Khalsa, H., Lai, M., Lepri, B., Lolich, M., Maffei, P., Salvatore, P., Faedda, G., Vieta, E., and Tohen, M. (2012). Age at onset versus family history and clinical outcomes in 1,665 international bipolar-I disorder patients. World Psychiatry, 11(1), 40-46. DOI: 10.1016/j.wpsyc.2012.01.006

Sunday, May 6, 2012

Politics: A Dialogue

"There's no point in voting. All the parties are the same."
"Hang on. Are you saying the Communists are the same as the conservatives? And that they're both the same as the Nazis?"
"Well, I meant all the main parties are the same."
"OK. But what's a main party?"
"A party that gets most of the votes, of course."
"Right. So by saying the main parties are the same, you're saying that most people broadly agree about politics."
"Erm... yes, but when you put it that way it seems much less fun. Anyway, I for one reject the discredited status quo and support..."
"...the Nazis."
"No! How dare you call me a Nazi?"
"You're effectively behaving as one. You're not voting against them, which has the effect of helping them."
"That's ridiculous. I like Schindler's List as much as the next guy. I hate Nazis!"
"Just not enough to do the one thing they don't want, to vote against them."
"Erm... look, don't blame me. Blame mainstream politicians. They're the ones who've made people disillusioned. They're out of touch, and don't understand everyday people like me."
"They understand you well enough to convince you to vote for them, or at least not against them. Well enough to make you believe that they'll always be in power, that they are the natural leaders. I think they understand you all too well."
"That's not what I mean. I want people like me to be in power."
"Well, run for office then."
"Ha! I'd lose."
"Because people like you wouldn't vote for you."
"That's crazy. Of course I'd vote for me."
"Would you? Someone like you was probably on the ballot, as an Independent or a minor party candidate, but I bet you didn't check. You just decided there was no point, because he would never win."
"And I was right!"
"Don't you see the problem? You're only right, because people like you think you're always going to be right. It's a self-fulfilling prophecy that you'll lose. That's the problem with your party!"
"My party? I'm not in a party!"
"You are, you just don't know it. A party is a group sharing common interests and beliefs. There are lots of people like you. Everyone is in a party; some of the parties are just badly organized, and don't get any votes or representatives."
"Because we don't want power!...wait...or do we?"

Saturday, May 5, 2012

More Depressed Than Average?

Whether we think of ourselves as "depressed" or "anxious" depends on what we think about other people's emotional lives, rather than our own, according to an important paper just published: Am I Abnormal? Relative Rank and Social Norm Effects in Judgments of Anxiety and Depression Symptom Severity

The work appears in the obscure Journal of Behavioral Decision Making, which is downright criminal. It deserves to be in the British Journal of Psychiatry... and it's not often I think that about a paper.

In the first experiment, the authors asked people how many days per month they felt “depressed, sad, blue, tearful” or had “excessive anxiety about a number of events or activities.” They then asked a series of questions designed to work out how each person thought other people would answer the same question. From this, they could work out where each individual thought they ranked within the general population, in terms of depression or anxiety symptoms.

Take a look. The top panel shows someone who felt depressed on 5 days a month, but believed this put him in the most depressed 70% of people. The second person felt depressed twice as often, but she thought she was below average.

They found that perceived rank was strongly correlated with whether people thought they "had depression" or "had anxiety" - much more strongly than actual frequency of symptoms. "Having depression" meant "being more depressed than other people".

That's just a correlation and doesn't prove causation, but in the second experiment, they randomly assigned people to get different versions of a survey which manipulated perceived rank, and they confirmed that rank was indeed associated with how "disabling" they felt a given level of symptoms would be.

Now, this is just common sense, in a way. Of course whether you think of yourself as abnormal will depend on what you think of as normal - that's what "abnormal" means. We understand ourselves in the context of other people.

But this common sense is maybe not so common nowadays; you can read a hundred papers about the chemistry, genetics or causes of "depression" without any consideration of what "depression" (i.e. "abnormal", as opposed to "normal", mood) actually is.

The implications are big. Here's my main concern. Right now, a lot of people think it's a good idea to promote the message that mental illness is very common. Their stated goal is that by 'normalizing' mental illness, we'll destigmatize it. This will both help the mentally ill to cope, and encourage people to talk about their own mental health and get help.

All very nice. I've accused such campaigns of being based on dodgy stats, but this paper suggests that such campaigns could end up having exactly the opposite effect from that intended - they could lead to under-diagnosis, and increased stigma.

Suppose being depressed or anxious becomes seen as more 'normal'. According to these data, this will make people who are depressed or anxious less likely to seek help, for any given level of symptoms. Change people's perceptions of other people, and you'll change how they see themselves.

Worse, normalizing distress could - paradoxically - make those who do seek help seem more abnormal. Think about it: if depression and anxiety are normal, surely only an abnormal person would need special help to deal with them.

It's a small step from this to the idea that mental illness is mere personal weakness, laziness, attention-seeking, or scrounging. 'What's your problem? Everyone feels down or worried sometimes... most of us just deal with it.' If everyone is mentally ill, then no-one is really mentally ill... so the "mentally ill" must have something else wrong with them. Not very nice.

I'm not sure if this has happened, or will ever happen, but it's something to think about.

Melrose, K., Brown, G., and Wood, A. (2012). Am I Abnormal? Relative Rank and Social Norm Effects in Judgments of Anxiety and Depression Symptom Severity. Journal of Behavioral Decision Making. DOI: 10.1002/bdm.1754

Thursday, May 3, 2012

What and where to find the best drug center

Drugs and alcohol are examples of the things we have to deal with in daily life. Many people are trapped in the shadow of alcohol and drug addiction. If you want to help them, choosing the right drug treatment program is the least you can do to show your support. There are actually many people who want to help those with alcohol and drug addiction problems; the trouble is that they do not know what to do when they find someone who needs their help. Having seen this kind of problem, I am here to tell you what to do if you want to help but don't know how. One of the first things you need to do is a quick search for the best drug rehab center that can treat them.

The easiest way to find such a rehab center is to ask a professional association that deals with this kind of problem. They will usually handle the situation by taking the person to the best drug rehab center for treatment. If there is no professional drug association near you, however, you will have to look for the best rehab centers on the internet. You will find many drug centers offering all kinds of programs for the addict to choose from. Do not gamble on this one: if you really want to help the addict, find the best drug rehab center, one genuinely able to treat them, by looking at the facilities and programs on offer.

Wednesday, May 2, 2012

Spurious Positive Mapping of the Brain?

Many fMRI studies could be giving false-positive results according to an important new paper from Anders Eklund and colleagues: Does parametric fMRI analysis with SPM yield valid results?—An empirical study of 1484 rest datasets.

The authors examined the SPM8 software package, probably the most popular tool for analyzing neuroimaging data.

Their approach was beautifully simple. They wanted to check how often conventional analysis of fMRI would "find" a signal when there wasn't really anything happening. So they took data from nearly 1,500 people who were scanned when they were just resting, and saw what would happen if you looked for "task related" activations in those scans, even though there was in fact no task. It's a very clever use of the resting state data.

Eklund et al ran the analysis many thousands of times, under various different conditions. This is the key finding:

This shows the proportion of analyses which produced significant "activations" associated with various different "tasks". In theory, the false positive rate should be way down at the bottom at 5% in each case. That's the error rate they told SPM8 to provide. As you can see, it was often much higher. Oh dear.

The error rate depended on two main things. Most important was the task design. Block designs were much worse than event-related designs (see the labels at the bottom: B1,2,3,4 are block, E1,2,3,4 are event.) The longer the blocks, the more errors. B4, the most error-ridden design of all, corresponds to 30 second blocks.

That's bad news because that's a very common design.

Secondly, the repeat time (TR) mattered, especially for block designs. The TR is how long it takes to scan the whole brain once. The longer the TR, the better, the data showed: 1 second TRs are really dodgy. Luckily, they are rarely used. 2 seconds is OK for most event-related designs, but block designs really suffer. 3 seconds is even better.

Because most fMRI studies today use 2-3 second TRs, this is somewhat reassuring, but for block design B4 the error rate was still up to 30% even with TR=3. Oh dear, oh dear.

So what went wrong? It's complicated, and you should read the paper, but in a nutshell the problem is that fMRI data analysis assumes that there are only two sources of data: the real brain activation signal, and white noise. The key assumption is that it's white noise, which essentially means that it is random at any moment in time: knowing about what the noise did in the past tells you nothing about what it will do in the future. "Random" noise that's actually correlated with itself over time is not white noise.

Now noise in the brain is certainly not white, for various reasons, including the effects of breathing and heart rate (which of course are cyclical, not random.) All fMRI analysis packages try to correct for this - but Eklund et al have shown that SPM8's approach doesn't manage to do that, at least for many designs.
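To see why this matters, here's a minimal toy simulation - not the paper's actual pipeline, and the design length, AR(1) coefficient, and scan counts are all illustrative assumptions - showing how a GLM that assumes white noise produces too many false positives when the noise is actually correlated with its own past:

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans, tr = 200, 2.0     # 200 volumes at TR = 2 s (illustrative values)
n_sims = 2000
crit = 1.96                # normal approximation to the t critical value at p < 0.05

# 30-second on/off block regressor, like the paper's worst-case "B4" design
t = np.arange(n_scans) * tr
block = ((t // 30) % 2).astype(float)
X = np.column_stack([np.ones(n_scans), block - block.mean()])
xtx_inv_11 = np.linalg.inv(X.T @ X)[1, 1]

def false_positive_rate(phi):
    """Fit an ordinary GLM (which assumes white noise) to pure AR(1)
    noise - no signal at all - and count how often the block regressor
    comes out 'significant'."""
    hits = 0
    for _ in range(n_sims):
        e = rng.standard_normal(n_scans)
        y = np.empty(n_scans)
        y[0] = e[0]
        for i in range(1, n_scans):
            y[i] = phi * y[i - 1] + e[i]   # each point depends on the last: not white
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        sigma2 = resid @ resid / (n_scans - 2)
        tval = beta[1] / np.sqrt(sigma2 * xtx_inv_11)
        hits += abs(tval) > crit
    return hits / n_sims

print(false_positive_rate(0.0))  # white noise: close to the nominal 5%
print(false_positive_rate(0.5))  # autocorrelated noise: well above 5%
```

With white noise the false positive rate hovers around the nominal 5%; with moderately autocorrelated noise it climbs well above that, which is the same qualitative effect Eklund et al report for slow block designs.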

What about rival fMRI software like FSL or BrainVoyager? We don't know. They use different approaches to noise modelling, which might mean they do better, but maybe not.

And the really big question: does this mean we can't trust published SPM8 results? Does SPM stand for Spurious Positive Mapping? Well, that's also not clear. All of Eklund et al's analyses were based on single subject data. But most fMRI studies pool the results from more like 20 or 30 subjects. Averaging over many subjects might make the false positives cancel out, but we don't yet know if that would solve the problem or only lessen it.

Eklund, A., Andersson, M., Josephson, C., Johannesson, M., and Knutsson, H. (2012). Does parametric fMRI analysis with SPM yield valid results?—An empirical study of 1484 rest datasets. NeuroImage. DOI: 10.1016/j.neuroimage.2012.03.093