Tuesday, August 30, 2011

On Antipsychiatry

So leading US psychiatrist Stephen Stahl is annoyed at Daniel Carlat (of The Carlat Psychiatry Blog and many other publications).

After first surveying the current outlook for the development of new psychiatric drugs - not good, with many companies pulling out - Stahl laments:

Undoubtedly this is to the great delight of the anti-psychiatry community, lights up the antipsychiatry blogs (e.g., Carlat, http://carlatpsychiatry.blogspot.com/), who attract the Pharmascolds, scientologists and antimedication crowd who believe either there is no such thing as mental illness, that medication should not be used, or both.



Did you know that psychiatric illnesses are pure inventions of Pharma and their experts to treat patients that do not exist with drugs that are dangerous and do not work with the purpose only of profiting themselves? Stop the profits! Make mental illness go away by legislation and committee!


Stahl ends with the warning: Be careful what you ask for. You might just get it - "it" being an end to drug development in psychiatry.



Well, I would say the same to him.



Stahl paints opponents of modern pharmaceutical industry behaviour as "antipsychiatrists". They're not. Well, he only names one of them, Daniel Carlat, and he's certainly not. Carlat edits the Carlat Psychiatry Report. Let's take a look at the latest issue:



Benzodiazepines: A Guide to Safe Prescribing - discusses benzodiazepines, including a helpful table of their doses and half-lives. Useful to someone planning to prescribe these drugs, that is; not something many anti-psychiatrists would do. Says that "They work quickly and effectively for anxiety and agitation...In most cases benzodiazepines have a benign side-effect profile..." Hardly likely to please the antimedication crowd.



Update on Medications for PTSD - including a review of trials of antidepressants, antipsychotics, and more exotic drugs. Says that psychotherapy is the key to treating PTSD, but that medication can be helpful: "Getting some comfort from meds can often enable a patient to more easily face" the hard task of therapy. Not enormously pro-medication, but very far from being anti.



Combined Antidepressants No More Effective Than Monotherapy - discusses a recent study finding that starting depressed patients on a combo of two antidepressants offers no benefits over just one drug. So, the piece concludes, "We recommend never using antidepressants, and banning them all forever"... no wait, that's what it would have said if Stahl were right. It actually said "we recommend...starting with a single antidepressant". Not none.



Overall Carlat is, as far as I can see, really pretty moderate. Yes, he's been critical of certain drugs, of Pharma-influenced psychiatrists and the culture of giving doctors freebies to promote products. Nonetheless, he believes that mental illness exists, and he thinks that medication can be useful in treating it.



Maybe Stahl's right and Carlat leads a secret double life as a Scientologist. Maybe he is the reincarnation of R. D. Laing, or Thomas Szasz in a rubber mask. If not, though, branding him an antipsychiatrist shows that Stahl is unclear on what "psychiatry" is.



Psychiatry means the diagnosis and treatment of mental illness. Carlat, like many other like-minded critics, is trying to improve that process by encouraging correct diagnosis and appropriate treatment.



When Carlat criticizes, say, the psychiatry textbook that turned out to have been written with "help" from a drug company, he's doing so, I assume, because, as a psychiatrist who cares about psychiatry, he doesn't like seeing his field corrupted by propaganda.



This is why Stahl should heed his own warning: Be careful what you ask for.



Because Stahl seems to be asking for all the opponents of the excesses of the modern pharmaceutical industry to be opponents of psychiatry itself. At the moment, they're not. There are many, psychiatrists and others, who are trying to improve psychiatry, by protecting it from what they see as negative influences.



Maybe they're wrong about which influences are negative; maybe Pharma has had a more positive impact than they think. But even if they're wrong, they're not anti-psychiatry: they're pro-psychiatry.



However, if Stahl succeeds in painting all of these people as outside the psychiatric mainstream, he might find that psychiatry, stripped of such voices of sanity, turns into something so crazy that true antipsychiatry becomes the only reasonable option.

Sunday, August 28, 2011

Confused

What is confusion?





According to Collins English Dictionary, the main meaning of the word "confused" is:

confused [kənˈfjuːzd] adj
1. feeling or exhibiting an inability to understand; bewildered; perplexed
That sounds about right. But hang on. Isn't there something odd about this: "feeling or exhibiting an inability to understand..."?



Those are two completely different things. Sometimes people exhibit a lack of understanding and don't feel it - they think they understand, but actually they don't. Indeed, that's the worst kind of confusion, because it leads to people making mistakes based on wrong assumptions. Whereas feeling confused is much less of a problem. If you know you're confused, you won't go around acting as if you're not.



The feeling of confusion happens when you've just avoided being confused, or just come out of it. Confusion is a feeling and also a state, and the two are not just separate but (to some extent) mutually exclusive. If you feel confused, you can't actually be seriously confused.



Yet we use the same word for both, and the dictionary treats them both as being not just the same but part of the same definition. Confusing.



Or take being drunk. "Drunk" is a feeling, certainly. It's also a state, and they only sometimes go together. You can be drunker than you feel, with hilarious or tragic consequences. Everyone knows that you can't trust a drunken person to know how drunk they are.



Consider "depression". Depression is a feeling. No question about that. We've all felt at least a little depressed. Depression is also a state, that certain people go into as a result of mental illness, physical illness or as a side effect of certain drugs.



But the state of depression is no more equivalent to the feeling of depression than being confused means feeling confused. In my experience of depression, feeling depressed is a sign that I'm only slightly depressed. When I'm really depressed, I don't think I'm depressed at all.



This is one of the most insidious things about depression: it 'creeps up on you'. Over a period of time - usually several days, in my case, but it can be much longer or shorter - your mind changes.



You stop noticing opportunities, and become obsessed with risks.
Your ability to take decisions and come up with ideas withers and your imagination fails you. Your thoughts get stuck in loops. You feel weary doing the things you used to enjoy and angry around people you used to like.



In other words, your mind changes. Your memory, thinking and perceptions are all altered - but you don't notice that. You notice the effects, of course, but you think they're outside: you think the world has suddenly become less friendly. A classic case of confusion, in the worst sense.

Friday, August 26, 2011

Drug Treatment - Unlimited Possibilities

When a drug treatment center leaves a trail of broken promises, drug addicts face terrible suffering, and so do their family members. When a center conforms to proper standards, addicts benefit immensely. But how can an ordinary individual keep track of which is which? It is much easier to turn to somebody like Luxury Drug Rehab, which has shaken up the field by taking bold public stances on these matters.


It is unfortunate that no system has yet been set up to proactively verify the authenticity of drug treatment centers. Only once a patient is admitted does the true quality of a center become apparent. Life leaves little room for experimenting with such crucial decisions: the effects of a drug rehab program, good or bad, can stay with the addict for a long time.

Some people even stay silent because of various social obligations. Luxury Drug Rehab, however, does not hesitate to endorse or reject drug rehab centers on the basis of its regular monitoring and stringent criteria.

When somebody points out the failings of an alcohol rehab program, one can presume that they have been badly affected by its treatment procedures. It must be noted that such unverified treatment programs can bring formidable disasters into the lives of alcohol or drug addicts. Whether out of social obligation or because their lives are at stake, people do eventually raise their voices. When Luxury Drug Rehab evaluates a center, it records all such experiences alongside its own set of criteria.


About the Author:


This article was written by Dr. Naina.

Thursday, August 25, 2011

New Mutations - New Eugenics?

True or false: you inherit your genes from your parents.





Mostly true, but not quite. In theory, you do indeed get half of your DNA from your mother and half from your father; but in practice, there's sometimes a third parent as well, random chance. Genes don't always get transmitted as they should: mutations occur.



As a result, it's not true that "genetic" always implies "inherited". A disease, for example, could be entirely genetic and almost never inherited. Down's syndrome is the textbook example, but it's something of a special case, and until recently it was widely assumed that most disease risk genes were inherited.



Yet recent evidence suggests that many cases of neurological and psychiatric disorders are caused by uninherited, de novo mutation events. Here are two papers from the last few weeks about schizophrenia (1, 2) - but the story looks similar for autism, intellectual disabilities, some forms of epilepsy, ADHD, and others. Indeed they're often the same mutations.



Biologically, a given mutation is what it is, whether it's de novo or inherited. But on a social and a psychological level, I think there are crucial differences, and in particular I think that if it turns out that de novo mutations are important in disease, we're going to see attempts to take these variants out of circulation - far more so than in the case of the very same genes, were they inherited.



The old eugenics movement was based on the idea that if we stop people with bad genes from breeding - by sterilization, voluntary or otherwise, say - we'll be able to eliminate diseases and other undesirable traits. This idea is now generally regarded as extremely unethical, but many of its opponents have shared with the eugenicists the belief that it could work.



But if de novo mutations are what cause the majority of disease, then this approach would be pointless. Sterilizing certain people, or encouraging the healthy ones to have more children, would never be able to eliminate the 'bad genes' because new ones are being created every generation, pretty much at random.



So the de novo paradigm ought to be welcomed by opponents of eugenics. It wasn't just morally wrong - it was biologically misguided too.



But hang on. This is the 21st century. We have in vitro fertilization (IVF), and you can analyze the genes of an IVF embryo before you decide to make it into a child. In the near future, we might be able to routinely sequence the genome of any unborn child shortly after conception.



From there, it would be a small step to allowing parents to decide not to have children with de novo mutations.



This would be, in its effects, a form of eugenics - in the sense that it would produce the effect that the old eugenicists wanted. No more 'bad' genes, or not nearly as many. Opinions will differ as to whether it's morally different. But I would have said that politically, it's a lot more likely to happen.



I can't see forced sterilization returning any time soon. But if you were expecting a baby and you knew that it was not just carrying your and your partner's DNA, but had also suffered a mutation - might you not want to avoid that?



Psychologically, it matters that it did not inherit the gene. It would be a big step to decide that your child should not inherit one of your own genes. Of course, some genes are obviously harmful, like one that raises the risk of cancer, but think about the grey areas - a gene for social anxiety, mild autistic symptoms, obesity, a personality trait.



You might well feel that carrying that gene is what makes you, you; and so it would be natural for your child to have it. You might decide that if it was good enough for you (and all your ancestors), it's good enough for your children. You might well resent the very idea that it's a 'bad' gene at all, as an attack on your own self-worth.



But none of that applies if it's a de novo mutation. Indeed, quite the opposite - all those same considerations would probably lead you to want your children to carry as close as possible to a carbon copy of your DNA, with no random changes. It was good enough for you.



My point is that I think there will be much more support for the idea of genetic screening against de novo mutations than against inherited genes. More people will want it, it will be more socially acceptable, and it will be more widely used. I'm not saying this would be a good or a bad thing, just making a prediction. In the future, diseases and traits that are primarily caused by de novo mutations will increasingly be selected against.

Sunday, August 21, 2011

Is Sleep Brain Defragmentation?

After a period of heavy use, hard disks tend to get 'fragmented'. Data gets written all over random parts of the disk, and it gets inefficient to keep track of it all.





That's why you need to run a defragmentation program occasionally. Ideally, you do this overnight, while you're asleep, so it doesn't stop you from using the computer.



A new paper from some Stanford neuroscientists argues that the function of sleep is to reorganize neural connections - a bit like a disk defrag for the brain - although it's also a bit like compressing files to make more room, and a bit like a system reset: Synaptic plasticity in sleep: learning, homeostasis and disease



The basic idea is simple. While you're awake, you're having experiences, and your brain is forming memories. Memory formation involves a process called long-term potentiation (LTP) which is essentially the strengthening of synaptic connections between nerve cells.



Yet if LTP is strengthening synapses, and we're learning all our lives, wouldn't the synapses eventually hit a limit? Couldn't they max out, so that they could never get any stronger?



Worse, the synapses that strengthen during memory are primarily glutamate synapses - and these are dangerous. Glutamate is a common neurotransmitter, and it's even a flavouring, but it's also a toxin.



Too much glutamate damages the very cells that receive the messages. Rather like how sound is useful for communication, but stand next to a pneumatic drill for an hour, and you'll go deaf.



So, if our brains were constantly forming stronger glutamate synapses, we might eventually run into serious problems. This is why we sleep, according to the new paper. Indeed, sleep deprivation is harmful to health, and this theory would explain why.





The authors argue that during deep, dreamless slow-wave sleep (SWS), the brain is essentially removing the "extra" synaptic strength formed during the previous day. But it does so in a way that preserves the memories. A bit like how defragmentation reorganizes the hard disk to increase efficiency, without losing data.



One possible mechanism is 'synaptic scaling'. When some of the inputs onto a given cell become stronger, all of the synapses on that cell could weaken. This would preserve the relative strength of the different inputs while keeping the total inputs constant. It's known that synaptic scaling happens in the brain, although it's not clear whether it has anything to do with sleep.
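To make the scaling idea concrete, here's a toy sketch in Python. It's mine, not from the paper, and the weights, the LTP boost, and the set point are all made-up numbers; it just shows how multiplicative scaling can undo a day's worth of strengthening while leaving the relative pattern - and so, on this theory, the memory - intact:

```python
import numpy as np

def ltp(weights, which, boost=0.5):
    """Long-term potentiation: strengthen one input synapse during waking."""
    w = weights.copy()
    w[which] += boost
    return w

def synaptic_scaling(weights, target_total):
    """Multiplicatively scale every synapse on the cell so total input
    strength returns to its set point; the *relative* strengths (the
    stored pattern) are preserved."""
    return weights * (target_total / weights.sum())

synapses = np.array([1.0, 1.0, 2.0, 0.5])    # toy input weights onto one neuron
set_point = synapses.sum()                   # total strength the cell "wants"

awake = ltp(synapses, which=2)               # daytime learning strengthens input 2
asleep = synaptic_scaling(awake, set_point)  # overnight scaling restores the total

print(awake, awake.sum())    # [1.  1.  2.5 0.5] 5.0 - total has crept up
print(asleep, asleep.sum())  # same ratios as awake, total back to 4.5
```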



There are other theories of the restorative function of sleep, but this one seems pretty plausible. It stands in contrast to the idea that sleep is purely a form of inactivity designed to save energy, rather than being important in itself.



What this paper doesn't explain, and doesn't try to, is dreaming (REM) sleep, which is very different from slow-wave sleep. REM is not required for life, so long as you get SWS; some animals don't have REM, but they all have SWS, although in some animals, only one side of the brain has it at a time.



So it makes sense, but what's the evidence? There's quite a bit - but it all comes from very simple animals, like flies and fish.



The pictures above show that, in various parts of the brain of the fruit fly, measures of synaptic strength are increased in flies that have been awake for some time, compared to recently rested ones. In general, synapses increase during the wake cycle and then return to baseline during sleep.



There's similar evidence from fish. But the authors admit that no-one has yet shown that the same is true of any mammals - let alone humans.



I'd say that this is important, because the fly brain is literally a million times smaller than ours. Synaptic overgrowth could be a more serious problem for a fly because they just have fewer neurons to play with. Sleep may have evolved to prune extra connections in primitive brains, and then shifted to playing a very different role in ours.



Wang G, Grone B, Colas D, Appelbaum L, & Mourrain P (2011). Synaptic plasticity in sleep: learning, homeostasis and disease. Trends in Neurosciences PMID: 21840068

Friday, August 19, 2011

The Ethics of Forgetfulness Drugs

Drugs that modify or erase memories could soon be a reality. We shouldn't rush to judge them unethical, says a Nature opinion piece by Adam Kolber, of the Neuroethics & Law Blog.



The idea of a pill that could make you forget something, or that could modify the emotional charge of a past experience, does seem rather disturbing.



Yet experiments on animals have gone a long way toward revealing the molecular mechanisms behind the formation and maintenance of memory traces. Much of the early work focussed on dangerously toxic drugs, but recently more targeted approaches have appeared.



Kolber argues that we should not shy away from research in this area or brand the whole idea unethical. Rather we should consider the costs and benefits on a case-by-case basis.

The fears about pharmaceutical memory manipulation are overblown. Thoughtful regulation may some day be appropriate but excessive hand-wringing now over the ethics of tampering with memory could stall research into preventing post-traumatic stress in millions of people. Delay could also hinder people who are already debilitated by harrowing memories from being offered the best hope yet of reclaiming their lives.
He says that

Given the close connection between memory and a sense of self, some bioethicists...worry that giving people too much power to alter their life stories could ultimately weaken their sense of identity and make their lives less genuine.



These arguments are not persuasive. Some memories, such as those of rescue workers who clean up scenes of mass destruction, may have no redeeming value. Drugs may speed up the healing process more effectively than counselling, arguably making patients more true to themselves than they would be if a traumatic experience were to dominate their lives.
This is a complex issue. I can see his point, although I'm not sure the rescue worker example is the best one. A rescue worker, at least a professional one, has chosen to do that kind of work. The experiences that are part of that job are ones they decided to have - or at least that they knew were a realistic possibility - and that may be an expression of their identity.



The argument is perhaps more convincing in the case of someone who, quite unexpectedly, suffers an out-of-the-blue trauma. In this case, the trauma has nothing to do with their lives; if it interferes with their ability to function, it might "stop them from being themselves".



Kolber ends by quoting a fascinating story from Time magazine in 2007, which I didn't catch at the time:

Take a scenario recounted by a US doctor in 2007 (ref. 9). The doctor had biopsied a suspected cancer patient and sent a tissue sample to a pathologist while the woman was still in the operating room. Thinking she was completely sedated, the pathologist announced a bleak prognosis over the intercom.



The patient, who had received only local anaesthesia, heard the news and began to shriek, “Oh my God. My kids!” An anaesthesiologist standing by quickly injected her with propofol, a sedative that causes some people to forget what happened a few minutes before they were injected.



When the woman woke up, she had no memory of hearing her prognosis.
Kolber A (2011). Neuroethics: Give memory-altering drugs a chance. Nature, 476 (7360), 275-6 PMID: 21850084

Wednesday, August 17, 2011

Abusing Drugs in the Workplace

In the late nineteen-nineties, it was projected that over two and a half million individuals would abuse drugs or alcohol in the workplace after the year 2000. This projection was based on research into the addiction status of US citizens, as well as the rise in drug and alcohol abuse recorded with each passing year.

Research has also shown that over two hundred and fifty billion dollars have been lost in the workplace each year since 1995 due to alcohol or drug abuse at work. This figure covers poor performance leading to lost business and sales, accidents, absenteeism and tardiness, and crime within work environments.

Did you know that over thirty percent of all hospital calls during the work day in America are due to an accident involving drugs and/or alcohol in the work environment? This number is only rising, and large corporations are starting to crack down. Some employers request drug tests prior to hiring, with ongoing drug tests on a monthly basis. Others may test for drugs and alcohol immediately after an accident has occurred in the workplace. So what does this mean for addicts in the workplace?



At the present time, over seventy percent of drug or alcohol addicts are employed either part or full time. These addicts range from eighteen to fifty years of age, and over fifty percent of them are Caucasian or African-American males. For these employees, there may not initially be an issue with their employment; it may take a few years for every company or corporation to jump 'on board' with respect to regular drug and alcohol testing. When that does happen, however, these individuals may begin to lose their jobs, and with them their salaries and, often, the ability to find another job.

When drug and alcohol abuse is the reason for a firing, past employers may be able to disclose this information to employers looking to hire you in the future. In this economy, losing a job and not being able to find another can be financially devastating. It could mean not being able to pay a home mortgage, and therefore losing a house. It may also mean not being able to pay for gas to travel to job interviews, or falling behind on utility bill payments.

This can affect not only one's credit but the financial future of one's entire family. If you or someone you know is abusing drugs and/or alcohol in the workplace, it is best to seek professional assistance from a drug and alcohol rehabilitation center. Fix the problem now, before it creates a heap of problems in your immediate or distant future.

Pharmaceutical Company Threatens Blogger

Boiron, a multinational pharmaceutical company, have threatened an Italian blogger with legal action, the BMJ reports.



Many people are concerned when big pharmaceutical companies do this kind of thing. So I don't think we should make any exception merely because Boiron's pharmaceuticals happen to be homeopathic ones.



Samuel Riva, who blogs (in Italian) at blogzero.it, put up some articles critical of homeopathy

which included pictures of Boiron’s blockbuster homoeopathic product Oscillococcinum, marketed as a remedy against flu symptoms. The pictures were accompanied by captions, which joked about the total absence of any active molecules in homoeopathic preparations
Boiron wrote to Riva's internet provider threatening legal action if the offending references to Boiron weren't taken down. They also wanted the provider to lock Riva out of his blog, the BMJ says. In response, Riva removed the references to Boiron, including the pictures and captions, but kept the posts on homeopathy in general.



Hmmm.



Above you can see a new picture I made of a Boiron product, with some captions you may find interesting. I've made sure to limit these to quotes from Wikipedia, and from Boiron USA's own website, and some simple mathematical calculations.
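For what it's worth, here is the sort of simple calculation I mean, sketched in Python. It assumes the 200C dilution that Wikipedia lists for Oscillococcinum, where each "C" step is a 1:100 dilution; everything else is back-of-the-envelope (logarithms are used because 10^400 overflows ordinary floats):

```python
import math

steps = 200                   # "200C": two hundred successive 1:100 dilutions
log10_dilution = 2 * steps    # each step multiplies the dilution by 100 -> 10^400

# Even a full mole of starting material (~6 x 10^23 molecules) cannot survive that:
log10_molecules_left = math.log10(6.022e23) - log10_dilution

print(f"Total dilution factor: 10^{log10_dilution}")
print(f"Expected molecules remaining: 10^{log10_molecules_left:.0f}")
# ~10^-376, i.e. you would need around 10^376 doses to expect to find
# a single molecule of the original ingredient.
```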



Beyond that, I make no comment whatsoever.



Turone F (2011). Homoeopathy multinational Boiron threatens amateur Italian blogger. BMJ (Clinical research ed.), 343 PMID: 21840920

Monday, August 15, 2011

A Ghostwriter Speaks

PLoS Medicine offers the confessions of a former medical ghostwriter: Being the Ghost in the Machine.





The article (which is open access and short, so well worth a read) explains how Linda Logdberg became a medical writer; what excited her about the job; what she actually did; and what made her eventually give it up.



Ghostwriting of course has a bad press at the moment and it's recently been banned by some leading research centres. Ghostwriting certainly is concerning, because of what it implies about the process leading up to publication.



However, it doesn't create bad science. A bad paper is bad because of what it says, not because of who (ghost)wrote it. Real scientists can write bad papers without a ghostwriter's help.



When pharmaceutical companies pay a ghostwriter, they are not doing so to get access to special dark arts that real scientists are innocent of. As far as I can see, it's just more efficient to use a specialist writer to commit your scientific sins for you, when you're doing it all the time.



Rather like every evil sorcerer has an apprentice to do the day-to-day work of sacrificing animals and mixing potions.



Logdberg says:

My career came to an end over a job involving revising a manuscript supporting the use of a drug for attention deficit-hyperactivity disorder (ADHD), with a duration of action that fell between that of shorter- and longer-acting formulations.



However, I have two children with ADHD, and I failed to see the benefit of a drug that would wear off right at suppertime, rather than a few hours before or a few hours after. Suppertime is a time in ADHD households when tempers and homework arguments are often at their worst.



...Attempts to discuss my misgivings with the [medical] contact met with the curt admonition to ‘‘just write it.’’ But perhaps because this particular disorder was so close to home, I was unwilling to turn this ugly duckling of a ‘‘me-too’’ drug into a marketable swan.
Many scientists will recall being in that kind of situation, albeit in a different context.



When writing a grant application, for example, you are almost literally trying to sell your proposed research to the awarding committee, on several levels. You need to sell the importance of the scientific question; the likely practical benefits of the research; the chance of success using your methods; what makes you the right person to do this work, and so on.



Writing a paper is much the same, although in this case you're selling research you've already done, and the data you collected.



Turning ugly ducklings into fundable, or publishable, swans, is part and parcel of modern science. Of course, the ducklings are not always as ugly as in the case Logdberg describes, but they are rarely as beautiful as they eventually end up.



Logdberg, L. (2011). Being the Ghost in the Machine: A Medical Ghostwriter's Personal View. PLoS Medicine, 8 (8) DOI: 10.1371/journal.pmed.1001071

Friday, August 12, 2011

Debating Greenfield



British neuroscientist Susan Greenfield regrets the recent controversy over certain of her remarks, and calls for a serious debate over "mind change" -

"Mind change" is an appropriately neutral, umbrella concept encompassing the diverse issues of whether and how modern technologies may be changing the functional state of the human brain, both for good and bad.
Very well, here goes. I wonder if Greenfield will reply.



As Greenfield points out, the human brain is plastic and interacts with the environment. Indeed, this is how we are able to learn and adapt to anything. Were our brains entirely unresponsive to what happens to them we would have no memory and probably no behaviour at all.



The modern world is changing your brain, in other words.



However, the same is true of every other era. The Victorian era, the Roman Empire, the invention of agriculture - human brains were never the same after those came along.



Because the brain is where behaviour happens, any change in behaviour must be accompanied by a change in the brain. By talking about how behaviour changes, we will, implicitly, also be discussing the brain.



However it doesn't work in reverse. Changes in the brain can't be assumed to mean changes in behaviour. Greenfield cites, for example, this paper which purports to show reductions in the grey matter volume of certain areas of the brain cortex in Chinese students with internet addiction compared to those without.



The obvious comment here is that it doesn't prove causality, as it is only a correlation. Maybe the reason they got addicted was because they already had these brain changes.



However, there is a more subtle point. Even if these changes were a direct consequence of excessive internet use, it wouldn't mean that the internet use was changing behaviour.



We have no idea what a slight decrease in grey matter volume in the cerebellum, dorsolateral prefrontal cortex, and supplementary motor area would do to cognition and behaviour. It might not do anything.



My point here is that rather than worrying about the brain, we ought to focus on behaviour. Because that is also focussing on the brain, but it's focussing on the aspects of brain function that actually matter.



Greenfield then poses three questions.

1. Could sustained and often obsessive game-playing, in which actions have no consequences, enhance recklessness in real life?
It's possible that it could, although I don't think we do live in an especially reckless society, given that crime rates are lower now than they have been for 20 years.



However, the question assumes that game playing has no consequences. Yet in-game actions do have in-game consequences. To a non-gamer, these may seem like no consequences, because they're not real.



Yet in the game, they're perfectly real, and if you spend 12 hours a day playing that game, and all your friends do as well - you are going to care about that. Those consequences will matter, to you, and with luck, you'll learn not to be so impulsive in the future.



In World of Warcraft, for example, actions have all too many consequences. If you impulsively decide to attack an enemy in the middle of a raid, you could cause a wipe, which would, quite possibly, ruin everyone's evening and get you a reputation as an oaf.



Exactly as your reputation would suffer if you and your friends went for an evening at the opera, and you stood up in the middle and shouted a profanity. Ah, but that's real life, the response goes. Is it? Is a performance in which hundreds of people sit solemnly, while grown adults dress up and pretend to be singing gods and fairies on the instructions of a deceased anti-semite, any more real than this?

3. How can young people develop empathy if they conduct relationships via a medium which does not allow them the opportunity to gain full experience of eye contact, interpret voice tone or body language, and learn how and when to give and receive hugs?
I do not think that this accurately represents the experience of most children today. However, assuming that it were true, what would be the problem?



If everyone's relationships were conducted online, surely it would be more important to learn how to navigate the online world, than it would be to learn how to interpret body language, which (webcams aside), you would never see, or need to see.



If the brain is plastic and adapts to the environment, as Greenfield argues, then surely the fact that it is adapting to the information age is neither surprising nor concerning. If anything, we ought to be trying to help the process along, to make ourselves better adapted. It would be more worrying if it didn't adapt.



Some might be concerned by this. Surely, there is value in the old way of doing things, value that would be lost in the new era. Unless one can point to definite reasons why the new state of affairs is inherently worse than the old - not just different from it - it is hard to distinguish these concerns from the simple feeling of nostalgia over the past.



The same point could have equally well been made at any time in history. When our ancestors first settled down to farm crops, an early conservative might have lamented - "Young people today are growing up with no idea of how to stab a mammoth in the eye with a spear. All they know is how to plant, water and raise this new-fangled 'wheat'."

Thursday, August 11, 2011

Do We Need Placebos?

A news feature in Nature asks whether placebo controls are always a good idea: Why Fake It?



The piece looks at experimental neurosurgical treatments for Parkinson's, such as "Spheramine". This consists of cultured human cells, which are implanted directly into the brain of the sufferer. The idea is that the cells will grow and help produce dopamine, which is deficient in Parkinson's.



Peggy Willocks, a 44-year-old teacher, took part in a trial of the surgery in 2000. She says it helped stave off the symptoms for years, but the development of Spheramine was axed in 2008 after a controlled trial found it didn't work any better than a placebo.



The placebo was "sham surgery" i.e. putting the patient through a full surgical procedure, and making holes in their skull, but without doing anything to their brain.



It's cheap and easy to do a placebo controlled trial of a drug - all you need is a sugar pill. But with neurosurgery, it's clearly a lot more involved. A placebo has to be believable. Convincing sham surgery is expensive, time-consuming, and it has real risks, albeit small ones.



Is it ethical to put patients through that?



That, I think, can only be decided on a trial-by-trial basis. It depends on the likely benefits of the treatment, and whether the trial is scientifically sound. Obviously, it'd be wrong to do sham surgery as part of a flawed trial that won't tell us anything useful.



The Nature article, however, goes further than this, and suggests that placebo controlled trials may be unsuitable for testing these kinds of treatments, failing to detect a real benefit in some patients:

There are hints from some of the failed phase II trials that patients followed up beyond study endpoints might tell a more positive story. Some say, therefore, that sham controls are sinking the prospects of valuable drugs.



Anders Björklund, a neuroscientist at Lund University in Sweden who is collaborating with [Roger Barker of Cambridge], says that sham surgery can lead researchers to throw out a strategy prematurely if the trial fails because of technical or methodological glitches rather than a true lack of efficacy.
A patient advocate agrees:

According to Perry Cohen, who leads a network of patient activists called the Parkinson Pipeline Project, that’s exactly what is happening. He had always questioned the need for sham surgery, he says, but after the string of phase II failures, “We started saying, ‘Hey, this is a problem. These trials failed, but we know they are working for some people.’”
...Cohen [says] that patients have different priorities and that researchers must take these into account. Researchers use placebo controls to weed out false positives. But for patients, the real ogre is the false negatives — which can sink a therapy before it has been optimized.
I'm not sure about this. If I had Parkinson's, I would certainly hate to miss out on the genuine cure because a trial had failed to recognize that it worked. But equally, I would not be happy to be given a rubbish treatment that would have failed a placebo controlled trial, but never got one, because of arguments like this.



Placebo controlled trials can fail to detect benefits if they are too short, too small, methodologically flawed, or whatever. Certainly, a trial can be placebo controlled, and still crap. But the answer is surely to do better trials, not no trials.



It may well be that we shouldn't rush to do placebo controlled trials until later in the development process, when the technique has been properly refined. But the history of medicine is littered with treatments that "we know work for some people" - that didn't.



Katsnelson, A. (2011). Experimental therapies for Parkinson's disease: Why fake it? Nature, 476 (7359), 142-144 DOI: 10.1038/476142a

Wednesday, August 10, 2011

Issues of Intravenous Drug Abuse

The major risks of intravenous drug use result from unsanitary conditions. Dirty syringes and paraphernalia spread blood-borne diseases and infections. Bacteria cause abscesses at injection sites, which can lead to sepsis and limb amputation if left untreated. Sepsis is a serious complication that results in death unless the user is given intravenous antibiotics immediately. Infection can spread from the original injection site to the whole arm or leg, resulting in emergency amputation of the limb. Injecting drugs in the leg can also cause circulation problems, which likewise can lead to amputation.


Injection of heroin is dangerous due to the bacteria found in the drug. It is produced in unsanitary conditions and adulterated with anything from talcum powder to milk sugar. Endocarditis is common among heroin abusers. Bacteria from the drug infect the lining of the heart.

Liver disease, such as hepatitis and cirrhosis, is a common problem. Repeated use damages the liver, and the resulting scarring develops into cirrhosis. Hepatitis B and C are viruses that affect the liver; hepatitis can become chronic and develop into serious complications such as cirrhosis and liver cancer. There is also a high risk of overdose when injecting heroin: the potency varies, and the user has no way of knowing how strong the drug is.

Meth poses a danger for different reasons than heroin. Injecting meth is dangerous due to the systemic effect the drug produces. A sudden surge of meth into the body, such as when it is injected, causes a spike in blood pressure, which can lead to strokes and heart attacks. This effect poses a higher risk in someone with pre-existing hypertension or cardiac problems. Abscesses and infection also occur with meth injection, although not as often as with heroin.


Intravenous drug use, considered the worst form of abuse, is a serious problem. Drug use, no matter how innocent it seems, needs addressing, because any drug can lead to intravenous use. The key is to resolve the problem while it is small, before it becomes a full-blown addiction.

Monday, August 8, 2011

Susan Greenfield Causes Autism

British neuroscientist Susan Greenfield has caused a storm with her suggestion that the recent rise in the use of the internet and social media may be related to the recent rise in autism.

I point to the increase in autism and I point to internet use. That's all. Establishing a causal relationship is very hard but there are trends out there that we must think about.


This has led to fellow Oxford neuroscientist Dorothy Bishop of BishopBlog writing an Open Letter asking her to "please, please, stop talking about autism". Twitter has been enlivened by #greenfieldisms such as "I point to the rise of Rebecca Black and the Greek sovereign debt crisis, that is all."



However, in a Neuroskeptic exclusive, I can reveal that the situation is far worse than anyone feared. Greenfield is not merely spreading unwarranted speculations about the recent rise in autism diagnoses.



She caused that rise.



The graph above shows the total number of scientific citations for Susan Greenfield's papers, over time. This is as good a measure as any of the influence Greenfield has had over our culture.



The trend is obvious, the growth is dramatic, and the correlation with the modern autism epidemic is undeniable.

So Apparently I'm Bipolar

According to a new paper, yours truly is bipolar.


I've written before of my experience of depression, and the fact that I take antidepressants, but I've never been diagnosed with bipolar.

I've taken a few drugs in my time. On certain dopamine-based drugs I got euphoric, filled with energy, talkative, confident, with no need for sleep, and a boundless desire to do stuff, which is textbook hypomania. So I think I know what it feels like, and I can confidently say that it has never happened to me out of the blue.

On antidepressants, I have had some mild experiences of this type. Ironically, the closest I've come to it was when I quit an SSRI antidepressant. I've also experienced periods of irritability and agitation on antidepressants. Either way, that's antidepressants. Bipolar is when you get high on your own supply of neurotransmitters.

Well, it used to be. Jules Angst et al have got some new, broader criteria for "bipolarity" in depression. They say that manic symptoms in response to antidepressants do count, exactly like out-of-the-blue mania.

What's more, under the new "Bipolar Specifier" criteria, there's no minimum duration. Under existing criteria the symptoms have to last 4 or 7 days, depending on severity. Under the new regime if you've ever been irritable, high, agitated or hyperactive, on antidepressants or not, you meet "Bipolar Specifier" criteria, so long as it was marked enough that someone else noticed it.

All you need is:
an episode of elevated mood, an episode of irritable mood, or an episode of increased activity with at least 3 of the symptoms listed under Criterion B of the DSM-IV-TR associated with at least 1 of the 3 following consequences: (1) unequivocal and observable change in functioning uncharacteristic of the person’s usual behavior, (2) marked impairment in social or occupational functioning observable by others, or (3) requiring hospitalization or outpatient treatment.
The bipolar net just got bigger. And they caught me in it. Me and 47% of depressed people in their study. They recruited 509 psychiatrists from around the world, and got each of them to assess between 10 and 20 consecutive adult depressed patients who were referred to them for evaluation or treatment. A total of 5635 patients were included.

Only 16% met existing DSM-IV criteria for bipolar disorder, so the new system with 47% identified an "extra" 31%, trebling the number of bipolar cases.

A cynic would say that this is a breathtaking piece of psychiatric marketing. You give people antidepressants, then you diagnose them with bipolar on the basis of their reaction to those drugs, thus justifying selling them yet more drugs.

The cynic would not be surprised to learn that this study was sponsored by pharmaceutical company Sanofi.
All investigators recruited received fees, on a per patient basis, from sanofi-aventis in recognition of their participation in the study....The sponsor of this study (sanofi-aventis) was involved in the study design, conduct, monitoring, data analysis, and preparation of the report.
In fairness, the authors do show that patients meeting their criteria tend to have characteristics typical of bipolar people. And they show that their system is at least as good as DSM-IV at picking out these cases:

For example, DSM-IV bipolar patients had a younger age of onset than DSM-IV depressed ones. "Bipolar specifier" patients did too, compared to the 53% who didn't meet the criteria. Same for a family history of manic symptoms, multiple episodes, and shorter episodes. All of those are pretty well established correlates of bipolar disorder.

That's fine, and the results are better than I expected when I picked up this paper. But all this shows us is that the bipolar specifier was no worse than the DSM-IV criteria as applied in this study.

It doesn't tell us whether either was any good.

DSM-IV criteria were used in a mechanical cookbook fashion - symptoms were assessed by the psychiatrist, written down, sent back to the study authors, who then diagnosed them if they ticked enough boxes. Is that a good approach? We don't know.

Most importantly, we have no idea whether these people would do better being treated as bipolar rather than as depressed. The difference being that bipolar people get mood stabilizers. Maybe these people would benefit from mood stabilizers, maybe not. Existing literature on mood stabilizers in bipolar people can't be assumed to generalize to these 47%.

In the discussion, the authors argue that antidepressants are not much good in bipolar people, whereas mood stabilizers are. Fun fact: Sanofi make many of the most popular formulations of valproic acid/valproate, a big-selling mood stabilizer.

I think that is no coincidence. Maybe that sounds crazy, but hey, what do you expect? I'm bipolar.

Angst J, Azorin JM, Bowden CL, Perugi G, Vieta E, Gamma A, Young AH, & for the BRIDGE Study Group (2011). Prevalence and Characteristics of Undiagnosed Bipolar Disorders in Patients With a Major Depressive Episode: The BRIDGE Study. Archives of General Psychiatry, 68 (8), 791-798 PMID: 21810644

Friday, August 5, 2011

Science Without Method

Everyone knows that The Scientific Method is the key to doing science. No-one's quite sure what it is, but they know it's there, and it's something rather special.


It's not. When scientists sit down to work, we don't use "the scientific method" to make discoveries. We use microscopes, brain scanners, telescopes and particle detectors, all of which are just ways of looking at things. They're special in terms of what they let you look at, but that's it. Science is looking.

It's true that in order to do good science, you need to be careful. You need to avoid falling into various traps that lead to misleading data and false conclusions. You could call the care taken over scientific observations "The Scientific Method", and some people do, but that's misleading, because none of it is specific to science.

One of the most important considerations in science is making sure that you have a proper control condition. This sounds technical, but all it really means is that you need to make sure that you really are looking at what you set out to observe.

To discover the effect of a drug on people, say, you just give them the drug and look to see what happens, using the appropriate equipment. However, you need to compare this to an appropriate control, such as a placebo pill, because if you don't, you're not just seeing the effect of the drug but many other things as well, such as the placebo effect, the passage of time, and random events.

In the same way, if you wanted to find out what happens when you push that little button on your TV remote, you wouldn't mash five other buttons at the same time. To discover what was in the top drawer of your dresser, you'd look there, not in the bottom drawer.

That's really all there is to it. It can be complicated to do this in practice, but the principle is that simple: you take care to look at what you're interested in.
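To illustrate, here's a toy simulation in Python, with invented effect sizes rather than real data. The "improvement" seen in a drug-only group bundles together the drug, the placebo response, and natural recovery; subtracting a placebo control recovers the drug's true effect:

```python
import random

random.seed(1)

def improvement(drug_effect):
    """One patient's simulated improvement: the drug's true effect plus
    everything that happens to everyone regardless (placebo response,
    natural recovery over time, random fluctuation)."""
    placebo_response = 2.0
    natural_recovery = 1.0
    noise = random.gauss(0, 1)
    return drug_effect + placebo_response + natural_recovery + noise

TRUE_DRUG_EFFECT = 0.5
n = 1000

drug_group = [improvement(TRUE_DRUG_EFFECT) for _ in range(n)]
placebo_group = [improvement(0.0) for _ in range(n)]

mean = lambda xs: sum(xs) / len(xs)
print(f"Mean improvement on drug alone: {mean(drug_group):.2f}")  # ~3.5, not 0.5
print(f"Drug minus placebo control: {mean(drug_group) - mean(placebo_group):.2f}")  # ~0.5
```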

It's said that part of the "Scientific Method" is forming hypotheses, or theories. Scientists do that, but so do we all, all the time. You might have a theory that your boss is an alcoholic, or that your husband is cheating on you, or that your car's spark plug is bust.

You might call these ideas, notions, hunches, suspicions, thoughts, fears, but they're still hypotheses about the world. Indeed, scientists often use those words too. One word is as good as another.

If your boss was an alcoholic, the way to prove it might be to somehow give him a breathalyser test after lunch, or sneak a peek at his credit card bill and see how much he spends on booze. That would be an observation to test your hypothesis, or in other words, an experiment (another formal word that scientists don't always use).

That's all science is. Looking at things carefully, getting ideas, and checking them out.

I said this in my last post, but it bears repeating: this is why most objections to, or concerns about, "science" or worse "modern science", fail. Any given scientist, or any given scientific theory, may be wrong, just like anyone or anything else. Yet to say that "Science can't" do something is saying that looking and thinking can't do it. To blame "Science" for something is to blame the human mind.

Note: This post is a follow-up to Science Doesn't Say, and the second in a three-part series.

Thursday, August 4, 2011

Brain-Modifying Drugs

What if there was a drug that didn't just affect the levels of chemicals in your brain, it turned off genes in your brain? That possibility - either exciting or sinister depending on how you look at it - could be remarkably close, according to a report just out from a Spanish group.

The authors took an antidepressant, sertraline, and chemically welded it to a small interfering RNA (siRNA). A siRNA is kind of like a pair of genetic handcuffs. It selectively blocks the expression of a particular gene, by binding to and interfering with RNA messengers. In this case, the target was the serotonin 5HT1A receptor.

The authors injected their molecule into the brains of some mice. The sertraline was there to target the siRNA at specific cell types. Sertraline works by binding to and blocking the serotonin transporter (SERT), and this is only expressed on cells that release serotonin; so only these cells were subject to the 5HT1A silencing.

The idea is that this receptor acts as a kind of automatic off-switch for these cells, making them reduce their firing in response to their own output, to keep them from firing too fast. There's a theory that this feedback can be a bad thing, because it stops antidepressants from being able to boost serotonin levels very much, although this is debated.

Anyway, it worked. The treated mice showed a strong and selective reduction in the density of the 5HT1A receptor in the target area (the Raphe nuclei containing serotonin cells), but not in the rest of the brain.

Note that this isn't genetic modification as such. The gene wasn't deleted, it was just silenced, temporarily one hopes; the effect persisted for at least 3 days, but they didn't investigate just how long it lasted.

That's remarkable enough, but what's more, it also worked when they administered the drug via the intranasal route. In many siRNA experiments, the payload is injected directly into the brain. That's fine for lab mice, but not very practical for humans. Intranasal administration, however, is popular and easy.

So siRNA-sertraline, and who knows what other drugs built along these lines, may be closer to being ready for human consumption than anyone would have predicted. However... the mouse's brain is a lot closer to its nose than the human brain is, so it might not go quite as smoothly.

The mind boggles at the potential. If you could selectively alter the gene expression of specific neurons, you could do things to the brain that are currently impossible. Existing drugs hit the whole brain, yet there are many reasons why you'd prefer to affect only certain areas. And editing gene expression would allow much more detailed control over those cells than is currently possible.

Currently available drugs are shotguns and sledgehammers. These approaches could provide sniper rifles and scalpels. But whether it will prove to be safe remains to be seen. I certainly wouldn't want to be the first one to snort this particular drug.

Bortolozzi, A., Castañé, A., Semakova, J., Santana, N., Alvarado, G., Cortés, R., Ferrés-Coy, A., Fernández, G., Carmona, M., Toth, M., Perales, J., Montefeltro, A., & Artigas, F. (2011). Selective siRNA-mediated suppression of 5-HT1A autoreceptors evokes strong anti-depressant-like effects. Molecular Psychiatry DOI: 10.1038/mp.2011.92

Wednesday, August 3, 2011

Antipsychotics - The New Valium?

Antipsychotics, originally designed to control the hallucinations and delusions seen in schizophrenia, have been expanding their domain in recent years.

Nowadays, they're widely used in bipolar disorder, depression, and as a new paper reveals, increasingly in anxiety disorders as well.

The authors, Comer et al, looked at the NAMCS survey, which provides yearly data on the use of medications in visits to office-based doctors across the USA.

Back in 1996, just 10% of visits in which an anxiety disorder was diagnosed ended in a prescription for an antipsychotic. By 2007 it was over 20%. No atypical is licensed for use in anxiety disorders in the USA, so all of these prescriptions are off-label.

Not all of these prescriptions will have been for anxiety. They may have been prescribed to treat psychosis, in people who also happened to be anxious. However, the increase was accounted for by the rise in non-psychotic patients, and there was a rise in the rate of people with only anxiety disorders.

The increase was driven by the newer, "atypical" antipsychotics.

Whether the modern trend for prescribing antipsychotics for anxiety is a good or a bad thing is not for us to say. The authors discuss various concerns, ranging from the side effects (obesity, diabetes and more) to the fact that there have only been a few clinical trials of these drugs in anxiety.

But what's really disturbing about these results, to me, is how fast the change happened. Between 2000 and 2004, use doubled from 10% to 20% of anxiety visits. That's an astonishingly fast change in medical practice.

Why? It wasn't because that period saw the publication of a load of large, well-designed clinical trials demonstrating that these drugs work wonders in anxiety disorders. It didn't.

But as Comer et al put it:
An increasing number of office-based psychiatrists are specializing in pharmacotherapy to the exclusion of psychotherapy. Limitations in the availability of psychosocial interventions may place heavy clinical demands on the pharmacological dimensions of mental health care for anxiety disorder patients.
In other words, antipsychotics may have become popular because they're the treatment for people who can't afford anything better.

These data show that antipsychotics were over twice as likely to be prescribed to African American patients; the poor (i.e., patients with public health insurance); and children under 18.

Comer JS, Mojtabai R, & Olfson M (2011). National Trends in the Antipsychotic Treatment of Psychiatric Outpatients With Anxiety Disorders. The American Journal of Psychiatry PMID: 21799067

Tuesday, August 2, 2011

The 30something Brain

Brain maturation continues for longer than previously thought - well up until age 30. That's according to two papers just out, which may be comforting for those lamenting the fact that they're nearing the big Three Oh.

This challenges the widespread view that maturation is essentially complete by the end of adolescence, in the early to mid 20s.

Petanjek et al show that the number of dendritic spines in the prefrontal cortex increases during childhood and then rapidly falls during puberty - which probably represents a kind of "pruning" process. That's nothing new, but they also found that the pruning doesn't stop when you hit 20. It continues, albeit gradually, up to 30 and beyond.

This study looked at post-mortem brain samples taken from people who died at various different ages. Lebel and Beaulieu used diffusion MRI to examine healthy living brains. They scanned 103 people, and everyone got at least 2 scans a few years apart, so they could look at changes over time.

They found that the fractional anisotropy (a measure of the "integrity") of different white matter tracts varies with age in a non-linear fashion. All tracts become stronger during childhood, and most peak at about 20. Then they start to weaken again. But not all of them - others, such as the cingulum, take longer to mature.

Also, total white matter volume continues rising well up to age 30.

Plus, there's a lot of individual variability. Some people's brains were still maturing well into their late 20s, even in white matter tracts that on average are mature by 20. Some of this will be noise in the data, but not all of it.

These results also fit nicely with this paper from last year that looked at functional connectivity of brain activity.

So, while most maturation does happen before and during adolescence, these results show that it's not a straightforward case of The Adolescent Brain turning suddenly into The Adult Brain when you hit 21, at which point it solidifies into the final product.

Lebel C, & Beaulieu C (2011). Longitudinal development of human brain wiring continues from childhood into adulthood. The Journal of Neuroscience, 31 (30), 10937-47 PMID: 21795544

Petanjek, Z., Judas, M., Simic, G., Rasin, M., Uylings, H., Rakic, P., & Kostovic, I. (2011). Extraordinary neoteny of synaptic spines in the human prefrontal cortex Proceedings of the National Academy of Sciences DOI: 10.1073/pnas.1105108108

Monday, August 1, 2011

Understanding The Drug Abuse Process

Alcoholism is a really serious issue, and according to the American Medical Association, it is also a disease. It isn't something that will just disappear for good without treatment. It must be treated as quickly as possible, and it will take a good deal of work and continual vigilance, even after finishing an alcohol rehab program, to prevent any relapses.

Alcohol addiction is a chronic ailment that simply cannot be cured with a single visit to a counselor. The process is long and involved, and it can be painful, but the damage that an alcoholic does to his own body, as well as to friends and family members, is often much worse.

Drug abuse and addiction literally trigger physical alterations in the brain and body. Ultimately a person starts to depend on these drugs to support normal functioning, and the brain no longer knows how to operate without them.

This is why the best alcohol rehab program should include a time of detoxification as well as a continuous support system once the toxins are eliminated and the body begins to restore itself.

How do you know when someone needs to get into an alcohol rehab program? Alcohol usage is so prevalent and common that it can be hard for some people to realize that there is a problem. Is the friend who tends to drink too much on evenings out in trouble? Are a few glasses of wine before bed excessive? What about a shot of whiskey in the morning coffee? The simplest way to find out may be to take away the alcohol from those situations and see what happens.

Unfortunately, a person might know and understand that they have a drinking problem but never seek an alcohol rehab program. There was a well-known author who wrote about his drinking problem and the day he came to the realization that he was an alcoholic. It was a harsh realization, yet he did not think: "I better get help"; he only thought: "I better be careful." He understood he had a problem, but he didn't believe he could live or write without the crutch of substance abuse.

Alcohol rehab, however, does not have to be voluntary to work. Regardless of whether a person is coerced or directed into a rehab program, there's a fairly high rate of success and many people will stay sober after completing the process. This is why interventions by friends and family can be step one for effectively conquering the addiction.

An alcohol rehab program will always begin with a detox period: the body needs to eliminate the chemicals that have been doing damage. Ending this physical addiction can be tough, and a person will encounter a number of withdrawal symptoms, which may be quite severe depending on how long and how much the individual was drinking.

When this physical problem is overcome, though, it is time to address the psychological effects. In order to avoid any relapses and be sure a person remains clean, many alcohol rehab programs offer continuous help and guidance to help them maintain their sobriety and begin a healthier and happier life.