Divorce prevents homosexuality

A controversial new study from a researcher in Christchurch, New Zealand, shows that if you are a single parent, your kids are 10% less likely to be gay. If you remarry and raise a child with an opposite-sex step-parent, the chances of the child being gay are halved compared with a child raised by its biological parents.

You didn’t hear about that? Oh, I suppose you only heard the bit where gay people were more likely to have been abused during childhood. Well, I have actually read the study and understood the stats, and I can tell you why you didn’t see my headline above: that finding didn’t fit the worldview of the researchers running the study, so they chose to gloss over it. After all, we all know that divorce and the “breakdown of family values” are the cause of all the ills in the world and, homosexuality being an ill, it can’t possibly be that single parents or different-sex step-parents are less likely to produce gay kids, can it?

What’s going on? Is this study giving the wrong answers or is it asking the wrong questions? I argue it’s a bit of both.

First, let’s address the elephant in the room: causality. Even if the stats were 100% gold-standard watertight (which they most certainly are not), you still can’t conclude that because gay respondents reported a higher incidence of childhood abuse, abuse causes homosexuality. Let’s try it another way: if I crunched the numbers, I could show that Maori kids are more likely to experience domestic violence at home. Does that mean domestic violence makes you Maori?

Think about it.

I also have to firmly point out that being Maori, in and of itself, is not necessarily the reason Maori kids experience more domestic violence.

These correlations (if they were ever accurate correlations at all) are most likely driven by some other variable lurking in the background, known as a confounding factor. In the Maori example, the confounders are most likely socio-economic: things like family income and education.
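If you want to see how a confounder can manufacture a correlation out of thin air, here’s a quick simulation. It’s a toy sketch with made-up numbers; the binary “deprived” flag is just a stand-in for those socio-economic factors, and nothing in it comes from the actual study.

```python
# Toy simulation: two outcomes that have nothing to do with each other,
# both driven by a hidden confounder. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical binary confounder (say, household deprivation).
deprived = rng.random(n) < 0.3

# Each outcome depends only on the confounder, never on the other outcome.
a = rng.random(n) < np.where(deprived, 0.40, 0.10)
b = rng.random(n) < np.where(deprived, 0.40, 0.10)

# Crude comparison: looks like a genuine association (~0.29 vs ~0.17).
print("P(B | A)     =", round(b[a].mean(), 3))
print("P(B | not A) =", round(b[~a].mean(), 3))

# Stratify on the confounder and the "effect" vanishes
# (~0.40 vs ~0.40 in one stratum, ~0.10 vs ~0.10 in the other).
for label, stratum in [("deprived    ", deprived), ("not deprived", ~deprived)]:
    print(label, round(b[a & stratum].mean(), 3),
          "vs", round(b[~a & stratum].mean(), 3))
```

Same data, two completely different stories, depending on whether you account for the thing lurking in the background.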

Now, this researcher, like all empirical researchers, adjusted for covariates to protect against the confounding effects of age, sex (male/female) and the age-by-sex interaction. She didn’t account for any number of other potential confounders, though. Besides which, the simplest retort is that gay kids could quite easily be targeted for abuse simply because they are gay (reverse causality).
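For the curious, this is roughly what “adjusting for covariates” looks like in practice: you fit a regression that includes the nuisance variables alongside the one you care about. The sketch below uses fabricated data, hypothetical column names and a logistic model (the kind that typically produces adjusted odds ratios); it’s my reconstruction of the general technique, not the study’s actual code.

```python
# A minimal sketch of a covariate-adjusted logistic regression.
# Data and column names are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5_000

# Fake survey data: the outcome of interest plus the adjustment covariates.
df = pd.DataFrame({
    "abused": rng.integers(0, 2, n),   # reported childhood abuse (0/1)
    "gay":    rng.integers(0, 2, n),   # identifies as homosexual (0/1)
    "female": rng.integers(0, 2, n),   # sex (0 = male, 1 = female)
    "age":    rng.integers(16, 65, n), # age in years
})

# Logistic regression of abuse on sexual identity, adjusting for
# age, sex, and the age-by-sex interaction.
model = smf.logit("abused ~ gay + age + female + age:female", data=df).fit(disp=False)

# Exponentiating the coefficient gives an adjusted odds ratio and its 95% CI.
or_gay = np.exp(model.params["gay"])
ci_low, ci_high = np.exp(model.conf_int().loc["gay"])
print(f"adjusted OR = {or_gay:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

The catch, of course, is that you can only adjust for the confounders you thought to measure.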

OK, now that we’ve talked about causality, let’s look at the actual numbers used in the study: 12,992 people were surveyed in the New Zealand national mental health survey. Of these, only 0.6% identified as bisexual and 0.8% as homosexual; that’s 101 bisexual people and 106 gay people. Interestingly, 0.4% (about 50 people) answered “other” or “don’t know” to that question and were excluded from the analysis.

Hmm, at this point common sense should be kicking in: how can we compare the thousands of straight people with the 106 gay people and make confident claims about the differences between the groups for the rest of the country? One or two anomalies in the gay sample will have a much bigger effect than the same anomalies in the straight sample. The researcher attempts to deal with this by running a regression with covariates for age, sex (male/female) and the age-by-sex interaction. So far so good. She then calculates what’s known as an odds ratio, which is often used in clinical trials to express the “risk” of an outcome under one treatment relative to another.
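If you’ve never met an odds ratio, the arithmetic is simple enough to do on the back of an envelope. The counts below are invented purely to show the mechanics (I’ve only borrowed the rough group size of 106 from the survey); notice how the small group dominates the uncertainty.

```python
# Back-of-the-envelope odds ratio with a Woolf (log) 95% confidence interval.
# The counts are hypothetical, not taken from the study.
from math import exp, log, sqrt

# 2x2 table: rows = gay / straight, columns = abused / not abused.
a, b = 20, 86        # hypothetical split of the ~106 gay respondents
c, d = 1200, 11000   # hypothetical split of the straight respondents

odds_ratio = (a / b) / (c / d)

# Standard error of the log odds ratio, then back-transform the interval.
se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = exp(log(odds_ratio) - 1.96 * se_log_or)
ci_high = exp(log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
# With only ~100 people in one group, the interval is wide: the handful of
# "abused" answers in the small group (the 1/a term) dominates the standard error.
```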

The weakness in her statistics is buried in the text, in a little white lie about “confidence intervals”. In most cases, where she wants to make a statement like “homosexuals are more likely to have been abused as a child”, what she should actually say is “with 95% confidence, homosexuals are either more or less likely to have been abused as a child”, which, as you know, means absolutely nothing.

A confidence interval measures the uncertainty we have about generalizing a finding to a wider population given our sample size. Wide confidence intervals (such as the ones in this study) indicate that the sample is too small to give any useful statistical power, and that more data should be collected. A confidence interval for an odds ratio that includes the value 1 means “we can’t say either way” or “too close to call”. An ethical researcher would have made that quite clear.
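To make that concrete, here’s the same back-of-the-envelope arithmetic run twice with invented proportions: once with a small exposed group, and once with ten times as many people at exactly the same underlying rates. Only the second version lets you say anything about direction.

```python
# Same invented proportions (15% vs 12%), two sample sizes: the interval
# straddles 1 when the group is small and tightens up when it isn't.
from math import exp, log, sqrt

def or_with_ci(a, b, c, d):
    """Odds ratio with a Woolf 95% confidence interval for a 2x2 table."""
    or_ = (a / b) / (c / d)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)

for scale in (1, 10):
    a, b = 15 * scale, 85 * scale   # small group: exposed / not exposed
    c, d = 1200, 8800               # large group: exposed / not exposed
    or_, lo, hi = or_with_ci(a, b, c, d)
    verdict = "can't say either way" if lo <= 1 <= hi else "direction is clear"
    print(f"n_small={a + b:>5}: OR={or_:.2f}, CI [{lo:.2f}, {hi:.2f}] -> {verdict}")
```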

Looking at the raw stats, as a researcher I would be kind of disappointed. Only one of the stats relating to gay people and adverse childhood events (the “treatment”, in odds-ratio terms) had a confidence interval that didn’t include 1 (this was sexual assault, at 1.4 to 5), and even that is rather tenuous if you ask me, given how broad the interval is. I am happy for her to report the stats where there actually is statistical power, but the media have been erroneously treating every one of her claims as statistically significant.

Add to this the fact that the researcher explicitly chose not to separate out the 18% of gay people who said they had never had gay sex (and the whopping 30% of bisexuals who reported the same), and her conclusions become very shaky indeed. She is conflating two of the three components of sexuality, sexual identity and sexual behavior, treating them as the same thing while at the same time appealing to these constructs to validate her research.

So, why am I annoyed? Researchers into the effects of drugs pull this sort of thing all the time: lies, damned lies and statistics, as they say. Well, soon after this researcher got her 15 minutes on TV, my mum called me, and one of her half-pleading questions was “what did we do wrong?”

Also, a friend of mine is a gay man who was abused as a child. He knew he was gay before he was abused, but at least one doctor has told him that he is gay because of the abuse. It’s an insidious thought, and a compelling one, because if homosexuality is the effect of some negative event, then it can be treated, and if it can be treated it can be “cured” and people made normal again.

My advice: sharpen your pencil and get a bigger sample if you want to make the generalizations you’re claiming. Better yet, have a think about the questions you’re asking and whether those questions even make sense.