If you use drugs, you must be very naive or very principled to answer a survey honestly

The drug numbers: Not as easy to measure as you think. MA Cabrera Luengo, CC BY-SA

Two different sets of statistics on Scottish illegal drug use have been published in the past few days – the Scottish drug misuse database (SDMD), which records initial assessments for specialist drug treatment; and the Scottish crime and justice survey: drug use (SCJS), which examines self-reported illicit drug use by adults aged 16 and over in Scotland.

The Scottish drug misuse database reports that the number of users entering treatment currently stands at a rate of 222 per 100,000 population. The number peaked at 246 six years ago, but has fluctuated around 220 in recent years. In contrast, the Scottish crime and justice survey reports a continuous decline in the proportion of adults admitting to using illegal drugs, from 7.6% in 2008/09 to just 6.2% in the most recent findings.

Looking for clues

Why might trends in numbers of recorded drug treatment clients and self-reported users differ? A clue to one explanation comes from their age profiles. The treatment figures report record proportions of total assessments among the over-40s (and 66% of all assessments among the over-30s). In contrast, the survey figures are highest among young people (aged 16-24), yet this is also the group showing the greatest decline (from 23.5% to 16.4% in five years).

Another clue might come from the drugs involved, with heroin the most prevalent drug in the treatment data and cannabis in the survey. These reports may be describing different populations. We may be witnessing a long-term decline in drug use among young people, and a time-lag effect among those who began use during an era of higher youth drug use and are now ageing into treatment. Both of these observations may be true, but they are unlikely to be the whole story.

These data sets illustrate the problems researchers face when interpreting evidence of hidden behaviours. If drug use is illegal, and problem use is stigmatised, then estimates of its prevalence can seem “doubly dark figures”. Like most official records, the SDMD data is perhaps the more robust of the two. Even though treatment figures might reflect willingness to present, or availability of services, we can at least know how many (problem) users enter treatment. We cannot know how many (less problematic) users are in the population.

Snorts about self-reports

One method of estimating general population drug use is the self-report household survey, but, as the SCJS acknowledges, this has limitations. The first is sampling – household surveys miss groups lacking a fixed address (the homeless, prisoners, the hospitalised, armed services personnel, students in halls, and so on). Many of these are sub-populations who may be particularly at risk. This might explain why the British Crime Survey 2001/02 (BCS) found England’s northwest (including Manchester and Liverpool) to have zero heroin or crack users.

A second, more serious concern with surveys is under-reporting. It is possible to catch out those who pretend to take drugs, though one must wonder what would motivate such pretence (the SCJS asks questions about a fake drug – anyone answering positively about it has their survey results discarded). But we must equally wonder why anyone would admit to using drugs, with all the losses such a confession might entail, when it is easier just to say no to the researcher. Such denial is more difficult to spot, although patterns in the data can provide clues.
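The fake-drug screen described above can be sketched in a few lines. This is a minimal illustration, not the actual SCJS processing: the fictitious drug name and data layout here are assumptions for the example (the British Crime Survey famously used the invented drug “semeron” for the same purpose).

```python
# Illustrative sketch of a "fake drug" validity screen: any respondent
# claiming use of a fictitious substance has ALL their answers discarded,
# on the assumption they are over-reporting or answering carelessly.
# The drug name and record format below are hypothetical.

FAKE_DRUG = "semeron"  # fictitious substance used as a validity trap

def screen_responses(responses):
    """Drop every respondent who reports using the fake drug."""
    return [r for r in responses if not r["drugs_used"].get(FAKE_DRUG, False)]

responses = [
    {"id": 1, "drugs_used": {"cannabis": True, FAKE_DRUG: False}},
    {"id": 2, "drugs_used": {"cannabis": True, FAKE_DRUG: True}},   # caught out
    {"id": 3, "drugs_used": {"cannabis": False, FAKE_DRUG: False}},
]

valid = screen_responses(responses)  # respondent 2 is excluded entirely
```

Note that this trap only detects over-reporting; as the paragraph above argues, the quiet denier who simply answers “no” leaves no such trace in the data.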

Surveys like the SCJS and BCS regularly find that drug use is related to housing tenure, with more use reported in the private rented sector. The lowest rates are found in owner-occupied properties, with social/council housing in the middle (the figures for this SCJS were 12.2%, 3.5% and 10.5% respectively). This hints that the more you have to lose, the less likely you are to risk it. Australian urinalysis research found under-reporting to be greatest among those who live in their own home, are employed full-time, and are aged over 30.

Closer to home, according to the Scottish Crime Survey, the proportion claiming they had ever used drugs stood at 46% among 20-24 year-olds in 1996, but had fallen to only 35% among 25-29 year-olds in 2000 – broadly the same cohort, four years older. This hinted at a “maturity recanting effect” and challenges the declines by age found in surveys. Of course, these data were from the era of record reported usage, and people who believe in self-report studies maintain that if the method stays the same then we will at least know whether the trend is up or down. But do we?

Would you answer questions on tablets? epsos.de, CC BY-SA

Using tablets to ask about tablets

In 1996 data was not recorded on a tablet, as it is by the SCJS now. We are in an era of scares about hacking, internet snooping and past social-media indiscretions costing careers. We are talking about a government-sponsored crime survey within your own home, answered by a single touch of the screen. If you used drugs today, you would have to be either very naive or very principled to answer honestly. As one drugs researcher put it a few years ago, “users of such surveys need to continually ask whether reported differences in drug use reflect actual differences in use or differences in willingness-to-disclose use”.

Classroom surveys, where the researcher doesn’t actually know where participants live, tend to report more drug use. And if you are conducting anonymous street research, the task can become finding non-using control samples. But then you have the problem that more honest disclosure about drugs can be offset by less honest disclosure, or non-disclosure, of contact details. Follow-up face-to-face interviews can build a rapport with participants that encourages disclosure. In sum, the context and method of drugs research can dictate its findings.

Last week saw #supportdontpunish drug users day. Perhaps such campaigning will help to improve disclosure, as would decriminalisation. People are not entirely accurate when self-reporting their alcohol consumption either, but that’s another story.