Daily Bulletin


The Conversation

  • Written by Tim Dean, Editor, The Conversation

We take science seriously at The Conversation and we work hard at reporting it accurately. This series of five posts is adapted from an internal presentation on how to understand and edit science by Australian Science & Technology Editor, Tim Dean. We thought you would also find it useful.

The first two parts of this guide were a brief (no, seriously) introduction to what science is, how it works and some of the errors that can seep into the scientific process. This section will speak more explicitly about how to report, edit (and read) science, and some of the pitfalls in science journalism.

It’s primarily intended as a guide for journalists, but it should also be useful to those who consume science articles so you can better assess their quality.

What’s news?

The first question to ask when considering reporting on some scientific discovery is whether it’s worth reporting on at all.

If you randomly pick a scientific paper, the answer will probably be “no”. It doesn’t mean the study isn’t interesting or important to someone, but most science is in the form of incremental discoveries that are only relevant to researchers in that field.

When judging the broader public importance of a story, don’t only rely on university press releases.

While press releases can be a useful source of information once you decide to run a story, universities have a vested interest in promoting the work of their own scientists. So a release may oversell the research, or simplify it to make it more likely to be picked up by a journalist.

In fact, there’s evidence that a substantial proportion of poorly reported science can be traced back to poorly constructed press releases. Many releases are accurate and well researched, but as with any press release, it’s worth double-checking their claims.

University communications teams also don’t necessarily do exhaustive homework on each study they write about, and can sometimes make inaccurate claims, particularly in terms of how new or unique the research is.

I once fielded a snarky phone call from a geneticist who objected to a story I wrote on the first-ever frog genome. Turns out it wasn’t the first ever. The geneticist had sequenced a frog genome a year prior to this paper. But “first ever” was in the university press release, and I neglected to factcheck that claim. My bad; lesson learned. Check your facts.

Impact and curiosity are not space probes

Broadly speaking, there are two kinds of science story: impact and curiosity.

Impact stories have some real-world effect that the reader cares about, such as a new drug treatment or a new way to reduce greenhouse gas emissions.

A curiosity story, on the other hand, has little or no immediate or direct real-world impact. These include just about every astronomy story, and things like palaeontology and stories about strange creatures at the bottom of the sea. Of course, such research can produce real-world benefits, but if the story is about those benefits, then it becomes an impact story.

The main difference between the two is in the angle you take on reporting the story. And that, in turn, influences whether the story is worth taking on. If there’s no obvious impact, and the curiosity factor is low, then it’ll be a hard sell. That doesn’t mean it’s not possible to turn it into a great yarn, but it will take more imagination and energy – and we all know they’re often in short supply, especially when deadlines loom.

[Image: NASA and ESA are great sources for illustrative imagery – and astronomy, of course. Most of their images are public domain or Creative Commons, so free to use with appropriate attribution. NASA/ESA]

If the study looks like it has potential to become a good story, then the next thing to check is whether it’s of high quality.

The first thing to look for is where it was published. Tools like Scimago, which ranks journals, can be a helpful start.

If it’s published in a major journal, or a highly specialised one from a major publisher, at least you know it’s cleared a high peer-review bar. If you’ve never heard of the journal, then check Retraction Watch and Beall’s list for signs of dodginess.

If it’s a meta-analysis or systematic review – a study that compiles the results of multiple other studies – such as those produced by Cochrane, that makes it more reliable than a single standalone study.

As mentioned above, be wary of preprint servers, as papers posted there haven’t yet been peer-reviewed. Be particularly wary of big claims made in preprints. You might get a scoop, but you might also be publicising the latest zero-point-energy perpetual-motion ESP hat.

It’s not hard to Google the lead authors (usually those listed first and last in the paper’s author list). Check their profile page on their institution’s website. Check whether the institution is reputable. ANU, Oxford and MIT are. Upstate Hoopla Apologist College is probably not.

Check their academic title, whether they work in a team, and where they sit in that team. Adjunct, honorary and emeritus usually means they’re not actively involved in research, but doesn’t necessarily mean they aren’t still experts. You can also punch their name into Google Scholar to see how many citations they have.

You should also read the abstract and, if possible, the introduction and discussion sections of the paper. This will give you an idea of the approach taken by the authors.

Red flags

While it’s unlikely that you’ll be qualified to judge the scientific merits of the study in detail, you can look for red flags. One is the language used in the study.

Most scientists have any vestige of personality hammered out of their writing by a merciless academic pretension that a dry passive voice is somehow more authoritative than writing like a normal human being. It’s not, but nevertheless if the paper’s tone is uncomfortably lively, vague or verging on the polemical, then treat it with suspicion.

You can also look for a few key elements of the study to assess its quality. One is the character of the cohort. If it’s a study conducted on US college students (who are known to be “WEIRD”: Western, Educated, Industrialised, Rich and Democratic), don’t assume the results will generalise to the broader population, especially outside of the United States.

Another is sample size. If the study is testing a drug, or describing some psychological quirk, and the sample size is under 50, the findings will not be very strong. That’s just a function of statistics.

If you flip a coin only 10 times and it comes up heads 7 times, it’s a lot harder to be confident the coin is biased than if you flip it 100 times and it comes up heads 70 times. In the first case, the p-value – the chance of a fair coin coming up heads 7, 8, 9 or 10 times – is 0.17, so not quite “significant”. In the second, the p-value is 0.000039: very, very “significant”.
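Those p-values can be checked directly with an exact binomial tail sum. This is a minimal sketch using only the Python standard library; the function name is my own, not from the article.

```python
from math import comb


def binom_p_value(n, k, p=0.5):
    """One-sided p-value: the chance of seeing k or more heads in n flips
    of a coin whose true heads probability is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))


print(round(binom_p_value(10, 7), 2))    # ≈ 0.17: not quite "significant"
print(round(binom_p_value(100, 70), 6))  # ≈ 0.000039: very "significant"
```

The same proportion of heads (70%) gives wildly different p-values at different sample sizes, which is exactly why small studies warrant caution.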

Also check what the study says about causation. Many studies report associations or correlations, such as that college students who drink and smoke marijuana tend to have lower grades than their peers. But correlation doesn’t imply causation.

It might be that there is a common cause for both phenomena. Perhaps those students who are more likely to choose to drink and smoke are predisposed towards distraction, and it’s the distraction that causes the lower grades rather than the content of the distraction per se.

So never imply causation when a study only reports correlation. You can speculate as to causation – many studies do – but do so in context and with appropriate quotes from experts.

Many studies are also conducted on animals, especially medical studies. While it’s tempting to extrapolate these results to humans, don’t.

It’s not the case that we’ve cured cancer because a drug made it disappear in a mouse. It’s not even the case that we’ve cured cancer in mice (which would still be big news in some circles).

What we’ve found is that application of some drug corresponded with a shrinkage of tumours in mice, and that’s suggestive of an interesting interaction or mechanism that might tell us something about how the drug or cancers work, and that might one day inform some new treatment for cancers in people. Try fitting that into a pithy headline. If you can’t, then don’t overhype the story.

Many impact stories also involve a long wait before the impact actually arrives. Be wary of raising expectations by implying the discovery might start treating people right away. Optimistically, most health studies are at least ten years away from practical application, often more.

[Image: Generic telescope image set against a picturesque Australian outback background. It’s actually CSIRO’s ASKAP antennas at the Murchison Radio-astronomy Observatory in Western Australia. CSIRO has a great image library. Neal Pritchard]

Sources

It’s good practice to link to sources whenever you make an empirical claim. But don’t just Google the claim and link to the first paper or news story you find. Make sure the source backs the whole claim, not just part of it. So don’t claim that people generally overestimate risk and then link to a paper showing they overestimate risk only in one small domain.

When linking to a source, preferably use the DOI (Digital Object Identifier). It’s like a URL for academic papers, and when you link to it, it will automatically shunt the reader through to the paper on the journal site.

DOIs usually come in the form of a bunch of numbers, like “10.1000/xyz123”. To turn that into a full DOI link, put “https://doi.org/” at the beginning. So “10.1109/5.771073” becomes https://doi.org/10.1109/5.771073. Go on, click on that link.
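The conversion described above is a simple prefix operation. Here is a minimal sketch (the function name is my own invention) for anyone building links programmatically:

```python
def doi_url(doi: str) -> str:
    """Prepend the doi.org resolver to a bare DOI like '10.1109/5.771073'."""
    return "https://doi.org/" + doi.strip()


print(doi_url("10.1109/5.771073"))  # https://doi.org/10.1109/5.771073
```

The doi.org resolver then redirects the reader to the paper’s page on the publisher’s site.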

As a rule, try to link directly to the journal article rather than a summary, abstract, PubMed listing, blog post or review of the paper elsewhere. Don’t link to a PDF of the paper on the author’s personal website unless the author or the journal has given you explicit permission, as you may be breaching copyright.

And definitely avoid linking to the university press release for the paper, even if that release has the paper linked at the bottom. Just link the paper instead.

This article was updated with corrected p-values on the coin flip example. Thanks to Stephen S Holden for pointing out the error, and for highlighting both the difficulty of statistics and the importance of double checking your numbers.


Read more http://theconversation.com/how-we-edit-science-part-3-impact-curiosity-and-red-flags-74548
