Short, simple abstracts… aren’t cited as much as those that tend to maximize the high-verbosity quotient.

Laboratory Equipment has sad news for those of us who like straight, simple, elegant communication. It appears that scientific articles with abstracts packed with (unnecessary and obfuscatory) jargon are cited more often:

The study, published in PLoS Computational Biology, suggests that most general writing rules are not as effective in scientific publications, which may reflect the influence of online search upon how science is discovered and consumed today.

“What I think is funny is there’s this disconnect between what you’d like to read, and what scientists actually cite,” said Stefano Allesina, senior author of the study. “It’s very suggestive that we should not trust writing tips we take for granted.”

During a seminar for incoming graduate students on how to write effective abstracts, Allesina wondered whether there was hard evidence for the “rules” that were taught. So Allesina and Cody Weinberger, a University of Chicago undergraduate, gathered hundreds of writing suggestions from scientific literature and condensed them into “Ten Simple Rules,” including “Keep it short,” “Keep it simple,” “Signal novelty and importance,” and “Show confidence.”

The authors, who also include James Evans, University of Chicago associate professor of sociology, Computation Institute Senior Fellow and director of Knowledge Lab, then collected one million abstracts from disciplines such as biology, chemistry, geology and psychology, and tested how the above rules affected a paper’s citations, relative to other papers in the same journal. For example, the test of “Keep it short” looked at the relationship between the number of words or sentences in an abstract and subsequent citations.

This particular analysis found that shorter abstracts led to fewer citations across all disciplines tested — a refutation of the idea that “brief is better.” Other tests found that using more adjectives, adverbs, uncommon words, signals of novelty and importance and “pleasant” words boosted citations, despite frequent warnings or rules against using each of these features.

….

Now, what’s interesting here is the *why*. The researchers suspect this isn’t because readers search out the wordy and overly technical. It’s because most people looking for articles do so by using computers – they search an online database. And the more terms an author uses to describe an experiment, the more likely that experiment will turn up in search results.
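To see the logic, here’s a toy sketch of mine, not anything from the study: assume a database that treats an abstract as a bag of words and returns it only when it contains every term in a query. The wordier abstract then answers more possible queries. The abstracts and queries below are invented purely for illustration.

```python
import re

# Toy illustration (not the study's analysis): a naive keyword search over abstracts.
# A query "hits" an abstract only if every query term appears in it, so an abstract
# that uses more distinct terms is retrievable by more possible queries.

def tokens(text: str) -> set[str]:
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def matches(query: str, abstract: str) -> bool:
    """True if every query term occurs somewhere in the abstract."""
    return tokens(query) <= tokens(abstract)

# Two made-up abstracts describing the same (hypothetical) result.
terse = "We report a mutation that alters enzyme activity."
verbose = ("We report a novel, unexpected point mutation that substantially alters "
           "enzyme kinetics, catalytic activity, and substrate specificity.")

for q in ["mutation enzyme", "novel mutation", "enzyme kinetics", "substrate specificity"]:
    print(f"{q!r}: terse={matches(q, terse)}  verbose={matches(q, verbose)}")
```

Under that (admittedly crude) model, the terse abstract only turns up for the first query, while the padded one turns up for all four.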

The test for this, I suppose, would be to write an abstract whose second half was entirely (and nonsensically) keywords, like an old-style HTML keywords header: research study “science project” “p-value” stats statistics “statistical analysis” regression quantitative words words MORE WORDS!