Category Archives: Science Practice

The Bezos Scholars Program at the World Science Festival

The World Science Festival is a place where one goes to see the giants of science, many of whom are household names (at least in scientifically inclined households) like E. O. Wilson, Steven Pinker and James Watson, people at the top of their game in their scientific fields, as well as science supporters in other walks of life, including entertainment—Alan Alda, Maggie Gyllenhaal and Susan Sarandon were there, among others—and journalism (see this for an example, or check out more complete coverage of the Festival at Nature Network).

With so many exciting sessions, panels and other events at the Festival, it was hard to choose which ones to attend. One of the events I especially wanted to see centered on the other end of the spectrum—the youngest researchers, just getting to taste the scientific life for the first time in their lives.

On the morning of Saturday the 4th, four high school seniors from New York schools presented their research at the N.Y.U. Kimmel Center. This is the second year that the program, the Bezos Scholars Program, sponsored jointly by the Bezos Family Foundation and the World Science Festival, has taken place.

Each student starts the program as a high school junior and, with mentoring by a science teacher and a scientist or engineer in the community, spends a year working on a research project. At the end of the year, the students present their findings at the Festival, meet the senior scientists and attend other events, all expenses paid by the Bezos Family Foundation.

The event has not, so far, been broadly advertised by the Festival, probably to avoid crowds in the thousands assembling and giving the students stage fright. Still, the room was filled with dozens of local scientists, writers and educators, and the students certainly did not disappoint.

It is important to note here that a big part of the organization, coordination and coaching was done by Summer Ash (see also).

The projects

To summarize the research projects, I asked Perrin Ireland to provide cartoon versions of the presentations. Perrin Ireland is a graphic science journalist who currently serves as Science Storyteller at AlphaChimp Studio, Inc. She uses art and narrative to facilitate scientists sharing their stories, and creates comics about the research process.

More importantly for us here, unlike most of us who write notes when attending presentations, Perrin draws them. You can find more of Perrin’s work at Small and Tender, and follow her on Twitter at @experrinment.

“Aluminum Ion-Induced Degeneration of Dopamine Neurons in Caenorhabditis elegans”

First up was Rozalina Suleymanova from Bard High School Early College Queens. Her teacher is Kevin Bisceglia, Ph.D. and her mentor is Dr. Maria Doitsidou from The Hobert Laboratory in the Department of Biochemistry & Molecular Biophysics at Columbia University Medical Center.

Aluminum is found in the brain tissue of Alzheimer’s patients. It is reasonable to hypothesize that aluminum can also affect neurons in other neurodegenerative diseases, such as Parkinson’s. In Parkinson’s, it is the neurons that secrete dopamine that are affected.

Human brains are large and complex, but the nematode worm Caenorhabditis elegans has a simple nervous system in which every individual neuron (out of a total of 302) is known – where it is, what neurotransmitter it uses, and what function it performs. It is also an excellent laboratory model organism, with easy husbandry and breeding, short lifespan, and genetic techniques in place.

What Rozalina Suleymanova did was make a new strain of C. elegans in which only the eight dopamine-releasing neurons express green fluorescent protein, which allowed her to see them under an epifluorescence microscope.

She then exposed the worms to different doses of aluminum(III), in the form of AlCl3, either as acute exposure (30 minutes at a high concentration) or as chronic exposure (12 days of continuous exposure at a lower concentration).

Under the acute regimen, some worms died (how many depended on the concentration), but the worms that survived showed no changes in their dopamine neurons. Under chronic exposure, all worms survived, and only a very small proportion (not different from chance) showed some minor changes in the dopamine neurons. Thus, essentially negative results (hard to publish), but excellent work!

“The Structural Stability of Trusses”

Next up was Matthew Taggart from the NYC LAB School for Collaborative Studies, with his teacher Ali Kowalsky and his mentor Jeremy Billig, P.E., Senior Engineer at McLaren Engineering Group in NYC.

He used a program called Risa3D to build virtual bridges. The program enabled him to test the design of bridges built of iron trusses.

By varying the heights (‘depth’) and widths (‘span’) of trusses and applying a vertical downward force onto them until they broke, he discovered that it is the height-to-width ratio, not either dimension alone, that determines the strength and resistance of this kind of bridge design.
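This kind of ratio effect has a counterpart in elementary beam theory. The sketch below is my own toy illustration (standard simply-supported-beam formulas, not the actual Risa3D truss analysis, with made-up values for stiffness E and width b): the midspan stiffness of a rectangular beam under a point load is k = 48EI/L³ with I = bd³/12, so k scales with (d/L)³ — the depth-to-span ratio, not depth or span alone.

```python
# Toy beam model (illustration only -- not the Risa3D truss analysis).
# For a simply supported rectangular beam under a midspan point load:
#   deflection = P * L^3 / (48 * E * I),  I = b * d^3 / 12
# so the midspan stiffness k = 48*E*I/L^3 is proportional to (d/L)^3.

E = 200e9   # Pa, Young's modulus (assumed, roughly steel)
b = 0.1     # m, cross-section width (held fixed)

def stiffness(depth, span):
    I = b * depth**3 / 12.0          # second moment of area
    return 48.0 * E * I / span**3    # midspan stiffness, N/m

# Two designs with very different absolute sizes but the same
# depth-to-span ratio come out equally stiff:
print(stiffness(0.5, 10.0))   # d/L = 0.05
print(stiffness(1.0, 20.0))   # d/L = 0.05, same stiffness
```

The same qualitative conclusion Matthew reached with his virtual bridges: scale the whole design up or down and the ratio, not the absolute dimensions, governs the response.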

Needless to say, these kinds of calculations are performed during the actual design of infrastructure – using a computer program first, verifying by hand calculations second, then building test designs before starting the real construction.

“Identifying Presence of Race Bias Among Youth”

Saba Khalid from the Brooklyn Technical High School was the third student researcher up on stage, accompanied by her teacher Janice Baranowski, and her mentor Dr. Gaëlle C. Pierre from the Department of Psychology at NYU School of Medicine.

She devised a questionnaire, based on some older literature on race perception, and distributed it to the students at her school. Each question showed five pictures of dolls, each with a different skin tone, and asked which of the five dolls is most likely to be working in a particular profession.

Saba Khalid then analyzed the data, correlating the responses with the race/ethnicity of the respondents, their socio-economic status and other parameters.

Out of many different responses, Saba Khalid pointed out three examples that are in some way typical. For one, respondents of all races predominantly pointed to the darkest doll as the likely employee of a fast-food restaurant. At the other end, most respondents of all races chose the lightest doll for the profession of pilot. Interestingly, for the profession of teacher, the answers were quite evenly spread, with some tendency for respondents to pick the doll closest to their own skin color.
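The tabulation step behind results like these can be sketched in a few lines of code. The responses below are invented for illustration; the actual questionnaire and data were Saba Khalid’s.

```python
# Illustrative tally of doll choices per profession (made-up responses).
from collections import Counter

# (respondent_group, profession, doll_chosen) with dolls numbered
# 1 = lightest skin tone .. 5 = darkest
responses = [
    ("A", "pilot", 1), ("B", "pilot", 1), ("A", "pilot", 2),
    ("A", "fast-food", 5), ("B", "fast-food", 5), ("B", "fast-food", 4),
    ("A", "teacher", 2), ("B", "teacher", 4),
]

# Count how often each doll was chosen for each profession
by_profession = {}
for group, prof, doll in responses:
    by_profession.setdefault(prof, Counter())[doll] += 1

for prof, counts in sorted(by_profession.items()):
    print(prof, dict(counts))
```

From a table like this, the choices can then be broken down further by the respondent group to look for correlations with the respondents’ own background.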

“Proactive and Reactive Connection Relevance Heuristics In A Virtual Social Network”

Finally, Tyler A. Romeo from the Staten Island Technical High School took the stage. His teacher is Frank Mazza and his mentor is Dr. Dennis Shasha from the Department of Computer Science at the Courant Institute of Mathematical Sciences at NYU.

There are two ways an online service can make recommendations to its users. One method tracks the user’s prior choices and recommends items that are similar in some way. Think of Amazon.com recommending books similar to the books you have looked at or ordered. The other method is collaborative filtering – the site recommends items that other users who are similar to you have liked in the past.

What Tyler Romeo did was recruit eight volunteers from his school who are active Facebook users and write an app for them to install. The app analyzed these users’ prior behavior – which items they found interesting (by commenting or “Like”-ing them) – using the type and length (but not the content) of each post as key parameters. Tyler then used a support vector machine to predict which new items on the participants’ Facebook walls would be considered interesting by others.
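The core idea can be sketched in a few lines. In this toy version, a simple perceptron-style linear classifier stands in for the support vector machine, and the features and data are invented: each post is reduced to a (type, length) feature vector and a binary “interesting” label, and a linear decision boundary is learned from past behavior.

```python
# Toy stand-in for the SVM approach (illustration only; features,
# data and the classifier itself are simplified inventions).
# Each post: (type, length, label) where type 0 = status, 1 = link,
# length is in characters, and label 1 = "interesting" (got Likes).
posts = [
    (0,  20, 0), (0,  35, 0), (0,  50, 0),   # short statuses: ignored
    (1, 120, 1), (1, 200, 1), (1, 300, 1),   # longer link posts: liked
]

def train(data, epochs=50, lr=0.01):
    w = [0.0, 0.0]; b = 0.0                  # weights for (type, length/100)
    for _ in range(epochs):
        for t, n, y in data:
            x = (t, n / 100.0)               # crude feature scaling
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = y - pred                   # perceptron update rule
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b    += lr * err
    return w, b

w, b = train(posts)

def interesting(post_type, length):
    """Predict whether a new post would be found interesting."""
    return w[0]*post_type + w[1]*(length/100.0) + b > 0

print(interesting(1, 250))   # a long link post  -> True
print(interesting(0, 15))    # a short status    -> False
```

A real SVM additionally maximizes the margin between the two classes (and can use kernels for non-linear boundaries), but the workflow – featurize past posts, fit a classifier, score new posts – is the same.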

What Tyler concluded was that a support vector machine may be able to predict which posts users will find interesting. Also, “cleaning up” the Facebook Walls to include only the “interesting” posts improved the overall quality of the posts compared to a random feed, which can possibly lead to an improved experience for the user.

***

After the event, several of us in the audience agreed that the quality of the work we had just seen was definitely higher than expected for high school – college level for sure, and the nematode work probably as good as a Masters project. Also, the way they handled the presentations gave us confidence to ask tough questions and not treat them too gently just because they are so young.

And it is there, during the Q&A sessions, where they really shone and showed that they truly own their research and are not just well coached by Summer Ash and their mentors. They understood all questions, addressed every component of multi-component questions, demonstrated complete grasp of the issues, and always gave satisfactory answers (and yes, sometimes saying “I don’t know” is a satisfactory answer even if you are much older than 18 and not just entering the world of science).

They identified weaknesses in their experiments, and suggested good follow-up experiments for the future. I was deeply impressed by their focus and presence of mind – I know for myself how hard it is to do a good Q&A session after giving a presentation. They are definitely going places – I hope they choose careers in science as they have what it takes to succeed.

***

My first thought, after being so impressed by the presentations, was: why only four students? There must be many more talented students in New York schools, with aptitude for and interest in science and engineering.

Finding the right match between three very busy people—the student, the teacher and the mentor—and then coordinating their times and sustaining the work and enthusiasm for an entire year must be quite a challenge.

I am wondering how much a program like this can be scaled up to include more students. It may also be easier to run such a program in a city that is smaller, slower and less competitive than New York City – one where fewer such educational organizations exist, but where they are more likely to see each other as collaborators than as competitors. It would be interesting to see how well similar programs do in other places. But for now, clearly, New York City takes the lead. Great job!

Stories: what we did at #WSF11 last week

As you probably know, I spent last week in New York City, combining business with pleasure – some work, some fun with friends (including #NYCscitweetup with around 50 people!), some fun with just Catharine and me, and some attendance at the World Science Festival.

My panel on Thursday afternoon went quite well, and two brief posts about it went up quickly on Nature Network and the WSF11 official blog.

But now, there is a really thorough and amazing piece on it, combining text by Lena Groeger (who also did a great job livetweeting the event) with comic-strip visualization of the panel by Perrin Ireland – worth your time! Check it out: All about Stories: How to Tell Them, How They’re Changing, and What They Have to Do with Science

More about the trip and the Festival still to come…

Update: See also coverage at Mother Geek.

Scientific Communication all-you-can-eat Linkfest

About a week ago, Catherine Clabby (editor at American Scientist), Anton Zuiker and I did a two-day workshop on science communication with the graduate students in the Biology Department at Wake Forest University in Winston-Salem, NC. Here are some of the things we mentioned and websites we showed during those two days.

Links shown by Anton for the personal web page session:

Official homepage of Aaron Martin Cypess, M.D., PH.D.
Stanford Medicine faculty profiles
Web Pages That Suck
Anton Zuiker (old homepage)
Biology: Faculty at Wake Forest
Official homepage of Thomas L. Ortel, MD, PhD
Official homepage of Matthew Hirschey
About.me
Joe Hanson’s About.me page
Jakob Nielsen’s Utilize Available Screen Space
Official homepage of Jacquelyn Grace
Laboratory and Video Web Site Awards
Web Style Guide

Link to the step-by-step page for creating a WordPress blog:

Simple exercises for creating your first blog

Links shown during the social media session:

Anton’s Prezi presentation
Delicious link sharing
Twitter and a tweet
Facebook – you know it, of course. Here’s the fish photo
LinkedIn
Tumblr
Posterous
Bora’s take on Tumblr and Posterous

==========================

From Cathy Clabby:

References:

Good books on writing well:

Writing Tools: 50 Essential Strategies for Every Writer by Roy Peter Clark
On Writing Well by William K. Zinsser
Eats, Shoots & Leaves by Lynne Truss
The Elements of Style by William Strunk and E.B. White

Excellent articles on how to avoid gobbledygook when writing about science:

Deborah Gross and Raymond Sis. 1980. Scientific Writing: The Good, The Bad, and The Ugly. Veterinary Radiology
George Gopen and Judith Swan. 1990 The Science of Scientific Writing. American Scientist

Web resources for good-writing advice:

Websites with smart writing advice:

Roy Peter Clark from the Poynter Institute offers these 50 “quick list” writing tools.
Purdue University’s OnLine Writing Lab
Carl Zimmer’s banned words (updated regularly on The Loom, his Blog)

You are what you read:

Newsstand magazines with excellent science writing:

The New Yorker
Discover
National Geographic
Scientific American
American Scientist
Outside

Books featuring clear, vivid science writing:

The Beak of the Finch by Jonathan Weiner
The Map that Changed the World by Simon Winchester
The Lives of a Cell: Notes of a Biology Watcher by Lewis Thomas
Galileo’s Daughter by Dava Sobel
The Making of the Atomic Bomb by Richard Rhodes
The Emperor of All Maladies by Siddhartha Mukherjee
The Best American Science Writing (a yearly anthology with a changing cast of guest editors)
The Best American Science and Nature Writing (another yearly anthology)

==========================

From Bora:

Workshop on conferences in the age of the Web:

How To Blog/Tweet a Conference:
How To Blog a Conference
On the challenges of conference blogging
What a difference a year makes: tweeting from Cold Spring Harbor

How to present at a conference mindful of Twitter backchatter:

How the Backchannel Has Changed the Game for Conference Panelists
On organizing and/or participating in a Conference in the age of Twitter

Icons to put on your slides and posters:

Creating a “blog-safe” icon for conference presentations: suggestions?
CameronNeylon – Slideshare: Permissions
Andy and Shirley’s new ONS Logos

A good recent blog post about the changes in the publishing industry (good links within and at the bottom):
Free Science, One Paper at a Time

Open Notebook Science:
Open Notebook Science
UsefulChem Project
Open Science: Good for Research, Good for Researchers?

A little bit of historical perspective on science, science journalism, blogging and social media (and you can endlessly follow the links within links within links within these posts):
The line between science and journalism is getting blurry….again
Why Academics Should Blog: A College of One’s Own
The Future of Science
Visualizing Enlightenment- Era Social Networks
“There are some people who don’t wait.” Robert Krulwich on the future of journalism
A Farewell to Scienceblogs: the Changing Science Blogging Ecosystem
New science blog networks mushroom to life
Science Blogging Networks: What, Why and How
Web breaks echo-chambers, or, ‘Echo-chamber’ is just a derogatory term for ‘community’ – my remarks at #AAASmtg
Is education what journalists do?
All about Stories: How to Tell Them, How They’re Changing, and What They Have to Do with Science
Telling science stories…wait, what’s a “story”?
Blogs: face the conversation
Identity – what is it really?
Books: ‘Reinventing Discovery: The New Era of Networked Science’ by Michael Nielsen
#scio12: Multitudes of Sciences, Multitudes of Journalisms, and the Disappearance of the Quote.

Where to find science blogs (and perhaps submit your own blog for inclusion/aggregation):
ScienceBlogging.org
ScienceSeeker.org
ResearchBlogging.org

A blog about science blogging, especially for scientists – well worth digging through the archives:
Science of Blogging

A blog post about science that was inspired by a previous post on the same blog:
1000 posts!

A blog post about the way a previous blog post put together a researcher and a farmer into a scientific collaboration:
Every cell in a chicken has its own male or female identity
In which I set up a collaboration between a biologist, a farmer and a chimeric chicken

A blog post demonstrating how to blog about one’s own publication:
The story behind the story of my new #PLoSOne paper on “Stalking the fourth domain of life”
Comments, Notes and Ratings on: Stalking the Fourth Domain in Metagenomic Data: Searching for, Discovering, and Interpreting Novel, Deep Branches in Marker Gene Phylogenetic Trees

Another example:
Comments, Notes and Ratings on: Order in Spontaneous Behavior
Paper explained in video at SciVee.tv
Author’s blog and site. See some more buzz.

Collection of links showing how Arsenic Life paper was challenged on blogs:
#Arseniclife link collection

A blog post about a scientific paper that resulted from a hypothesis first published in a previous blog post:
Does circadian clock regulate clutch-size in birds? A question of appropriateness of the model animal.
My latest scientific paper: Extended Laying Interval of Ultimate Eggs of the Eastern Bluebird

A post with unpublished data, and how people still do not realize they can and should cite blog posts (my own posts have been cited a few times, usually by review papers):
Influence of Light Cycle on Dominance Status and Aggression in Crayfish
Circadian Rhythm of Aggression in Crayfish

Good blogs to follow the inside business of science and publishing:
Retraction Watch
Embargo Watch
DrugMonkey and DrugMonkey

Who says that only young scientists are bloggers (you probably studied from his textbook):
Sandwalk

My homepage (with links to other online spaces) and my blog:
Homepage
Blog
Twitter
Facebook

How to find me on Scientific American:
A Blog Around The Clock

Scientific American and its blogs (new blog network, with additional blogs, will launch soon) and social networks:
Scientific American homepage
Scientific American blogs
Scientific American Facebook page
Scientific American official Twitter account
Scientific American MIND on Twitter
Scientific American blogs on Twitter

Cool videos:
Fungus cannon
Octopus Ballet
The Fracking Song

Why blog?
Science Blogs Are Good For You
To blog or not to blog, not a real choice there…
Bloggers unite
Scooped by a blog
Scientists Enter the Blogosphere
“Online, Three Years Are Infinity”
Studying Scientific Discourse on the Web Using Bibliometrics: A Chemistry Blogging Case Study
The Message Reigns Over the Medium
Networking, Scholarship and Service: The Place of Science Blogging in Academia

Great series of post about scientists using blogs and social media by Christie Wilcox:

Social Media for Scientists Part 1: It’s Our Job
Social Media for Scientists Part 2: You Do Have Time.
Social Media for Scientists Part 2.5: Breaking Stereotypes
Social Media For Scientists Part 3: Win-Win

Why use Twitter?

What is Twitter and Why Scientists Need To Use It.
Twitter: What’s All the Chirping About?
Social media for science: The geologic perspective
Why Twitter can be the Next Big Thing in Scientific Collaboration
How and why scholars cite on Twitter
Researchers! Join the Twitterati! Or perish!
Twitter for Scientists
PLoS ONE on Twitter and FriendFeed

Some good Twitter lists and collections/apps:
Attendees at ScienceOnline2012
Scientific American editors, writers and contributors
SciencePond
The Tweeted Times

Some interesting Twitter hashtags:
#scio12 (chatter about ScienceOnline conference, and discussions within that community)
#scio13 (people already talking about next year’s event)
#SITT (Science In The Triangle, NC)
#madwriting (writing support community)
#wherethesciencehappens (pictures of locations where science happens)
#icanhazpdf (asking for and receiving PDFs of papers hidden behind paywalls)
#scimom – scientists and mothers and scientist-mothers.
#scienceblogging
#sciwri – science writing
#sciart – science and art
#histsci – history of science
#IamScience – a great initiative, see: original blog post, Storify, Tumblr, Kickstarter – and see the related Tumblr: This Is What A Scientist Looks Like

The Open Laboratory anthology of science blogging:
The Open Laboratory – what, how and why
The Open Laboratory at Lulu.com
The Open Laboratory 2011 updates
A couple of Big Announcements about The Open Laboratory

ScienceOnline conferences:
ScienceOnline2011
ScienceOnline2012
ScienceOnline2011 programming wiki
ScienceOnline2012 programming wiki
ScienceOnline2012 homepage
ScienceOnline2012 official blog
ScienceOnline2012 coverage blog
ScienceOnline2012 organizing wiki
ScienceOnline2012 blog and media coverage
ScienceOnline2013 organizing wiki
ScienceOnline participants’ interviews

Probably the best and most current book on science communication for scientists is ‘Explaining Research’ by Dennis Meredith – see the book homepage and the associated blog for a wealth of additional information and updates.

Probably the best book for preparing oral (and to a smaller degree poster) presentations is Dazzle ‘Em With Style: The Art of Oral Scientific Presentation by Robert Anholt.

For posters, dig through the archives of this blog:
Better Posters

The line between science and journalism is getting blurry….again

 

Human #1: “Hello, nice weather today, isn’t it?”

Human #2: “Ummm…actually not. It’s a gray, cold, windy, rainy kind of day!”

Many a joke depends on confusion about the meaning of language, as in the example above. But understanding the sources of such confusion is important in realms other than stand-up comedy, including in the attempts to convey facts about the world to one’s target audience.

In the example above, Human #1 is using Phatic language, sometimes referred to as ‘small talk’ and usually exemplified, at least in the British Isles, with the talk about the highly unpredictable weather. (image: by striatic on Flickr)

Phatic language

Phatic discourse is just one of several functions of language. Its role is not to impart any factual information, but to establish a relationship between people. It conveys things like emotional state, relative social status, alliance, intentions and limits to further conversation (i.e., where the speaker “draws the line”).

If a stranger rides into a small town, a carefully chosen yet meaningless phrase establishes a state of mind that goes something like this: “I come in peace, mean no harm, I hope you accept me in the same way”. The response of the local conveys how the town looks at strangers riding in, for example: “You are welcome…for a little while – we’ll feed you and put you up for the night, but then we hope you leave”. (image: Clint Eastwood in ‘Fistful of Dollars’ from Squidoo)

An important component of phatic discourse is non-verbal communication, as the tone, volume and pitch of the voice, facial expression and body posture modify the language itself and confirm the emotional and intentional state of the speaker.

It does not seem that linguistics has an official term for the opposite – the language that conveys only pure facts – but the term usually seen in such discussions (including the domain of politics and campaigning) is “Conceptual language” so this is what I will use here. Conceptual language is what Human #2 in the joke above was assuming and using – just the facts, ma’am.

Rise of the earliest science and journalism

For the sake of this article, I will use two simplified definitions of science and journalism.

Journalism is communication of ‘what’s new’. A journalist is anyone who can say “I’m there, you’re not, let me tell you about it.”

Science is communication of ‘how the world works’. A scientist is anyone who can say “I understand something about the world, you don’t, let me explain it to you”.

Neither definition necessitates that what they say is True, just what they know to the best of their ability and understanding.

Note that I wrote “science is communication”. Yes, science is the process of discovery of facts about the way the world works, but the communication of that discovery is the essential last step of the scientific process, and the discoverer is likely to be the person who understands the discovery the best and is thus likely to be the person with the greatest expertise and authority (and hopefully ability) to do the explaining.

For the greatest part of human history, none of those distinctions made any sense. Most of communication contained information about what is new, some information about the way the world works, and a phatic component. Knowing how the world works, knowing what is happening in that world right now, and knowing if you should trust the messenger, were all important for survival.

For the most part, the information was local, and the messengers were local. A sentry runs back into the village alerting that a neighboring tribe, painted with war-paints, is approaching. Is that person a member of your tribe, or a stranger, or the well-known Boy Who Cried Wolf? What do you know about the meaning of war-paint? What do you know about the neighboring tribe? Does all this information fit with your understanding of the world? Is information coming from this person to be taken seriously? How are village elders responding to the news? Is this piece of news something that can aid in your personal survival?

For the longest time, information was exchanged between people who knew each other to some degree – family, neighbors, friends, business-partners. Like in a fishing village, the news about the state of fishing stocks coming from the ships at sea is important information exchanged at the local tavern. But is that fish-catch information ‘journalism’ (what’s new) or ‘science’ (how the world works)? It’s a little bit of both. And you learn which sailors to trust by observing who is trusted by the locals you have already learned to trust. Trust is transitive.

Someone in the “in-group” is trusted more than a stranger – kids learned from parents, the community elders had the authority: the trust was earned through a combination of who you are, how old you are, and how trustworthy you tended to be in the past. New messengers are harder to pin down on all those criteria, so their information is taken with a degree of skepticism. The art of critical thinking (again, not necessarily meaning that you will always pick the Truth) is an ancient one, as it was essential for day-to-day survival. You trust your parents (or priests or teachers) almost uncritically, but you put up your BS filters when hearing a stranger.

Emergence of science and of journalism

The invention of the printing press precipitated the development of both journalism and science. But that took a very long time – almost two centuries (image: 1851, printing press that produced early issues of Scientific American). After Gutenberg printed the Bible, most of what people printed were political pamphlets, church fliers and what for that time and sensibilities went for porn.

The London Gazette of 1666 is thought to be the first newspaper in the modern sense of the word. (image: from DavidCo) Until then, newspapers were mostly irregular printings by individuals, combining news, opinion, fiction and entertainment. After this, newspapers gradually became regular (daily, weekly, monthly) collections of writings by numerous people writing in the same issue.

The first English scientific journal was published a year before – the Philosophical Transactions of the Royal Society of London in 1665 (image: Royal Society of London).

Until then, science was communicated by letters – those letters were often read at the meetings of scientists. Those meetings got formalized into scientific societies and the letters read at such meetings started getting printed. The first scientific journals were collections of such letters, which explains why so many journals have the words “Letters”, “Annals” or “Proceedings” in their titles.

Also, before – as well as for quite a long time after – the inception of the first journals, much of science was communicated via books: a naturalist would spend many years collecting data and ideas before putting it all into a long, leather-bound volume. Those books were then discussed at meetings of other naturalists, who would often respond by writing books of their own. Scientists at the time did not think that Darwin’s twenty-year wait to publish The Origin was notable (William Kimler, personal communication) – that was the normal timeline for research and publishing at the time, unusual only to us from a modern perspective of 5-year NIH grants and the ‘publish or perish’ culture.

As previously oral communication gradually moved to print over the centuries, both journalistic and scientific communication occurred in formats – printed with ink on paper – very similar to blogging (that link leads to the post that served as a seed from which this article grew). If born today, many of the old writers, like Montaigne, would be Natural Born Bloggers (‘NBBs’ – term coined by protoblogger Dave Winer). A lot of ship captains’ logs were essentially tweets with geolocation tags.

People who wanted to inform other people printed fliers and pamphlets and books. Personal letters and diaries were meant to be public: they were as widely shared as was possible, they were publicly read, saved, then eventually collected and published in book-form (at least posthumously). Just like blogs, tweets and Facebook updates today….

The 18th century ‘Republic of Letters’ (see the amazing visualization of their correspondence) was a social network of intellectual leaders of Europe who exchanged and publicly read their deep philosophical thoughts, scientific ideas, poetry and prose.

Many people during those centuries wrote their letters in duplicate: one copy to send, one to keep for publishing Collected Letters later in life. Charles Darwin did that, for example (well, if I remember correctly, his wife made copies from his illegible originals into something that recipients could actually read), which is why we have such a complete understanding of his work and thought – it is all well preserved, and the availability of such voluminous correspondence gave rise to a small industry of Darwinian historical scholarship.

What is important to note is that, both in journalism and in science, communication could be done by anyone – there was no official seal of approval, or license, to practice either of the two arts. At the same time, communication in print was limited to those who were literate and who could afford to have a book printed – people who, for the most part, were the wealthy elites. Entry into that intellectual elite from a lower social class was possible but very difficult and required a lot of hard work and time (see, for example, a biography of Alfred Russel Wallace). Membership in the worlds of arts, science and letters was automatic for those belonging to the small group of literate aristocracy. They had no need to establish formalized gatekeeping, as bloodlines, personal sponsorship and money did the gatekeeping job quite well on their own.

As communication has moved from local to global, due to print, trust had to be gained over time – by one’s age, stature in society, track record, and by recommendation – who the people you trust say you should trust. Trust is transitive.

Another thing to note is that each written dispatch contained both ‘what’s new’ and ‘how the world works’ as well as a degree of phatic discourse: “This is what happened. This is what I think it means. And this is who I am so you know why you should trust me.” It is often hard to tell, from today’s perspective, what was scientific communication and what was journalism.

Personal – and thus potentially phatic – communication was the norm in early scientific publishing. For example, see “A Letter from Mr J. Breintal to Peter Collinfon, F. R. S. containing an Account of what he felt after being bit by a Rattle-fnake” in Philosophical Transactions, 1747 – a great account of it can be found at Neurotic Physiology. It is a story of a personal interaction with a rattlesnake and the discovery leading from it. It contained “I was there, you were not, let me tell you what happened” and “I understand something, you don’t, let me explain that to you” and “Let me tell you who I am so you can know you can trust me”.

Apparently, quite a lot of scientific literature of old involved exciting narratives of people getting bitten by snakes – see this one from 1852 as well.

The anomalous 20th century – effects of technology

The gradual changes in society – invention of printing, rise of science, rise of capitalism, industrial revolution, mass migration from rural to urban areas, improvements in transportation and communication technologies, to name just a few – led to a very different world in the 20th century.

Technology often drives societal change. If you have ever been on a horse, you understand why armies that used stirrups defeated armies that rode horses without this nifty invention.

Earlier, news spread much more slowly (see image: Maps of rates of travel in the 19th century – click on the link to see bigger and more). By 1860 the telegraph had reached St. Louis. During its short run, the Pony Express could go the rest of the way to San Francisco in 10 days. After that, the telegraph followed the rails. The first transcontinental line was completed in 1869. Except for semaphores (1794), information before the telegraph (1843) could travel only as fast as a rider or a boat (thanks to John McKay for this brief primer on the history of the speed of communication in North America. I am assuming that Europe was slightly ahead and the rest of the world somewhat behind).

The 20th century saw invention or improvement of numerous technologies in transportation – cars, fast trains, airplanes, helicopters, space shuttles – and in communication – telephone, radio, and television. Information could now travel almost instantly.

But those new technologies came with a price – literally. While everyone could write letters and send them by stagecoach, very few people could afford to buy, run and service printing presses, radio stations and television studios. These things required capital, and increasingly became owned by rich people and corporations.

Each inch of print or minute of broadcast cost serious money. Thus, people were employed to become official filters of information, the gatekeepers – the editors who decided who would get access to that expensive real estate. As editors liked some people’s work better than others’, those people got employed in the nascent newsrooms. Journalism became professionalized. Later, universities started journalism programs and codified instruction for new journalists, professionalizing it even more.

Instead of people informing each other, now the few professionals informed everyone else. And the technology did not allow for everyone else to talk back in the same medium.

The broadcast media, a few large corporations employing professional writers informing millions – with no ability for the receivers of information to fact-check, talk back, ask questions, be a part of the conversation – is an exception in history, something that lasted for just a few decades of the 20th century.

The anomalous 20th century – industrialization

The Industrial Revolution brought about a massive migration of people into big cities. The new type of work required a new type of workforce, one that was literate and more educated. This led to the invention of public schools and the founding of public universities.

In the area of science, many more people became educated enough (and science was not yet too complex or expensive) to start their own surveys, experiments and tinkering. The explosion of research led to an explosion of new journals. Those, too, became expensive to produce and started requiring professional filters – editors. Thus scientific publishing also became professionalized. Not every personal anecdote could make it past the editors any more. Nor could everyone call themselves a scientist – a formal path emerged, ending with a PhD at a university, that ensured that science was done and published by qualified persons only.

By the 1960s we saw the mass adoption of peer review by scientific journals, a practice some journals had tried experimentally a little earlier. Yes, it is that recent! See for example this letter to Physical Review in 1936:

 

Dear Sir,

We (Mr. Rosen and I) had sent you our manuscript for publication and had not authorized you to show it to specialists before it is printed. I see no reason to address the — in any case erroneous — comments of your anonymous expert. On the basis of this incident I prefer to publish the paper elsewhere.

Respectfully,

Albert Einstein

Or this one:

 

John Maddox, former editor of Nature: The Watson and Crick paper was not peer-reviewed by Nature… the paper could not have been refereed: its correctness is self-evident. No referee working in the field … could have kept his mouth shut once he saw the structure…

Migration from small towns into big cities also meant that most people one would meet during the day were strangers. Meeting a stranger was not something extraordinary any more, so the emergence and enforcement of prescribed conduct in cities replaced the need for one-to-one encounters and sizing up strangers using phatic language. Which is why even today phatic language is much more important and prevalent in rural areas, where it aids personal survival, than in urban centers, where more general rules of behavior among strangers emerged (which may partially explain why phatic language is generally associated with conservative ideology and conceptual language with political liberalism, aka the “reality-based community”).

People moving from small hometowns into big cities also led to breaking up of families and communities of trust. One needed to come up with new methods for figuring out who to trust. One obvious place to go was local media. They were stand-ins for village elders, parents, teachers and priests.

If there were many newspapers in town, one would try them all for a while and settle on one that best fit one’s prior worldview. Or one would just continue reading the paper one’s parents read.

But other people read other newspapers and brought their own worldviews into the conversation. This continuous presence of a plurality of views kept everyone’s BS filters in high gear – it was necessary to constantly question and filter all the incoming information in order to choose what to believe and what to dismiss.

The unease with exposure to so many strangers with strange ideas also changed our notions of privacy. Suddenly we craved it. Our letters were now meant for one recipient only, with the understanding they would not be shared. Personal diaries now had locks. After a century of such craving for privacy, we are again returning to more historically traditional notions, much more freely sharing our lives with strangers online.

The anomalous 20th century – cleansing of conceptual language in science and journalism

Until the 20th century we did not see the consolidation of media into large conglomerates, and of course, there were no mass radio or TV until mid-20th century. Not until later in the century did we see the monopolization of local media markets by a single newspaper (competitors going belly-up) which, then, had to serve everyone, so it had to invent the fake “objective” HeSaidSheSaid timid style of reporting in order not to lose customers of various ideological stripes and thus lose advertising revenue.

Professionalising of journalism, coupled with the growth of media giants serving very broad audiences, led to institutionalization of a type of writing that was very much limited to “what’s new”.

The “let me explain” component of journalism fell out of favor, as there was always a faction of the audience that had a problem with the empirical facts – a faction that the company’s finances could not afford to lose. The personal – including the phatic – was carefully eliminated, as it was perceived as unobjective and inviting the criticism of bias. The way for reporters to inject their own opinion into an article was to find a person who thought the same, in order to get the target quote. A defensive (perhaps cowardly) move that became the norm – and, once the audience caught on, one that led to the loss of trust in traditional media.

Reduction of local media to a single newspaper, a couple of local radio stations and a handful of broadcast TV channels (all saying essentially the same thing) left little choice for the audience. With only one source in town, there was no opportunity to filter among a variety of news sources. Thus, many people started unquestioningly accepting what 20th-century-style broadcast media served them.

Just because articles appeared under the banners of big companies did not make them any more trustworthy by definition, but with no alternative it was still better to be poorly informed than not informed at all. Thus, in the 20th century we gradually lost the ability to read everything critically, awed by big names like NYT and BBC and CBS and CNN. Those became the new parents, teachers, tribal elders and priests, the authority figures whose words were taken unquestioningly.

In science, an explosion in funding, not matched by an explosion in job positions, led to an overproduction of PhDs and the rise of a hyper-competitive culture in academia. Writing books became unproductive. The only way to succeed was to keep getting grants, and the only way to do that was to publish very frequently. Everything else had to fall by the wayside.

False measures of journal quality – like the infamous Impact Factor – were used to determine who gets a job and tenure and who falls out of the pipeline. The progress of science led inevitably to specialization and to the development of specialized jargon. Proliferation of expensive journals ensured that nobody but people in highest-level research institutions had access to the literature, so scientists started writing only for each other.

Scientific papers became dense, but also narrowed themselves to only “this is how the world works”. The “this is new” was left out, as the audience already knew it – and it became obvious that a paper would not be published if it did not produce something new, almost by definition.

And the personal was so carefully excised for the purpose of seeming unbiased by human beings that it sometimes seems like the laboratory equipment did all the experiments of its own volition.

So, at the close of the 20th century, we had a situation in which journalism and science, for the first time in history, completely separated from each other. Journalism covered what’s new without providing the explanation and context for new readers just joining the topic. Science covered only explanation and only to one’s peers.

In order to bridge that gap, a whole new profession needed to arise. As scientists understood the last step of the scientific method – communication – to mean only ‘communication to colleagues’, and as regular press was too scared to put truth-values on any statements of fact, the solution was the invention of the science journalist – someone who can read what scientists write and explain that to the lay audience. With mixed success. Science is hard. It takes years to learn enough to be able to report it well. Only a few science journalists gathered that much expertise over the years of writing (and making mistakes on the way).

So, many science journalists fell back on reporting science as news, leaving the explanation out. Their editors helped in that by severely restricting the space – and good science coverage requires ample space.

A good science story should explain what is known by now (science), what the new study adds (news) and why it matters to you (phatic discourse). The lack of space usually led to omission of the context (science) and shortening of what is new (news), leaving only the emotional story intact. Thus, the audience did not learn much – certainly not enough to be able to evaluate next day’s and next week’s news.

This format also led to the choice of stories. It is easy to report in this way if the news is relevant to the audience anyway, e.g., concerning health (the “relevant” stories). It is also easy to report on misconduct of scientists (the “fishy” stories) – which is not strictly science reporting. But it is hard to report on science that is interesting for its own sake (the “cool” stories).

What did the audience get out of this? Scientists are always up to some mischief. And every week they change the story as to what is good or bad for my health. And it is not very fun, entertaining or exciting. No surprise that science as an endeavour slowly started losing the trust of the (American) population, and that it was easy for groups with financial, political or religious interests to push anti-science rhetoric on topics from the hazards of smoking to stem-cell research to evolution to climate change.

At the end of the 20th century, thus, we had a situation in which journalism and science were completely separate endeavors, and the bridge between them – science journalism – was unfortunately operating under the rules of journalism and not science, messing up the popular trust in both.

Back to the Future

It is 2010. The Internet has been around for 30 years, the World Wide Web for 20. It took some time for the tools to develop and spread, but we are obviously undergoing a revolution in communication. I use the word “revolution” because it is one almost by definition – when the means of production change hands, that is a revolution.

The means of production, in this case the technology for easy, cheap and fast dissemination of information, are now potentially in the hands of everyone. When the people formerly known as the audience employ the press tools they have in their possession to inform one another, we call that ‘citizen journalism.’ And some of those citizens possess much greater expertise on the topics they cover than the journalists that cover that same beat. This applies to science as well.

In other words, after the deviation that was the 20th century, we are going back to the way we have evolved as a species to communicate – one-to-one and few-to-few instead of one-to-many. Apart from technology (software instead of talking/handwriting/printing), speed (microseconds instead of days and weeks by stagecoach, railroad or Pony Express, see image above) and the number of people reached (potentially – but rarely – millions simultaneously instead of one person or small group at a time), blogging, social networking and other forms of online writing are nothing new – this is how people have always communicated. Like Montaigne. And the Republic of Letters in the 18th century. And Charles Darwin in the 19th century.

All we are doing now is returning to a more natural, straightforward and honest way of sharing information, just using much more efficient ways of doing it. (Images from Cody Brown)

And not even that – where technology is scarce, analog blogging is alive and well (image: Analog blogger, from AfriGadget).

What about trustworthiness of all that online stuff? Some is and some isn’t to be trusted. It’s up to you to figure out your own filters and criteria, and to look for additional sources, just like our grandparents did when they had a choice of dozens of newspapers published in each of their little towns.

With the gradual return of a more natural system of communication, we got to see additional opinions, the regular fact-checks on the media by experts on the topic, and realized that the mainstream media is not to be trusted.

With the return of a more natural system of communication, we will all have to re-learn how to read critically, find second opinions, evaluate sources. Nothing new is there either – that is what people have been doing for millennia – the 20th century is the exception. We will figure out who to trust by trusting the judgment of people we already trust. Trust is transitive.

Return of the phatic language

What does this all mean for the future of journalism, including science journalism?

The growing number of Web-savvy citizens has developed new methods of establishing the trustworthiness of sources. It is actually the old, pre-20th-century method – relying on individuals, not institutions. Instead of treating WaPo, Fox, MSNBC and NPR as proxies for the father, teacher, preacher and medicine man, we now once again evaluate individuals.

As nobody enters a news site via the front page and looks around, but we all get to individual articles via links and searches, we are relying on bylines under the titles, not on the logos up on top. Just like we were not born trusting NYTimes but learned to trust it because our parents and neighbors did (and then perhaps we read it for some time), we are also not born knowing which individuals to trust. We use the same method – we start with recommendations from people we already trust, then make our own decisions over time.

If you don’t link to your sources, including to scientific papers, you lose trust. If you quote out of context without providing that context, you lose trust. If you hide who you are and where you are coming from – that is cagey and breeds mistrust. Transparency is the new objectivity.

And transparency is necessarily personal, thus often phatic. It shows who you are as a person, your background, your intentions, your mood, your alliances, your social status.

There are many reasons sciencebloggers are more trusted than journalists covering science.

First, they have the scientific expertise that journalists lack – they really know what they are talking about on the topic of their expertise and the audience understands this.

Second, they link out to more, more diverse and more reliable sources.

Third, being digital natives, they are not familiar with the concept of word limits. They start writing, they explain the subject as it needs to be explained, and when they are done explaining they end the post – whatever length it takes to give the subject its due.

Finally, not being trained by j-schools, they never learned not to let their personality shine through their writing. So they gain trust by connecting to their readers – the phatic component of communication.

Much of our communication, both offline and online, is phatic. But that is necessary for building trust. Once the trust is there, the conceptual communication can work. If I follow people I trust on Twitter, I will trust that they trust the sources they link to, so I am likely to click on them. Which is why more and more scientists use Twitter to exchange information (PDF). Trust is transitive.

Scientists, becoming journalists

Good science journalists are rare. Cuts in newsrooms, allocation of too little space for science stories, assigning science stories to non-science journalists – all of these factors have resulted in a loss of quantity and quality of science reporting in the mainstream media.

But being a good science journalist is not impossible. People who take the task seriously can become experts on the topic they cover (and get to a position where they can refuse to cover astronomy if their expertise is evolution) over time. They can become temporary experts if they are given sufficient time to study, instead of being tasked with writing ten stories per day.

With the overproduction of PhDs, many scientists are choosing alternative careers, including many of them becoming science writers and journalists, or Press Information Officers. They thus come into the profession with the expertise already there.

There is not much difference between a research scientist who blogs and thus is an expert on the topic s/he blogs about, and a research scientist who leaves the lab in order to write as a full-time job. They both have scientific expertise and they both love to write or they wouldn’t be doing it.

A blog is software. A medium. One of many. No medium has a higher coefficient of trustworthiness than any other. Despite never going to j-school and writing everything on blogs, I consider myself to be a science writer.

Many science journalists, usually younger though some of the old ones caught on quickly and became good at it (generation is mindset, not age), grok the new media ecosystem in which online collaboration between scientists and journalists is becoming a norm.

At the same time, many active scientists are now using the new tools (the means of production) to do their own communication. As is usually the case with novelty, different people get to it at different rates. Conflicts between 20th- and 21st-century-style thinking inevitably occur. The traditional scientists wish to communicate the old way – in journals, letters to the editor, at conferences. This is the way of gatekeeping they are used to.

But there have been a number of prominent cases of such clashes between old and new models of communication, including the infamous Roosevelts on toilets (the study had nothing to do with US Presidents or toilets, but it is an instructive case – image by Dr. Isis), and several other smaller cases.

The latest one is the Arsenic Bacteria Saga, in which the old-timers do not seem to understand what a ‘blog’ means, and are seemingly completely unaware of the important distinction between ‘blogs’ and ‘scienceblogs’ – the former being online spaces by just about anyone, the latter being blogs written by people who actually know their science and are vetted or peer-reviewed in some way, e.g., at ResearchBlogging.org or Scienceblogging.org, or by virtue of being hand-picked and invited to join one of the science blogging networks (which are often run by traditional media outlets or scientific publishers or societies), or simply by gaining the respect of peers over time.

Case by case, old-time scientists are learning. Note how both in the case of Roosevelts on toilets and the Arsenic bacteria the initially stunned scientists quickly learned and appreciated the new way of communication.

In other words, scientists are slowly starting to get out of the cocoon. Instead of just communicating to their peers behind the closed doors, now they are trying to reach out to the lay audience as well.

As more and more papers are Open Access and can be read by all, they are becoming more readable (as I predicted some years ago). The traditional format of the paper is changing. So scientists are covering the “let me explain” portion better, both in papers and on their own blogs.

They may still be a little clumsy about the “what’s new” part, over-relying on the traditional media to do it for them via press releases and press conferences (see Darwinius and arsenic bacteria for good examples) instead of doing it themselves or taking control of the message (though they do need to rely on MSM to some extent due to the distinction between push and pull strategies as the media brands are still serving for many people as proxies for trustworthy sources).

But most importantly, they are now again adding the phatic aspect to their communication, revealing a lot of their personality on social networks, on blogs, and even some of them venturing into doing it in scientific papers.

By combining all three aspects of good communication, scientists will once again regain the trust of their audience. And what they are starting to do looks more and more like (pre-20th century) journalism.

Journalists, becoming scientists

On the other side of the divide, there is a renewed interest in journalism expanding from just “this is new” to “let me explain how the world works”. There are now efforts to build a future of context, and to design explainers.

If you are not well informed on an issue (perhaps because you are too young to remember when it first began, or the issue just started being relevant to you), following a stream of ‘what is new’ articles will not enlighten you. There is not sufficient information there. There is a lot of tacit knowledge that the writer assumes the readers possess – but many don’t.

There has to be a way for news items to link to some kind of collection of background information – an ‘explainer’. Such an explainer would be a collection of verifiable facts about the topic. A collection of verifiable facts about the way the world works is… scientific information!

With more and more journalists realizing they need to be transparent about where they are coming from, injecting personality into their work in order to build trust, some of that phatic language is starting to seep in, completing the trio of elements of effective communication.

Data Journalism – isn’t this science?

Some of the best journalism of the past – yes, the abominable 20th century – was done when a reporter was given several months to work on a single story, sifting through boxes and boxes of documents. The reporter became the expert on the topic, started noticing patterns, and wrote a story that brought truly new knowledge to the world. That is practically science! Perhaps not the hardest of the hard sciences like physics, but as good as well-done social science like cultural anthropology, sociology or ethnography. There is a system and a method, very much like the scientific method.

Unfortunately, most reporters are not given such luxury. They have to take shortcuts – interviewing a few sources to quote for the story. The sources are, of course, a very small and very unrepresentative sample of the relevant population – from a rolodex. Call a couple of climate scientists, and a couple of denialists, grab a quote from each and stick them into a formulaic article. That is Bad Science as well as Bad Journalism. And now that the people formerly known as audience, including people with expertise on the topic, have the tools to communicate to the world, they often swiftly point out how poorly such articles represent reality.

But today, most of the information, data and documents are digital, not in boxes. They are likely to be online and can be accessed without travel and without getting special permissions (though one may have to steal them – as Wikileaks operates: a perfect example of the new data journalism). Those reams of data can be analyzed by computers to find patterns, as well as by small armies of journalists (and other experts) for patterns and pieces of information that computer programs miss.

This is what bioinformaticists do (and have already built tools to do it – contact them, steal their tools!).

Data journalism. This is what a number of forward-thinking journalists and media organizations are starting to do.

This is science.

On the other hand, a lot of distributed, crowdsourced scientific research, usually called Citizen Science, is in the business of collecting massive amounts of data for analysis. How does that differ from data journalism? Not much?

Look at this scientific paper – Coding Early Naturalists’ Accounts into Long-Term Fish Community Changes in the Adriatic Sea (1800–2000) – is this science or data journalism? It is both.

The two domains of communicating about what is new and how the world works – journalism and science – have fused again. Both are now starting to get done by teams that involve both professionals and amateurs. Both are now led by personalities who are getting well-known in the public due to their phatic communication in a variety of old and new media.

It is important to be aware of the shortness of our lives and thus our natural tendency toward historical myopia. Just because we were born in the 20th century does not mean that the way things were done then is the way things were ‘always done’, or the best way to do things – the pinnacle of cultural and social development. The 20th century was just a strange and deviant blip in the course of history.

As we are leaving the 20th century behind with all of its unusual historical quirks, we are going back to an older model of communicating facts – but with the new tools we can do it much better than ever, including a much broader swath of society – a more democratic system than ever.

By the way, while it’s still cold, the rain has stopped. And that is Metaphorical language…

This article was commissioned by Science Progress and will also appear on their site in 24 hours.

UC Berkeley Genetic Testing Affair: Science vs Science Education – guest post by Dr. Marie-Claire Shanahan

Marie-Claire Shanahan is an Assistant Professor of Science Education at the University of Alberta, in Edmonton, Alberta, Canada. As a former science teacher, she was always surprised by the ways that students talked themselves out of liking science – and she decided to do something about it. She now researches the social and cultural aspects of science and science education, especially those related to language and identity.

Marie-Claire and I first met online, then also in Real World when she attended ScienceOnline 2010, after which I interviewed her for my blog. You can check out her website and follow her on Twitter. Very interested in her scholarly work, I asked her if she would write a guest-post on one of her topics, and she very graciously agreed. Here is the post about the Berkeley genetic testing affair.

Outside of issues related to teaching evolution in schools, the words controversy and science education don’t often come into close contact with one another. It would be even rarer to be reporting on legislative intervention aimed at halting science education activities. So what’s going on with the UC Berkeley genetic testing affair?

News started to surface in May that Berkeley was going to ask incoming first-year and transfer students to send in a DNA swab. The idea was to stimulate discussion between students as part of the yearly On the Same Page program. A heated debate ensued that has ultimately led to proposed state legislation that would bar California’s post-secondary institutions from making unsolicited requests for DNA samples from students. Both the controversy and the legislation are excellently reported by Ferris Jabr at Scientific American here and here.

It would be reasonable to assume that this seems controversial because it involves genetic testing and therefore personal information. But is there more to it than that?

I chatted informally with some friends about the issue. One expressed her divided feelings about it saying (roughly quoted) “It seems like they [university admin] have addressed the ethical concerns well by being clear about the use of the swabs and the confidentiality but something still just doesn’t feel right. There’s still a part of me that shivers just a little bit.”

What is the shiver factor? Genetic testing and the idea that institutions might have access to our DNA do conjure some imaginative science fiction possibilities. So that could be causing the shivers. But from my perspective as a science education researcher, I think there’s also an underlying issue that makes this particular situation feel controversial: despite having science education goals, this looks and feels a lot more like science. That look and feel leads to confusion about how this initiative should be judged both from an ethical perspective and an educational one.

Science and science education are not the same thing (nor should they be). One way to think of them is through activity analysis, paying attention to who is involved, what are their objectives and what are the artefacts (e.g., tools, language, symbols), actions, and rules that those involved generally agree are used to accomplish the goals of the activity. Studies in activity theory emphasize the importance of shared understanding for accomplishing and progressing in any activity. I would argue that science and science education are different (though obviously related) activities. They have, in particular, different objectives and different artefacts, rules and actions that guide and shape them. As participants in one or the other (or both), teachers, parents, students, researchers, administrators have both tacit and explicit understandings of what each activity entails – what are the rules, the acceptable tools and practices and the appropriate language.

This is where the Berkeley project places itself in a fuzzy area. The objectives of the project are clearly stated to be educational. From the On the Same Page website: “we decided that involving students directly and personally in an assessment of genetic characteristics of personal relevance would capture their imaginations and lead to a deeper learning experience.” Okay, that sounds like the same reasons teachers and professors choose to do many activities. Sounds like science education.

But what about the tools? Testing students’ blood type or blood pressure uses tools commonly available in high school labs (or even at the drug store). The tools used here, though, are not commonly available – these samples are being sent to a laboratory for analysis. Participants therefore don’t share the perspective that these are the tools of education. They seem like the tools of science.

What about the language? One of the main publicly accessible sources of information is the On the Same Page website, in particular an FAQ section for students. It starts with the questions: What new things are going on in the scientific community that make this a good time for an educational effort focused on personalized medicine? and Why did Berkeley decide to tackle the topic of Personalized Medicine? These are answered with appeals to educational discourse – to academic strengths, student opportunities, and the stature of Berkeley as an educational center. The agent or actor in the answers to these questions is the university as an educational institution: “This type of broad, scholarly discussion of an important societal issue is what makes Berkeley special. From a learning perspective, our goal is to deliver a program that will enrich our students’ education and help contribute to an informed California citizenry.”

Beside these educational questions, however, are questions that are part of the usual language and processes of science: Will students be asked to provide “informed consent” for this test of their DNA? What about students who are minors? How can you assure the confidentiality and privacy of a student’s genetic information? What will happen to the data from this experiment? Has this project been approved by Berkeley’s Human Subjects Institutional Review Board? These are the questions that appear in human subjects information letters. They make it sound like this is science. The answers to these questions take a different perspective from the ones above. The technical terms are not educational ones but scientific ones. The actor in these responses is neither the educational institution nor the student as an educational participant but the student as a research object: “All students whether they are minors or not will be asked to provide informed consent. They will read and sign a detailed form describing exactly what will be done with their DNA sample, how the information will be used and secured for confidentiality, how this information might benefit them, and what the alternatives are to submitting a sample.”

Anyone who has done human subjects research will recognize this language is almost word for word from typical guidelines for informed consent documents. My consent forms usually don’t deal with DNA samples (usually something much less exotic, such as student writing or oral contributions during class) but the intent is the same. This language sets out the individuals under consideration as the objects of scientific research.

The overall effect is one of a mixed metaphor – is this research or is it teaching? Are the students actually acting in the role of students, or are they the objects of research? What standards should we be using to judge whether this is an appropriate action? The materials posted by UC Berkeley suggest that they believe this should be judged as an educational project. But the reaction of bioethicists and advocacy groups (such as the Council for Responsible Genetics) suggests that it should be judged by research standards.

Why does it matter? Because the ethical considerations are different. As I said above, I don’t usually deal with any materials that would be considered very controversial. I research the way people (including students) write, read, speak and listen in situations related to science. When dealing with students, many of the activities that I use for research could also be used for educational purposes. For example, in a project this year I distributed different versions of scientific reading materials. I asked students to read these in pairs. I tape recorded their conversations and collected their written responses to the text. As a classroom teacher, I have used these same strategies for educational purposes. Tape recording students allows me to listen to the struggles they might have had while reading a text. Collecting their written responses allows me to assess their understanding. Parents would not object to their child’s teacher using these tools for these purposes. When I visit a classroom as a researcher, though, I am judged differently. Parents often do not consent to me collecting their children’s writing. They object especially frequently to my requests to videotape or photograph their children. This is because they rightfully understand educational research as a different activity from education. They use different judgments and expect different standards.

From the sequence of events, it sounds as if Berkeley admin started this project with their own perspective that it was clearly educational, without adequate consideration that, from an outside position, it would be judged from a research perspective. I don’t want to suggest that this whole thing is a simple miscommunication, because there are serious ethical implications related to asking for DNA samples. As people try to figure out how an educational idea ended up in the state legislature, though, I just wanted to add my perspective that some of the controversy might come from that shiver factor – something just doesn’t feel right. One aspect of that feeling might be that this challenges the boundaries of our understanding of the activities of science and science education. The language and the tools and the objectives are mixed, leading to confusion about exactly what standards this should be judged against. As tools that have traditionally been associated with laboratory science become more accessible (as genetic testing is becoming), this boundary is likely to be challenged more and more. Those making the decisions to use these tools for educational, rather than research, purposes need to understand that challenging people’s conceptions of the boundaries between science and science education can and will lead to conflict, and that conflict should be addressed head-on and from the beginning.

Seven Questions….with Yours Truly

Last week, my SciBling Jason Goldman interviewed me for his blog. The questions were not so much about blogging, journalism, Open Access and PLoS (except a little bit at the end) but more about science – how I got into it, what my grad school experiences were like, what I think about doing research on animals, and other such things. Jason posted the interview here, on his blog, on Friday, and he also let me repost it here on my blog as well, under the fold:

Continue reading

Instead of your -80 freezer defrosting and ruining years of your research

Deposit them with people who guarantee your samples will remain frozen:

What is the real purpose of a graduate education in science? (video)

The new issue of Journal of Science Communication is now published

The new issue of Journal of Science Communication is now online (Open Access, so you can download all PDFs for free). Apart from the article on blogging that we already dissected at length, this issue has a number of interesting articles, reviews, perspectives and papers:
Users and peers. From citizen science to P2P science:

This introduction presents the essays belonging to the JCOM special issue on User-led and peer-to-peer science. It also draws a first map of the main problems we need to investigate when we face this new and emerging phenomenon. Web tools are enacting and facilitating new ways for lay people to interact with scientists or to cooperate with each other, but cultural and political changes are also at play. What happens to expertise, knowledge production and relations between scientific institutions and society when lay people or non-scientists go online and engage in scientific activities? From science blogging and social networks to garage biology and open tools for user-led research, P2P science challenges many assumptions about public participation in scientific knowledge production. And it calls for a radical and perhaps new kind of openness of scientific practices towards society.

Changing the meaning of peer-to-peer? Exploring online comment spaces as sites of negotiated expertise:

This study examines the nature of peer-to-peer interactions in public online comment spaces. From a theoretical perspective of boundary-work and expertise, the comments posted in response to three health sciences news articles from a national newspaper are explored to determine whether both scientific and personal expertise are recognized and taken up in discussion. Posts were analysed for both explicit claims to expertise and implicit claims embedded in discourse. The analysis suggests that while both scientific and personal expertise are proffered by commenters, it is scientific expertise that is privileged. Those expressing scientific expertise receive greater recognition of the value of their posts. Contributors seeking to share personal expertise are found to engage in scientisation to position themselves as worthwhile experts. Findings suggest that despite the possibilities afforded by online comments for a broader vision of what peer-to-peer interaction means, this possibility is not realized.

The public production and sharing of medical information. An Australian perspective:

There is a wealth of medical information now available to the public through various sources that are not necessarily controlled by medical or healthcare professionals. In Australia there has been a strong movement in the health consumer arena of consumer-led sharing and production of medical information and in healthcare decision-making. This has led to empowerment of the public as well as increased knowledge-sharing. There are some successful initiatives and strategies on consumer- and public-led sharing of medical information, including the formation of specialised consumer groups, independent medical information organisations, consumer peer tutoring, and email lists and consumer networking events. With well-organised public initiatives and networks, there tends to be fairly balanced information being shared. However, there needs to be caution about the use of publicly available scientific information to further the agenda of special-interest groups and lobbying groups to advance often biased and unproven opinions or for scaremongering. With the adoption of more accountability of medical research, and the increased public scrutiny of private and public research, the validity and quality of medical information reaching the public is achieving higher standards.

Social network science: pedagogy, dialogue, deliberation:

The online world constitutes an ever-expanding store and incubator for scientific information. It is also a social space where forms of creative interaction engender new ways of approaching science. Critically, the web is not only a repository of knowledge but a means with which to experience, interact and even supplement this bank. Social Network Sites are a key feature of such activity. This paper explores the potential for Social Network Sites (SNS) as an innovative pedagogical tool that precipitate the ‘incidental learner’. I suggest that these online spaces, characterised by informality, open-access, user input and widespread popularity, offer a potentially indispensable means of furthering the public understanding of science; and significantly one that is rooted in dialogue.

Open science: policy implications for the evolving phenomenon of user-led scientific innovation:

From contributions of astronomy data and DNA sequences to disease treatment research, scientific activity by non-scientists is a real and emergent phenomenon, and raising policy questions. This involvement in science can be understood as an issue of access to publications, code, and data that facilitates public engagement in the research process, thus appropriate policy to support the associated welfare enhancing benefits is essential. Current legal barriers to citizen participation can be alleviated by scientists’ use of the “Reproducible Research Standard,” thus making the literature, data, and code associated with scientific results accessible. The enterprise of science is undergoing deep and fundamental changes, particularly in how scientists obtain results and share their work: the promise of open research dissemination held by the Internet is gradually being fulfilled by scientists. Contributions to science from beyond the ivory tower are forcing a rethinking of traditional models of knowledge generation, evaluation, and communication. The notion of a scientific “peer” is blurred with the advent of lay contributions to science raising questions regarding the concepts of peer-review and recognition. New collaborative models are emerging around both open scientific software and the generation of scientific discoveries that bear a similarity to open innovation models in other settings. Public engagement in science can be understood as an issue of access to knowledge for public involvement in the research process, facilitated by appropriate policy to support the welfare enhancing benefits deriving from citizen-science.

Googling your genes: personal genomics and the discourse of citizen bioscience in the network age:

In this essay, I argue that the rise of personal genomics is technologically, economically, and most importantly, discursively tied to the rise of network subjectivity, an imperative of which is an understanding of self as always already a subject in the network. I illustrate how personal genomics takes full advantage of social media technology and network subjectivity to advertise a new way of doing research that emphasizes collaboration between researchers and its members. Sharing one’s genetic information is considered to be an act of citizenship, precisely because it is good for the network. Here members are encouraged to think of themselves as dividuals, or nodes, in the network and their actions acquire value based on that imperative. Therefore, citizen bioscience is intricately tied, both in discourse and practices, to the growth of the network in the age of new media.

Special issue on peer-to-peer and user-led science: invited comments:

In this commentary, we collected three essays from authors coming from different perspectives. They analyse the problem of power, participation and cooperation in projects of production of scientific knowledge held by users or peers: persons who do not belong to the institutionalised scientific community. These contributions are intended to give a more political and critical point of view on the themes developed and analysed in the research articles of this JCOM special issue on Peer-to-peer and user-led science.
Michel Bauwens, Christopher Kelty and Mathieu O’Neil write about different aspects of P2P science. Nevertheless, the three worlds they delve into share the “aggressively active” attitude of the citizens who inhabit them. Those citizens claim to be part of the scientific process, and they use practices as heterogeneous as online peer-production of scientific knowledge, garage biology practiced with a hacker twist, or the crowdsourced creation of an encyclopedia page. All these claims and practices point to a problem in the current distribution of power. The relations between experts and non-experts are challenged by the rise of peer-to-peer science. Furthermore, the horizontal communities which live inside and outside the Net are not frictionless. Within peer-production mechanisms, the balance of power is an important issue which has to be carefully taken into account.

Is there something like a peer to peer science?:

How will peer to peer infrastructures, and the underlying intersubjective and ethical relational model that is implied by it, affect scientific practice? Are peer-to-peer forms of cooperation, based on open and free input of voluntary contributors, participatory processes of governance, and universal availability of the output, more productive than centralized alternatives? In this short introduction, Michel Bauwens reviews a number of open and free, participatory and commons oriented practices that are emerging in scientific research and practice, but which ultimately point to a more profound epistemological revolution linked to increased participatory consciousness between the scientist and his human, organic and inorganic research material.

Outlaw, hackers, victorian amateurs: diagnosing public participation in the life sciences today:

This essay reflects on three figures that can be used to make sense of the changing nature of public participation in the life sciences today: outlaws, hackers and Victorian gentlemen. Occasioned by a symposium held at UCLA (Outlaw Biology: Public Participation in the Age of Big Bio), the essay introduces several different modes of participation (DIY Bio, Bio Art, At home clinical genetics, patient advocacy and others) and makes three points: 1) that public participation is first a problem of legitimacy, not legality or safety; 2) that public participation is itself enabled by and thrives on the infrastructure of mainstream biology; and 3) that we need a new set of concepts (other than inside/outside) for describing the nature of public participation in biological research and innovation today.

Shirky and Sanger, or the costs of crowdsourcing:

Online knowledge production sites do not rely on isolated experts but on collaborative processes, on the wisdom of the group or “crowd”. Some authors have argued that it is possible to combine traditional or credentialled expertise with collective production; others believe that traditional expertise’s focus on correctness has been superseded by the affordances of digital networking, such as re-use and verifiability. This paper examines the costs of two kinds of “crowdsourced” encyclopedic projects: Citizendium, based on the work of credentialled and identified experts, faces a recruitment deficit; in contrast Wikipedia has proved wildly popular, but anti-credentialism and anonymity result in uncertainty, irresponsibility, the development of cliques and the growing importance of pseudo-legal competencies for conflict resolution. Finally the paper reflects on the wider social implications of focusing on what experts are rather than on what they are for.

The unsustainable Makers:

The Makers is the latest novel of the American science fiction writer, blogger and Silicon Valley intellectual Cory Doctorow. Set in the 2010s, the novel describes the possible impact of the present trend towards the migration of modes of production and organization that have emerged online into the sphere of material production. Called New Work, this movement is indebted to a new maker culture that attracts people into a kind of neo-artisan, high tech mode of production. The question is: can a corporate-funded New Work movement be sustainable? Doctorow seems to suggest that a capitalist economy of abundance is unsustainable because it tends to restrict the reach of its value flows to a privileged managerial elite.

Aves 3D

Aves 3D is a ‘three dimensional database of avian skeletal morphology’ and it is awesome!
This is an NSF-funded project led by Leon Claessens, Scott Edwards and Abby Drake. What they are doing is making surface scans of various bones of different bird species and placing the 3D scans on the website for everyone to see and use. With simple use of the mouse or arrow buttons, one can move, zoom and rotate each image any way one wants.
The collection is growing steadily and already contains some very interesting bones from a number of species, both extinct and extant. You can see examples of bones of the dodo or the Diatryma gigantea (aka Gaston’s Bird), as well as many skulls and sternums and various limb bones of currently existing species.
The database is searchable by cladogram, scientific name, common name, skeletal element, geological era, geographical location or specimen number.
Most of the actual scanning is done by undergraduate students, and the database is already being used for several scientific projects. You can get involved and help build the database, you can use the scans for teaching and research, or you can just go and have fun rotating the cool-looking bird bones.

Frontiers of Knowledge Award goes to Robert J. Lefkowitz for G-protein coupled receptors

I had the good fortune to hear Dr. Lefkowitz speak once. Great guy. From the press release:

The prestigious BBVA Foundation Frontiers of Knowledge Award in the Biomedicine category goes this year to Robert J. Lefkowitz, MD, James B. Duke Professor of Medicine and Biochemistry and a Howard Hughes Medical Institute (HHMI) investigator at Duke University Medical Center.
This is only the second year the award has been given.
Dr. Lefkowitz’s research has affected millions of cardiac and other patients worldwide. Lefkowitz proved the existence of, isolated, characterized and still studies G-protein-coupled receptors (GPCRs).
The receptors, which are located on the surface of the membranes that surround cells, are the targets of almost half of the drugs on the market today, including beta blockers for heart disease, antihistamines and ulcer medications.
Lefkowitz, a Duke faculty member since 1973, also investigates related enzymes, proteins, and signaling pathways and continues to learn all he can about these pivotal receptors.
“I am surprised, delighted and honored by the award, and am honored to be in the company of Joan Massagué, a fellow HHMI investigator who won last year,” said Lefkowitz, who is also a Duke professor of immunology and a basic research cardiologist in the Duke Heart Center.
“While it is a relatively new award, I know it is a very distinguished award, and I am delighted to be the recipient.”
The BBVA Frontiers of Knowledge Award in Biomedicine provides the winner a cash prize of 400,000 euros (about $563,400). The award, organized by the BBVA Foundation in partnership with Spain’s National Research Council, was announced at 11 a.m. Madrid time on Jan. 27.
Dr. Lefkowitz is being awarded the prize for the work he has done since the beginning of his career and includes his ongoing studies of GPCRs and other key receptors.
His research group first identified, purified, and cloned the genes for these receptors in the 1970s and 1980s, and revealed the structure of the receptors as well as their functions and regulation. This work facilitated and fundamentally altered the way in which numerous therapeutic agents have been developed.
Lefkowitz is also extremely proud of his mentoring work and of the students and fellows he has worked with over the years, many of whom have gone on to run successful laboratories and uncover their own discoveries about GPCRs and other receptors.
The Biomedicine Award honors contributions that significantly advance the stock of knowledge in the biomedicine field because of their importance and originality.
The BBVA Foundation Frontiers of Knowledge Awards seek to recognize and encourage world-class research at the international level. Similar to the Nobel Prizes in the breadth of the scientific and artistic areas they cover, the awards distribute an annual total of 3.2 million euros to deserving winners.

What is ‘Investigative Science Journalism’?

Background
When Futurity.org, a new science news service, was launched last week, there was quite a lot of reaction online.
Some greeted it with approval, others with a “wait and see” attitude.
Some disliked the elitism, as the site is limited only to the self-proclaimed “top” universities (although it is possible that research in such places, where people are likely to be well funded, may be the least creative).
But one person – notably, a journalist – exclaimed on Twitter: “propaganda!”, which led to a discussion that revealed the journalist’s notion that press releases are automatically suspect and scientists are never to be trusted and their institutions even less. That was a very anti-science sentiment from a professional science journalist, some of us thought.
This exchange reminded me of a number of prior debates between the traditional Old Media journalists and the modern New Media journalists about the very definition of ‘journalism’. The traditional journalists are fighting to redefine it in the narrowest possible way, one that keeps them in the position of gatekeepers (like the newly proposed shield law that defines a journalist as someone who gets paid by an Old Media organization, thus NOT protecting citizen journalists, accidental journalists, bloggers, etc.), while the new ones are observing the way the world is changing and trying to come up with new definitions that better reflect it (and often go too far in the other direction – defining everything broadcast by anyone, via any medium, to an audience of more than one person as journalism, including the crossword puzzle in a newspaper and the silliest YouTube video).
One of the frequently heard retorts in the “you’ll miss us when we’re gone” genre of defensiveness by the old guard is the sleight of hand in which they suddenly, in mid-stream of the discussion, redefine journalism to mean only investigative journalism. This usually comes up in the form of the “who will report from the school board meetings” question (to which the obvious answer is: “actually, the bloggers are already doing it a lot, since the old media quit doing it decades ago”).
Of course, investigative journalism is just one of many forms under the rubric of ‘journalism’. And, if you actually go and buy a copy of your local newspaper today (it still exists in some places, on tree-derived paper, believe me), you are likely to find exactly zero examples of investigative journalism in it. Tomorrow – the same. Every now and then one appears in the paper, and then it is often well done, but the occasions are rare and getting even more rare as investigative reporters have been cut from many a newsroom over the past few decades, and even more rapidly over the last several months.
So, what is ‘Investigative Science Journalism’?
So, this train of thought brought me to the question, again, of what is ‘investigative journalism’ in science. And I was not perfectly happy with what I wrote about this question before. I had to think some more. But before doing all the thinking myself, I thought I’d try to see what others think. So I tweeted the question in several different ways and got a lot of interesting responses:

Me: What is, exactly, ‘investigative science reporting’?

@davemunger: @BoraZ To me, it means going beyond looking at a single study to really understand a scientific concept. Diff from traditional “inv. journo”

@szvan: @davemunger @BoraZ And looking at methodology, statistical analysis, etc. to determine whether claims made match what was studied.

@LeeBillings: @BoraZ Re: “investigative science reporting,” isn’t it like all other investigative reporting where you dig deep and challenge your sources?

@Melhi: @BoraZ I thnk it means, “we cut/pasted from Wiki, all by ourselves.” Seems to be what it means when “scientific” is removed from the term.

Me: @LeeBillings clarify: What’s the story about? dig deep into what? who are the sources? why are you assuming they need to be challenged?

@soychemist: @BoraZ Any instance in which a reporter tries to uncover scientific information that has been concealed or distorted, using rigorous methods

@john_s_wilkins: @BoraZ Reporting on investigative science, no doubt.

@LeeBillings: @BoraZ ?s you’re asking only make sense in context of a specific story, not in context of defining “sci investigative journalism” as a whole

@LeeBillings: @BoraZ 1/2 but typically, the goal is to find out what’s true, and communicate it. you dig into primary literature & interview tons of ppl

@LeeBillings: @BoraZ 2/2 you don’t assume they need to challenged. you *know* they need to be challenged based on your in-depth research into primary lit

Me: When futurity.org was released, a journo yelled “propaganda”! Does every press release need to be investigated? Challenged?

Me: Are scientists presumed to be liars unless proven otherwise? All of them?

@NerdyChristie: Usually. Unless you’re studying how herbal tea makes you a supergod. RT @BoraZ: Are scientists presumed to be liars unless proven otherwise?

@szvan: @BoraZ Not liars but not inherently less open to bias than anyone else. Some wrongs are lies. Some are errors.

Me: Are journalists capable of uncovering scientific misconduct at all? All of those were uncovered by other scientists, I recall…

@lippard: @BoraZ Didn’t journalist Brian Deer do the investigative work to expose Andrew Wakefield’s MMR-autism data manipulation?

@JATetro: @BoraZ To be honest, there are some very good journalists out there who can spot misconduct but without backing from a source, it’s liable.

Me: @BoraZ: @JATetro yes, they need scientists to do the actual investigating, then report on what scientists discovered – fraud, plagiarism etc.

@JATetro: @BoraZ So it’s not the journalists fault, really. They do their job as well as possible but without our help, there’s little they can do.

@LabSpaces: @JATetro @BoraZ Actual scientists cost too much.They’re a luxury, and especially in these times, it’s hard for pubs. to justify having 1

@JATetro: @LabSpaces @BoraZ Apparently it’s hard for universities to have them as well…not a prof or anything but damn it’s ridiculous.

@LabSpaces: @JATetro @BoraZ I dunno, our PR dept. does a great job interacting with scientists and getting the right info out, but I guess that’s diff.

@JATetro: @LabSpaces @BoraZ Oh, the media people at the U are great. It’s the administrators that seem to forget who keep the students comin’.

Me: Isn’t investigating nature, via experimentation, and publishing the findings in a journal = scientific investigative reporting?

@LeeBillings: @BoraZ 1/2 I’d say that’s performing peer-reviewed scientific research, not doing investigative science journalism.

@LeeBillings: @BoraZ 2/2 No room to address your ?-torrent. What are you driving at, anyway? You think sci journos can’t/don’t do investigative stuff?

@LouiseJJohnson: RT @BoraZ Isn’t investigating nature, via experimentation, & publishing findings in a journal, scientific investigative reporting?

@mcmoots: @BoraZ “Journalism” usually means you report the results of your investigations to the public; scientists report to a technical community.

Me: @BoraZ: @mcmoots does the size and expertise of audience determine what is journalism, what is not? Is it changing these days?

Me: @BoraZ: Why is investigating words called ‘investigative journalism’, but investigating reality, with much more rigorous methods, is not?

@LeeBillings: @BoraZ 1 more thing: A press release isn’t a story–it should inspire journos to look deeper. Sometimes that deeper look reveals PR to be BS

Me: @BoraZ: @LeeBillings Journal article is reporting findings of investigation. Press release is 2ndary. Journo article is 3tiary. Each diff audience.

@LeeBillings: @BoraZ Glad you raised ? of audience, since relevant to yr ? of “words” & “reality.” Words make reality for audiences, some more than others

Me: @BoraZ: Journos investigate people, parse words. Scientists investigate nature. What is more worthy?

@lippard: @BoraZ I would say that there are instances of investigative journalism that have had more value than some instances of scientific research.

Me: @BoraZ: @lippard possible, but that is investigating the rare instances of misconduct by people, not investigating the natural reality. Science?

@john_s_wilkins: @BoraZ You’re asking this of a profession that thinks it needs to “give the other side” when reporting on science, i.e., quacks

@LeeBillings: @BoraZ Twitter is useful tool, but probably not best way to interview for the story you seem to be after, as responses lack depth and nuance

@LeeBillings: @BoraZ Still looking forward to reading your resulting story, of course

Me: @BoraZ: @LeeBillings you can add longer responses on FriendFeed: http://friendfeed.com/coturnix that’s what it’s for

@1seahorse1: @BoraZ Do you mean that I have to be nostalgic about my ape tribe and life in caves ? :-)

@TyeArnett: @BoraZ parsing data can be as dangerous as parsing words sometimes

@ccziv: @BoraZ Do not underestimate or devalue the importance of words, ever.

This shows that different people have very different ideas of what ‘investigative reporting’ is and have even more difficulty figuring out how that applies to science! Let’s go nice and slow now and explore this a little bit more.
First, I think that what Dave meant in his first tweet -

@davemunger: @BoraZ To me, it means going beyond looking at a single study to really understand a scientific concept. Diff from traditional “inv. journo”

- is not ‘investigative reporting’ but ‘news analysis’ (again, see my attempt at classification), something akin to ‘explainers’ done occasionally by the mainstream media (think of This American Life on public radio and their ‘Giant Pool of Money‘ explainer for a great recent example). It is the equivalent of a Review Article in a scientific journal, but aimed at a broader audience and not assuming existing background knowledge and expertise.
The different worlds of journalists and scientists
This discussion, as well as many similar discussions we had in the past, uncovers some interesting differences between the way journalists and scientists think about ‘investigative’ in the context of reporting.
Journalists, when investigating, investigate people, almost exclusively. Scientists are much more open to including other things under this rubric, as they are interested in investigating the world.
Journalists focus almost entirely on words, i.e., what people say. In other words, they are interested mainly in the process and in what the words reveal as to who is winning and who is losing in some imaginary (or sometimes real) game. Scientists are interested in the results of the process, obtained by any means, only one of which is through people’s utterances – they are interested in investigating and uncovering the facts.
Journalists display an inordinate amount of skepticism – even deep cynicism – about anyone’s honesty. Everyone’s a liar unless proven not to be. Scientists, knowing themselves, knowing their colleagues, knowing the culture of science where 100% honesty and trust are the key, knowing that exposure of even the tiniest dishonesty is likely The End of a scientific career, tend to trust scientists a great deal more. On the other hand, scientists are deeply suspicious of people who do not abide by high standards of the scientific community, and The List of those who, due to track record, should be mistrusted the most is topped by – journalists.
This explains why scientists generally see Futurity.org as an interesting method of providing scientific information to the public, assuming a priori, knowing the track record of these institutions and what kind of reputation is at stake, that most or all of it will be reliable, while a journalist exclaims “propaganda”.
The Question of Trust
In this light, it is very instructive to read this post by a young science journalist, and the subsequent FriendFeed discussion of it. It is difficult for people outside of science to understand who is “inside” and thus to be trusted and who is not.
Those on the “inside”, the scientists, are already swimming in these waters and know instantly who is to be trusted and who is not. Scientists know that Lynn Margulis was outside (untrusted) at first, inside (trusted) later and outside (untrusted) today again. Scientists know that James Lovelock or Deepak Chopra or Rupert Sheldrake are outside, always were and always will be, and are not to be trusted. Journalists can figure this out by asking, but then they need to figure out whose answer to trust! Who is inside and trusted to say who else is inside and trusted? If your first point of entry is the wrong person, all the “sources” you interview will be wackos.
Unfortunately the mistrust by journalists is often ‘schematic’ – not based on experience or on investigating the actual facts. They have a schema in their minds as to who is likely to lie, who is likely to use weaselly language, who can generally be trusted, etc. They use this rule-of-thumb when interviewing criminals, corrupt cops (“liars”), politicians, lawyers, CEOs (“weaselly words”), other journalists (“trustworthy”) and yes, scientists (“suspicious pointy-heads with hard-to-uncover financial motives”).
The automatic use of such “rules” is why so many D.C. reporters (the so-called Village) did not understand (and some still do not understand) that someone who is supposed to be in the “use weaselly language” column – the politicians – should actually have been in the “lying whenever they open their mouths” column for the eight years of Bush rule (or, to be fair, the last 30 years). It did not occur to them to fact-check what Republicans said, hastily move them to the appropriate “chronic liars” category, and report accordingly. They could not fathom that someone like The President would actually straight-out lie. Every sentence. Every day. Nobody likes being shown to be naive, but nobody likes being lied to either. For many of them, the need to appear savvy (the opposite of naive) overrode the need to reveal that they had been lied to and had fallen for it (“What are you saying? Can’t be possible. They are such nice guys when they pat my back at a cocktail party over in The Old Boys Club Cafe – they wouldn’t lie to me!”). And many in their audience are in the same mindset – finding it impossible (as that takes courage and humility) to admit to themselves that they were so naive they fell for such lies from such high places (both the ruling party and their loyal stenographers). And we all suffered because of it.
The heavy reliance on such rules or mental schemas by journalists is often due to their self-awareness about the lack of knowledge and expertise on the topic they are covering. They just don’t know who to trust, because they are not capable of uncovering the underlying facts and thus figuring out for themselves who is telling the truth and who is lying (not to mention that this would require, gasp, work instead of hanging out at cocktail parties). To cover up the ignorance and make it difficult for the audience to reveal it, they strongly resist the calls to provide links to more information and especially to their source documents.
Thus He Said She Said journalism is a great way for them to a) focus on words, people, process and ‘horse-race’ instead of facts, b) hide their ignorance of the underlying facts, c) show their savvy by “making both sides angry” which, in some sick twist, they think means they are doing a good job (no, it means all readers saw through you and are disgusted by your unprofessionalism). Nowhere does that show as clearly as when they cover science.
A more systematic investigation into ‘investigation’
Now that I’ve raised everyone’s ire, let me calm down again and try to use this blog post the way bloggers often do – as a way to clarify thoughts through writing. I am no expert on this topic, but I am interested, I read a lot about it, blog about it a lot, and want to hear the responses in the comments. Let me try to systematize what I think ‘investigative reporting’ is in general and then apply that to three specific cases: 1) a scientist investigating nature and reporting about it in a journal, 2) a journalist investigating scientists and their work and reporting about it in a media outlet, and 3) a science blogger investigating the first two and reporting on how good or bad a job each of them did.
A few months ago, I defined ‘investigative journalism’ like this:

Investigative reporting is uncovering data and information that does not want to be uncovered.

Let’s see how that works in practice.
Steps in Investigative Reporting:
1) Someone gets a hunch, a whiff, a tip from someone, or an intuition (or orders from the boss to take a look) that some information exists that is hidden from the public.
2) That someone then uses a whole suite of methods to discover that secret information, often against the agents that resist the idea of that information becoming available to the public.
3) That someone then puts all of the gathered information in one place and looks for patterns, overarching themes, connections and figures out what it all means.
4) That someone then writes an article, with a specific audience in mind, showing to the public the previously secret information (often including all of it – the entire raw data sets or documents or transcripts) and explaining what it means.
5) That someone then sends the article to the proper venue where it undergoes an editorial process.
6) If accepted for publication, the article gets published.
7) The article gets a life of its own – people read (or listen/view) it, comment, give feedback, or follow up with investigation digging up more information that is still not public (so the cycle repeats).
Case I: Scientist
1) Someone gets a hunch, a whiff, a tip from someone, or an intuition (or orders from the boss to take a look) that some information exists that is hidden from the public.
The keeper of the secret information is Nature herself. The researcher can get a hunch about the existence of hidden information in several different ways:
- delving deep into the literature, it becomes apparent that there are holes – missing information that nobody reported on yet, suggesting that nobody uncovered it yet.
- doing research and getting unexpected results points one to the fact that there is missing information needed to explain those funky results.
- going out into nature and observing something that, upon digging through the literature, one finds has not been explained yet.
- getting a photocopy of descriptions of three experiments from the last grant proposal from your PI, with the message “Do this”. A great method for introducing high school and undergraduate students to research, and perhaps for getting a brand-new Masters student started (of course, regular discussions of the progress are needed). Unfortunately, some PIs continue doing this to their PhD students and even postdocs, instead of giving them the freedom to be creative.
2) That someone then uses a whole suite of methods to discover that secret information, often against the agents that resist the idea of that information becoming available to the public.
The scientific method includes a variety of methods for wresting secret information out of Nature: observations, experiments, brute-force Big Science, natural experiments, statistics, mathematical modeling, etc. It is not easy to get this information from Nature as she resists. One has to be creative and innovative in designing tricks to get reliable data from her.
3) That someone then puts all of the gathered information in one place and looks for patterns, overarching themes, connections and figures out what it all means.
All the collected data from a series of observations/experiments are put together, statistically analyzed, visualized (which sometimes leads to additional statistical analyses as visualization may point out phenomena not readily gleaned from raw numbers) and a common theme emerges (if it doesn’t – more work needs to be done).
4) That someone then writes an article, with a specific audience in mind, showing to the public the previously secret information (often including all of it – the entire raw data sets or documents or transcripts) and explaining what it means.
There are three potential audiences for the findings of the research: experts in one’s field, other scientists, and lay audience (which may include policy-makers or political-action organizations, or journalists, or teachers, or physicians, etc.).
The experts in one’s field are the most important audience for most of research. The proper venue to publish for this audience is a scientific journal of a narrow scope (usually a society journal) that is read by all the experts in the same field. The article can be dense, using the technical lingo, containing all the information needed for replication and further testing of the information and should, in principle, contain all the raw data.
The scientific community as a whole as the target audience is somewhat baffling – on one hand, some of them are also experts in the field; on the other hand, all the rest are essentially a lay audience. It is neither-nor. Why target the scientific community as an audience then? Because the venues for this are GlamourMagz, and publishing in these is good for one’s career and fame. The format in which such papers are written is great for scientists in non-related disciplines – it tells a story – but it is extremely frustrating for same-field researchers, as there is not sufficient detail (or data) to replicate, re-test or follow up on the described research. Publishing this way makes you known to a lot more scientists, but tends to alienate your closest colleagues, who are frustrated by the lack of information in your report.
The lay audience is an important audience for some types of research – ones that impact people’s personal decisions about their health or about taking care of the environment, ones that can have impact on policy, ones that are useful to know by health care providers or science educators, or ones that are so cool (e.g., new fossils of dinosaurs or, erm…Ida) that they will excite the public about science.
Many scientists are excellent and exciting communicators and can speak directly to the audience (online on blogs/podcasts/videos or offline in public lectures or science cafes), or will gladly agree to do interviews (TV, radio, newspapers, magazines) about their findings. Those researchers who know they are not exciting communicators, or do not like to be in public, or are too busy, or have been burned by previous interactions with the media, tend to leave the communication with lay audiences to professionals – the press officers at their institutions.
While we have all screamed every now and then at some blatantly bad press releases (especially the titles imposed by editors), there has been a steady, gradual improvement in their quality over the years. One possible explanation is that scientists who fall out of the pipeline – there are now so many PhDs and so few academic jobs – have started replacing English majors and j-school majors in these positions. More and more institutions now have science-trained press officers who actually understand what they are writing about. Thus, there is less hype and more and better explanation of the results of scientific investigation. Of course, they tend to be excellent writers as well, a talent that comes with love and practice and does not necessitate a degree in English or Communications.
5) That someone then sends the article to the proper venue where it undergoes an editorial process.
The first draft of the article is usually co-written and co-edited by a number of co-authors who “peer-review” each other during the process. That draft is then (2nd peer-review) usually given to other lab-members, collaborators, friends and colleagues to elicit their opinion. Their feedback is incorporated into the improved draft which is then sent to the appropriate scientific journal where the editor sends the manuscript to anywhere between one and several experts in the field, usually kept anonymous, for the 3rd (and “official”) peer-review. This may then go through two or three cycles before the reviewers are satisfied with the edits and changes and recommend to the editor that the paper be published (or not, in which case the whole process gets repeated at lesser and lesser and lesser journals…until the paper is either finally published or abandoned or self-published on a website).
6) If accepted for publication, the article gets published.
Champagne time!
Then, next morning, back to the lab – trying to uncover more information.
7) The article gets a life of its own – people read (or listen/view) it, comment, give feedback, or follow up with investigation digging up more information that is still not public (so the cycle repeats).
After Nature closely guarded her secrets for billions of years, and after intrepid investigators snatched the secret information from her over weeks, months, years or decades of hard and creative work, the information is finally made public. The publication date is the date of birth for that information, the moment when its life begins. Nobody can predict what kind of life it will have at that point. It takes years to watch it grow and develop and mature and spawn.
People download it and read it, think about it, talk about it, interact with it, blog about it and, most importantly, try to replicate, re-test and follow up on the information in order to uncover even more information.
If that is not ‘investigative reporting’ at its best, I don’t know what is.
Case II: Science Journalist
1) Someone gets a hunch, a whiff, a tip from someone, or an intuition (or orders from the boss to take a look) that some information exists that is hidden from the public.
The hidden information, in this case, is most likely to be man-made information – documents, human actions, human words. It is especially deemed worthy of investigation if some wrong-doing is suspected.
2) That someone then uses a whole suite of methods to discover that secret information, often against the agents that resist the idea of that information becoming available to the public.
As the journalist cannot “go direct” and investigate nature directly (not having the relevant training, expertise, infrastructure, funding, manpower, equipment, etc.), the only remaining method is to investigate indirectly. The usual indirect method for journalists is to ask people – a very, very, very unreliable way of getting information.
Since investigating the facts about nature is outside the scope of journalists’ expertise, they usually investigate the behavior and conduct of scientists. This is “investigative meta-science reporting”. In a sense, there is not much difference between investigating potential misconduct by scientists and misconduct by any other group of people. The main difference is that the business of science is facts about the way the world works; thus knowing who got the facts right and who got them wrong is important, and knowing who misrepresents lies as facts is even more important.
Unfortunately, due to lack of scientific expertise, journalists find this kind of investigation very difficult – they have to rely on the statements of scientists as to the veracity of other scientists’ facts or claims – something they are not in position to verify directly. If they ask the wrong person – a quack, for example – they will follow all the wrong leads.
Thus, the usual fall-back is the He Said She Said model of journalism: reporting who said what, not committing to any side, not evaluating the truth-claims of any side, and hoping that the (also science-uneducated) audience will be able to figure it out for itself.
Since they cannot evaluate the truth-claims about Nature that scientists make, journalists have to use proxy mechanisms to uncover misconduct, e.g., discover other unseemly behaviors by the same actors, unrelated to the research itself. Thus discovering instances of lying, or financial ties, is the only way a journalist can start guessing as to who can be trusted, and then hope that the person who lies about his/her finances is also lying about facts about Nature – a correlation that is hard to prove and is actually quite unlikely except in rare instances of industry/lobby scientists-for-hire.
The actual research misconduct – fudging data, plagiarism, etc – can be uncovered only by other scientists. And they do it whenever they suspect it, and they report the findings in various ways. The traditional method of sending a letter to the editor of the journal that published the suspect paper is so ridiculously difficult that many are now pursuing other venues, be it by notifying a journalist, or going direct, on a blog, or, if the journal is enlightened (COI – see my Profile), by posting comments on the paper itself.
3) That someone then puts all of the gathered information in one place and looks for patterns, overarching themes, connections and figures out what it all means.
Once all the information is gathered in one place, any intelligent person can find patterns. Scientific expertise is not usually necessary for this step. Thus, once the journalist manages to gather all the information (the hard part), he/she is perfectly capable of figuring out the story (the easy part).
4) That someone then writes an article, with a specific audience in mind, showing to the public the previously secret information (often including all of it – the entire raw data sets or documents or transcripts) and explaining what it means.
Journalists have an advantage here – they tend to be good with language and at writing a gripping story. If the underlying information is correct, and the conclusions are clear, and the journalist is not afraid to state clearly who is telling the truth and who is lying, the article should be good.
5) That someone then sends the article to the proper venue where it undergoes an editorial process.
The editor who comes up with titles usually screws up this step. Otherwise, especially if nobody cuts out important parts due to length limits, the article should be fine. Hopefully, the venue targets the relevant audience – either experts (who can then police their own) or the general public (who can exert pressure on the powers-that-be).
6) If accepted for publication, the article gets published.
Deadline for the next story looms. Back to the grind.
7) The article gets a life of its own – people read (or listen/view) it, comment, give feedback, or follow up with investigation digging up more information that is still not public (so the cycle repeats).
Now that the information is public, people can spread it around (e-mailing to each other, linking to it on their blogs, social networks, etc.). They bring in their own knowledge and expertise and provide feedback in various venues and some are motivated to follow up and dig deeper, perhaps uncovering more information (so the cycle repeats).
Most of science journalism is, thus, not investigative journalism. Most of it is simple reporting of the findings, i.e., second-hand reporting of the investigative reporting done by scientists (Case I). Or, as science reporters are made so busy by their editors, forced to write story after story in rapid succession about many different areas of science, most science reporting in the media is actually third-hand reporting: first-hand was by scientists in journals, second-hand was by press officers of the institutions, and the journalist mainly regurgitates the press releases. As in every game of Broken Telephones/Chinese Whispers, the first reporter is more reliable than the second one in line, who is more dependable than the third one, and so on. Thus a scientist “going direct” is likely to give a much more reliable account of the findings than the journalist reporting on it.
There are exceptions, of course. Every discussion of science journalism brings out commenters who shout the names of well-known and highly respected science journalists. The thing is, those people are not science reporters. They are science journalists only in the sense that ‘Science Writers’ is a subset of the set ‘Science Journalists’. This is a subset that is very much in a privileged position – they are given the freedom to write what, when, where and how they want. Thus, over many years, they develop their own expertise.
Carl Zimmer has, over the years, read so many papers, talked to so many experts, and written so many books, articles and blogposts, that he probably knows more about evolution, parasites and E.coli than biology PhDs whose focus is on other areas of biology. Eric Roston probably knows more about carbon than many chemistry PhDs. These guys are experts. And they are writers, not reporters. They do not get assignments to write many stories per week on different areas of science. They are not who I am talking about in this post at all.
Do they do investigative reporting? Sometimes they do, but they choose other venues for it. When George Will lied about climate change data in a couple of op-eds, Carl Zimmer used his blog, not the NYTimes Science section, to dig up and expose the facts about the industry and political influences, about George Will’s history on the issue, about the cowardly response by the Washington Post to the uncovering of these unpleasant facts, etc.
Rebecca Skloot did investigative journalism as well, over many years, and decided to publish the findings in the form of a book, not in a newspaper or magazine. That is not the work of a beat reporter.
Case III: Science Blogger
1) Someone gets a hunch, a whiff, a tip from someone, or an intuition (or orders from the boss to take a look) that some information exists that is hidden from the public.
Bloggers often look for blogging material in two distinctly different sources: the Tables of Contents of scientific journals in the fields they have expertise in, and services that distribute press releases (e.g., EurekAlert, ScienceDaily, etc.). They are also usually quite attuned to the mass media, i.e., they get their news online from many sources instead of reading just the local paper.
What many bloggers do and are especially good at doing is comparing the work of Case I and Case II investigative reporters. They can access and read and understand the scientific paper and directly compare it to the press releases and the media coverage (including the writings by other bloggers). Having the needed scientific expertise, they can evaluate all the sources and make a judgment on their quality.
Sometimes the research in the paper is shoddy but the media does not realize it and presents it as trustworthy. Sometimes the paper is good, but the media gets it wrong (usually in a sensationalist kind of way). Sometimes both the paper and the media get it right (which is not very exciting to blog about).
2) That someone then uses a whole suite of methods to discover that secret information, often against the agents that resist the idea of that information becoming available to the public.
Replicating experiments and putting that on the blog is rare (but has been done). But digging through the published data and comparing that to media reports is easy when one has the necessary expertise. Consulting with colleagues, on the rare occasions when needed, is usually done privately via e-mail or publicly on places like FriendFeed or Twitter, and there is no need to include quotes in the blog post itself.
Bloggers have done investigative digging in a journalistic sense as well – uncovering unseemly behavior of people. I have gathered a few examples of investigative reporting by science bloggers before:

Whose investigative reporting led to the resignation of Deutsch, Bush’s NASA censor? Nick Anthis, a (then) small blogger (who also later reported on the Animal Rights demonstrations and counter-demonstrations in Oxford in great detail).
Who blew up the case of plagiarism in dinosaur palaeontology, the so-called Aetogate? A bunch of bloggers.
Who blew up, skewered and castrated the PRISM, the astroturf organization designed to lobby the Senate against the NIH Open Access bill? A bunch of bloggers. The bill passed.
Remember the Tripoli 6?
Who pounced on George Will and WaPo when he trotted out the long-debunked lie about global warming? And forced them to squirm, and respond, and publish two counter-editorials? A bunch of bloggers.
Who dug up all the information, including the most incriminating key evidence against Creationists that was used at the Dover trial? A bunch of bloggers.
And so on, and so on, this was just scratching the surface with the most famous stories.

3) That someone then puts all of the gathered information in one place and looks for patterns, overarching themes, connections and figures out what it all means.
This is often a collective effort of multiple bloggers.
4) That someone then writes an article, with a specific audience in mind, showing to the public the previously secret information (often including all of it – the entire raw data sets or documents or transcripts) and explaining what it means.
The target audience of most science blogs is lay audience, but many of the readers are themselves scientists as well.
5) That someone then sends the article to the proper venue where it undergoes an editorial process.
Most blogs are self-edited. Sending a particularly ‘hot’ blog post to a couple of other bloggers asking their opinion before it is posted is something that a blogger may occasionally do.
6) If accepted for publication, the article gets published.
Click “Post”. That easy.
7) The article gets a life of its own – people read (or listen/view) it, comment, give feedback, or follow up with investigation digging up more information that is still not public (so the cycle repeats).
Feedback in comments usually comes in really fast! It is direct and straightforward, and does not follow the usual formal kabuki dance that ensures that control and hierarchy remain intact in more official venues.
Other bloggers may respond on their own blogs (especially if they disagree) or spread the link on social networks (especially if they agree).
If many bloggers raise hell about some misconduct and persist over a prolonged period of time, this sometimes forces the corporate media to pick up the hot-potato story despite their initial reluctance to do so. But this applies to all investigative reporting on blogs, not just science.
Also, bloggers are not bound by 20th-century journalistic rules – thus exposure by impersonation, as the conservative activists did to ACORN, is a perfectly legitimate way of uncovering dirt in informal venues, though not legit in corporate ones.
One more point that needs to be made here. Different areas of science are different!
Biomedical science is a special case. It is huge. It has huge funding compared to other areas, yet not sufficient to feed the armies of researchers involved in it. It attracts the self-aggrandizing type disproportionately. Much is at stake: patents, contracts with the pharmaceutical industry, money, fame, Nobel prizes… Thus it is extremely competitive. It also uses laboratory techniques that are universal and fast, so it is easy to scoop and get scooped, which fosters a culture of secrecy. It suffers from CNS disease (the necessity to publish in GlamourMagz like Cell, Nature and Science). It gets an inordinate proportion of media (and blog) attention due to its relevance to human health. All those pressures make the motivation to fudge data too strong for some of the people involved – very few, for sure, out of the tens of thousands involved.
On the other end of the spectrum is, for example, palaeontology. Very few people can be palaeontologists – not enough positions and not enough money. There is near-zero risk of getting scooped as everyone knows who dug what out, where, during which digging season (Aetogate, linked above, was a special case of a person using a position of power to mainly scoop powerless students). Your fossil is yours. The resources are extremely limited and so much depends on luck. Discovering a cool fossil is not easy and if you get your hands on one, you have to milk it for all it’s worth. You will publish not one but a series of papers. First paper is a brief announcement of the finding with a superficial description, the second is a detailed description, the third is the phylogenetic analysis, the fourth focuses on one part of the fossil that can say something new about evolution, etc. And you hope that all of this will become well-known to the general public. The palaeo community is so small, they all already know. They will quibble forever with you over the methodology and conclusions (so many assumptions have to go into methods that analyze old, broken bones). It is the lay audience that needs to be reached, by any means necessary. Many paleontologists don’t even work as university professors but are associated with museums, nature magazines, or are freelancing. The pressure to publish in GlamourMagz is there only as a means to get the attention of the media, not to impress colleagues or rise in careers.
Most of science and most scientists, on the other hand, belong to neither of these two fields and do not work at high-pressure universities. They do science out of their own curiosity, feel no pressure to publish a lot or in GlamourMagz, do not fear scooping, are open and relaxed, and have no motivation to fudge data or plagiarize. They know that their reputation with their peers – the only reputation they can hope to get – depends entirely on immaculate work and behavior. Why treat them as suspect because two media-prominent sub-sub-disciplines sometimes produce less-than-honest behavior? Why not trust that their papers are good, their press releases correct, their blogging honest, and their personal behavior impeccable? I’d say they are presumed innocent unless proven guilty, not the other way around.
I’d like to see an equivalent of Futurity.org for state universities and small colleges. What a delightful source of cool science that would be!
Update: blogging at its best. After a couple of hit-and-run curmudgeonly comments posted early on, this post started receiving some very thoughtful and useful comments (especially one by David Dobbs) that are edifying and are helping me learn – which is the point of blogging in the first place, isn’t it?

PLoS & Mendeley live on the Web! Science Hour with Leo Laporte & Dr. Kiki (video)

Leo Laporte and Kirsten Sanford (aka Dr. Kiki) interviewed (on Twit.tv) Jason Hoyt from Mendeley and Peter Binfield from PLoS ONE about Open Access, Science 2.0 and new ways of doing and publishing science on the Web. Well worth your time watching!

Research Triangle Park

My regular readers probably remember that I blogged from the XXVI International Association of Science Parks World Conference on Science & Technology Parks in Raleigh, back in June of this year.
I spent the day today at the headquarters of the Research Triangle Park, participating in a workshop about the new directions the park will take in the future. It is too early to blog about the results of this session (though the process will be open), but I thought this would be a good time to re-post what I wrote from the June conference and my ideas about the future of science-technology parks – under the fold:

Continue reading

Not-so-self-correcting science: the hard way, the easy way, and the easiest way

Two recent events put in stark relief the differences between the old way of doing things and the new way of doing things. What am I talking about? The changing world of science publishing, of course.
Let me introduce the two examples first, and make some of my comments at the end.
Example 1. Publishing a Comment about a journal article
My SciBling Steinn brought to our collective attention a horrific case of a scientist who spent a year fighting against the editors of a journal, trying to have a Comment published about a paper that was, in his view, erroneous (for the sake of the argument it does not matter if the author of the original paper or the author of the Comment was right – this is about the way the system works, er, does not work). You can read the entire saga as a PDF – it will make you want to laugh and cry and in the end scream with frustration and anger. Do not skip the Addendum at the end.
Thanks to Shirley Wu for putting that very long PDF into a much more manageable and readable form so you can easily read the whole thing right here:
See? That is the traditional way for science to be ‘self-correcting’… Sure, this is a particularly egregious example, but it is the system itself that allows such a case to exist at the edge of the continuum – this is not a unique case, just a little bit more extreme than usual.
Janet wrote a brilliant post (hmmm, it’s Janet… was there ever a time I linked to her without noting it was a “brilliant post”? Is it even possible to do?) dissecting the episode and hitting all the right points, including, among others, these two:

Publishing a paper is not supposed to bring that exchange to an end, but rather to bring it to a larger slice of the scientific community with something relevant to add to the exchange. In other words, if you read a published paper in your field and are convinced that there are significant problems with it, you are supposed to communicate those problems to the rest of the scientific community — including the authors of the paper you think has problems. Committed scientists are supposed to want to know if they’ve messed up their calculations or drawn their conclusions on the basis of bad assumptions. This kind of post-publication critique is an important factor in making sure the body of knowledge that a scientific community is working to build is well-tested and reliable — important quality control if the community of science is planning on using that knowledge or building further research upon it.
———-snip———-
The idea that the journal here seems to be missing is that they have a duty to their readers, not just to the authors whose papers they publish. That duty includes transmitting the (peer reviewed) concerns communicated to them about the papers they have published — whether or not the authors of those papers respond to these concerns in a civil manner, or at all. Indeed, if the authors’ response to a Comment on their paper were essentially, “You are a big poopyhead to question our work!” I think there might be a certain value in publishing that Reply. It would, at least, let the scientific community know about the authors’ best responses to the objections other scientists have raised.

Example 2: Instant replication of results
About a month ago, a paper came out in the Journal of the American Chemical Society, which suggested that a reductant acted as an oxidant in a particular chemical reaction.
Paul Docherty, of the Totally Synthetic blog, posted about a different paper from the same issue of the journal the day it came out. The very second comment on that post pointed out that something must be fishy about the reductant-as-oxidant paper. And then all hell broke loose in the comments!
Carmen Drahl, in the August 17 issue of C&EN describes what happened next:

Docherty, a medicinal chemist at Arrow Therapeutics, in London, was sufficiently intrigued to repeat one of the reactions in the paper. He broadcast his observations and posted raw data on his blog for all to read, snapping photos of the reaction with his iPhone as it progressed. Meanwhile, roughly a half-dozen of the blog’s readers did likewise, each with slightly different reaction conditions, each reporting results in the blog’s comment section.

The liveblogging of the experiment by Paul and the commenters is here. Every single one of them failed to replicate the findings, and they came up with possible reasons why the authors of the paper got an erroneous result. The paper, while remaining on the Web, was not published in the hard-copy version of the journal, and the original authors, the journal and the readers are working on figuring out exactly what happened in the lab – which may actually be quite informative and novel in itself.
Compare and contrast
So, what happened in these two examples?
In both, a paper with presumably erroneous data or conclusions passed peer-review and got published.
In both, someone else in the field noticed it and failed to replicate the experiments.
In both, that someone tried to alert the community that is potentially interested in the result, including the original authors and the journal editors, in order to make sure that people are aware of the possibility that something in that paper is wrong.
In the first example, the authors and editors obstructed the process of feedback. In the second, the authors and editors were not in a position to obstruct the process of feedback.
In the first example, the corrector/replicator tried to go the traditional route and got blocked by gatekeepers. In the second example, the corrector/replicator went the modern route – bypassing the gatekeepers.
If you had no idea about any of this, and you are a researcher moving in from a semi-related field, and you find the original paper via search, what are the chances you will know that the paper is being disputed?
In the first example – zero (until last night). In the second example – large. But in both cases, in order to realize that the paper is contested, one has to use Google! Not just read the paper itself and hope it’s fine. You gotta google it to find out. Most working scientists do not do that yet! It is not part of the research culture at this time, unfortunately.
Even if the Comment had been published in the first example, the chances that a reader of the paper would then search later issues of the journal for comments and corrections are very small. Thus even if the Comment (and a Reply by the authors) were published, nobody but a very small inner circle of people currently working on that very problem would ever know.
Back in grad school I was a voracious reader of the literature in my field, including some very old papers. Every now and then I would bump into a paper that seemed really cool. Then I would wonder why nobody ever followed up or even cited it! I’d ask my advisor who would explain to me that people tried to replicate but were not successful, or that this particular author is known to fudge data, etc. That is tacit knowledge – something that is known only by a very small number of people in an Inner Circle. It is a kind of knowledge that is transmitted orally, from advisor to student, or in the hallways at meetings. People who come into the field from outside do not have access to that information. People in the field who live in far-away places and cannot afford to come to conferences do not have access to that information.
Areas of research also go in and out of fashion. A line of research may bump into walls and the community abandons it only to get picked up decades later once the technological advances allow for further studies of the phenomenon. In the meantime, the Inner Circle dispersed, and the tacit knowledge got lost. Yet the papers remain. And nobody knows any more which paper to trust and which one not to. Thus one cannot rely on published literature at all! It all needs to be re-tested all over again! Yikes! How much money, time and effort would have to be put into that!?
Now let’s imagine that lines of research in our two Examples go that way: get abandoned for a while. Let’s assume now that 50 years from now a completely new generation of scientists rediscovers the problem and re-starts studying it. All they have to go with are some ancient papers. No Comment was ever published about the paper in the first Example. Lots of blogging about both afterwards. But in 50 years, will those blogs still exist, or will all the links found on Google (or whatever is used to search stuff online in 50 years) be rotten? What are the chances that the researchers of the future will be able to find all the relevant discussions and refutation of these two papers? Pretty small, methinks.
But what if all the discussions and refutations and author replies are on the paper itself? No problem then – it is all public and all preserved forever. The tacit knowledge of the Inner Circle becomes public knowledge of the entire scientific community. A permanent record available to everyone. That is how science should be, don’t you think?
You probably know that, right now, only BMC, BMJ and PLoS journals have this functionality. You can rate articles, post notes and comments and link/trackback to discussions happening elsewhere online. Even entire Journal Clubs can happen in the comments section of a paper.
Soon, all scientific journals will be online (and probably only online). Next, all the papers – past, present and future – will become freely available online. The limitations of paper will be gone, and nothing will prevent publishers from implementing more dynamic approaches to scientific publishing – including commentary directly on the papers.
If all the journals started implementing comments on their papers tomorrow I would not cry “copycats!” No. Instead, I’d be absolutely delighted. Why?
Let’s say that you read (or at least skim) between a dozen and two dozen papers per day. You found them through search engines (e.g., Google Scholar), or through reference managers (e.g., CiteULike or Mendeley), or as suggestions from your colleagues via social networks (e.g., Twitter, FriendFeed, Facebook). Every day you will land on papers published in many different journals (it really does not matter any more which journal the paper was published in – you have to read all the papers, good or bad, in your narrow domain of interest). Then one day you land on a paper in PLoS and you see the Ratings, Notes and Comments functionality there. You shake your head – “Eh, what’s this weird newfangled thing? What will they come up with next? Not for me!” And you move on.
Now imagine if every single paper in every single journal had those functionalities. You see them between a dozen and two dozen times a day. Some of the papers actually have notes, ratings and comments submitted by others which you – being a naturally curious human being – open and read. Even if you are initially a skeptical curmudgeon, your brain will gradually get trained. The existence of comments becomes the norm. You become primed….and then, one day, you will read a paper that makes you really excited. It has a huge flaw. It is totally crap. Or it is tremendously insightful and revolutionary. Or it is missing an alternative explanation. And you will be compelled to respond. ImmediatelyRightThisMoment!!!11!!!!11!!. In the old days, you’d just mutter to yourself, perhaps tell your students at the next lab meeting. Or even brace yourself for the long and frustrating process (see Example 1) of submitting a formal Comment to the journal. But no, your brain is now primed, so you click on “Add comment”, you write your thoughts and you click “Submit”. And you think to yourself “Hey, this didn’t hurt at all!” And you have just helped thousands of researchers around the world today and in the future have a better understanding of that paper. Permanently. Good job!
That’s how scientific self-correction in real time is supposed to work.

Praxis

A run-down of good recent stuff, highly recommended for your weekend reading and bookmarking:
PLoS One: Interview with Peter Binfield:

…In my view PLoS ONE is the most dynamic, innovative and exciting journal in the world, and I am proud to work on it.
In many ways PLoS ONE operates like any other journal; however, it diverges in several important respects. The founding principle of PLoS ONE was that there are certain aspects of publishing which are best conducted pre-publication and certain aspects which are best conducted post-publication. The advent of online publishing has allowed us to take a step back and re-evaluate these aspects of how we publish research, without the burden of centuries of tradition. In this way, we have been able to experiment with new ways of doing things which may result in dramatic improvements in the entire process of scholarly publication.
The most important thing which has come out of this premise is that unlike almost every other journal in the world, we make no judgment call whatsoever on the ‘impact’ or ‘significance’ or ‘interest level’ of any submission. What this means is that if an article appropriately reports on well-conducted science, and if it passes our peer review process (which determines whether it deserves to join the scientific literature) then we will publish it. In this way, no author should ever receive the message that their article is scientifically sound but ‘not interesting enough’ for our journal, or that their article is ‘only suited to a specialized audience’. As a result, we short circuit the vicious cycle of “submit to a ‘top tier’ journal; get reviewed; get rejected; submit to the next journal down the list; repeat until accepted” and we are therefore able to place good science into the public domain as promptly as possible, with the minimum of burden on the academic community….

The evolution of scientific impact (also a good FriendFeed thread about it):

What is clear to me is this – science and society are much richer and more interconnected now than at any time in history. There are many more people contributing to science in many more ways now than ever before. Science is becoming more broad (we know about more things) and more deep (we know more about these things). At the same time, print publishing is fading, content is exploding, and technology makes it possible to present, share, and analyze information faster and more powerfully.
For these reasons, I believe (as many others do) that the traditional model of peer-reviewed journals should and will necessarily change significantly over the next decade or so.

A threat to scientific communication (read excellent responses by Peter Murray-Rust and Bjoern Brembs and a thread on FriendFeed):

Sulston argues that the use of journal metrics is not only a flimsy guarantee of the best work (his prize-winning discovery was never published in a top journal), but he also believes that the system puts pressure on scientists to act in ways that adversely affect science – from claiming work is more novel than it actually is to over-hyping, over-interpreting and prematurely publishing it, splitting publications to get more credits and, in extreme situations, even committing fraud.
The system also creates what he characterises as an “inefficient treadmill” of resubmissions to the journal hierarchy. The whole process ropes in many more reviewers than necessary, reduces the time available for research, places a heavier burden on peer review and delays the communication of important results.

Why do we still publish scientific papers?:

I agree with the need to filter papers, but I want to be in control of the filter. I don’t want editors to control my filter and I definitely don’t want a monopolist like Thomson to muck up my filter. I don’t care where something is published, if it’s in my direct field I need to read it, no matter how bad it is. If a paper is in my broader field, I’d apply some light filtering, such as rating, comments, downloads, author institute, social bookmarks, or some such. If the paper is in a related field, I’d like to only read reviews of recent advances. If it’s in an unrelated field, but one I’m interested in nonetheless, I’d only want to see the news-and-views article, because I wouldn’t understand anything else anyway. For everything else, titles, headlines or newsreports are good enough for browsing. All of this can be done after publishing and certainly doesn’t require any artificial grouping by pseudo-tags (formerly called journals).

Science Jabberwocky (how to read/understand a scientific paper when you don’t know the technical terms):

I have to confess that in areas outside mine, there seems to be a terrible array of words no more obvious than ‘brillig’ and ‘slithy’. And words that look familiar, like ‘gyre and gimble’, but which don’t look like they are supposed to mean what I’m used to them meaning.

Media tracking:

The theropod behaviour paper that I have been boring you all with this last week or so has been the first time I have had decent control over the media access to my work and by extension the first time I have had a good idea of what happened to the original press release. I know what I sent to whom and when and thus can fairly easily track what happened afterwards to record the spread and exchange of information from that origin. In the past on the Musings I have targeted inaccuracies in news reports of scientific stories but without knowing the exact details of a story (I may have access to the press release but without knowing who it went to). Well, not so this time and as a result the pattern of reporting I can see is both interesting and informative both from understanding how the media works and knowing how to get your own work publicised.

Rapid evolution of rodents: another PLoS ONE study in the media:

Although media attention and coverage is not, and should certainly not be, the only criterion for scientific “quality” (whatever that is!), it is further testimony of the advantage to publish in “Open Access”-journals in general, and PLoS ONE in particular. This study is also interesting because it shows the value of museum collections as a source for ecological and evolutionary research, a point that Shawn Kuchta has repeatedly emphasized in our lab-meetings (and which I completely agree with, of course).

20 Quick Points from ‘The World Is Open: How Web Technology Is Revolutionizing Education’:

9. Open Access Journals (Opener #5): The publishing world is increasingly becoming open access. Open access journals in the healthcare area provide invaluable information to those in the developing world. The Public Library of Science (PLoS) offers free peer-reviewed scientific journals. Scientists who publish in PLoS journals might present their work in SciVee. SciVee allows the user to hear or see the scientist explain his or her research in what is known as pubcasts.

Pedagogy and the Class Blog:

I’ve been using blogs in my teaching for several years now, so I wanted to share a few ideas that have worked for me. I’m no expert and I’m still casting about for solutions to some of the more nagging problems, but after thirteen course blogs spread across seven semesters (I just counted!), I have obtained a small measure of experience. In other words, I keep making mistakes, but at least not the same ones over and over.

Practicing Medicine in the Age of Facebook:

In my second week of medical internship, I received a “friend request” on Facebook, the popular social-networking Web site. The name of the requester was familiar: Erica Baxter. Three years earlier, as a medical student, I had participated in the delivery of Ms. Baxter’s baby. Now, apparently, she wanted to be back in touch…..

Are young people of today Relationally Starved?:

The more I toss it around, I’m not so sure that our students are “relationally starved.” I just think that relationships look much different today than they have in generations past. Their relationships are more fluid and maybe a little more fragile. It is obvious that advances in technology have changed the way relationships are built and maintained (it has for me). This doesn’t mean that children aren’t in need of the same nurturing and love that we might have had, but there are other layers that we need to ask them about. And I think that might be the key, ASK THEM!

The New Yorker vs. the Kindle:

Now, let’s imagine for a moment that we are back in the 15th century, to be precise just shortly after 1439, when Johannes Gensfleisch zur Laden zum Gutenberg invented movable type printing. I can only imagine the complaints that Baker would have uttered in the local paper (which was, of course, copied by hand from the original dictation). What? Only one title on the catalog? (The Bible.) Oh, and the fonts are sooo boring compared to handwriting. And no colors! And the quality of the drawings, simply unacceptable. This movable type printing thing will never ever replace the amanuenses, it will simply die as yet another “modern invention” and things will keep being just the same as they have been throughout what they at the time didn’t yet call the Middle Ages.

The New Yorker & The News Biz:

After many years, I am finally subscribing to the New Yorker again. Not in print, but via their Digital Reader. I’m blogging about it because I like their model: the Digital Reader adds something I wouldn’t get from the library version, and I feel like this new model bears watching as we migrate from print to online.

The psychology of reading for pleasure:

According to a neurological study that Nell performed, processing demands are higher with books than other media (movies, television) but that also means that when you are absorbed in a book, you are more likely to block out distractions. While readers describe being absorbed in a book as “effortless,” their brains are actually intensely active. As one critic said, this is not an escape from thinking, it’s an escape into thinking – intensely, and without distraction.

How Twitter works in theory:

The key to Twitter is that it is phatic – full of social gestures that are like apes grooming each other. Both Google and Twitter have little boxes for you to type into, but on Google you’re looking for information, and expecting a machine response, whereas on Twitter you’re declaring an emotion and expecting a human response. This is what leads to unintentionally ironic newspaper columns bemoaning public banality, because they miss that while you don’t care what random strangers feel about their lunch, you do if it’s your friend on holiday in Pompeii.
——————–
For those with Habermas’s assumption of a single common public sphere this makes no sense – surely everyone should see everything that anyone says as part of the discussion? In fact this has never made sense, and in the past elaborate systems have been set up to ensure that only a few can speak, and only one person can speak at a time, because a speech-like, real-time discourse has been the foundational assumption.
Too often this worldview has been built into the default assumptions of communications online; we see it now with privileged speakers decrying the use of anonymity in the same tones as 19th century politicians defended hustings in rotten boroughs instead of secret ballots. Thus the tactics of shouting down debate in town halls show up as the baiting and trollery that make YouTube comments a byword for idiocy; when all hear the words of one, the conversation often decays.

Blogging Evolution (PDF):

I describe the general characteristics of blogs, contrasting blogs with other WWW formats for self-publishing. I describe four categories for blogs about evolutionary biology: “professional,” “amateur,” “apostolic,” and “imaginative.” I also discuss blog networks. I identify paradigms of each category. Throughout, I aim to illuminate blogs about evolutionary biology from the point of view of a user looking for information about the topic. I conclude that blogs are not the best type of source for systematic and authoritative information about evolution, and that they are best used by the information-seeker as a way of identifying what issues are of interest in the community of evolutionists and for generating research leads or fresh insights on one’s own work.

What Do Mathematicians Need to Know About Blogging?:

Steven Krantz asked me to write an opinion piece about math blogging in the Notices of the American Mathematical Society. I asked if I could talk about this column on my blog, and even have people comment on drafts of it before it comes out in the Notices. He said okay. So, just to get the ball rolling, let me ask: what do you think mathematicians need to know about blogging?

Five Key Reasons Why Newspapers Are Failing and Five Key Reasons Why Newspapers Are Failing, pt. 2:

Journalists are pretty good at working the scene of a disaster. They’ll tell you what happened, who did it, and why.
But when it comes to the disaster engulfing their own profession, their analysis is less rigorous. An uncharacteristic haze characterizes a lot of the reporting and commentary on the current crisis of the industry.
It could have been brought on by delicacy, perhaps romanticism. And since it is not just any crisis, but a definitive one–one that seems to mean an end to the physical papers’ role in American life as we have come to know it–perhaps there’s a little bit of shell-shock in the mix as well.

Online Community Building: Gardening vs Landscaping:

The Gardener creates an ecosystem open to change, available to new groups, and full of fresh opportunities to emerge naturally. The approach is focused on organic collaboration and growth for the entire community. The gardener is simply there to help, cultivate, and clear the weeds if/when they poke up.
The Landscaper creates an ecosystem that matches a preconceived design or pattern. The approach is focused on executing a preconceived environment, regardless of how natural or organic it may be for the larger area. The landscaper is there to ensure that everything stays just as planned.

Don’t Be Such a Scientist: Talking Substance in an Age of Style (book review):

So I end up feeling a bit torn. He’s telling us “Don’t be such a scientist”, and it’s true that there are many occasions when the scientific attitude can generate unnecessary obstacles to accomplishing our goals. At the same time, though, I want to say “Do be such a scientist”, because it’s part of our identity and it makes us stand out as unusual and, like Randy, interesting, even if it sometimes does make us a bit abrasive. But, you know, some of us revel in our abrasiveness; it’s fun.

This has also been in the news a lot last week:
Threats to science-based medicine: Pharma ghostwriting
Wyeth, ‘Ghost-Writing’ and Conflict of Interest
More On Ghostwriting, Wyeth and Hormone Replacement Therapy
Wyeth’s ghostwriting skeletons yanked from the closet
Ghostwriters in the sky
Quickie Must-Read Link … (probably the best commentary of them all).
Several recent posts on the topic dear to my heart – the so-called “civility” in public (including online) discourse:
How Creationism (and Other Idiocies) Are Mainstreamed:

One of the things that has enabled the mainstreaming of various idiocies, from altie woo, to creationism, to global warming denialism is mainstream corporate media’s inability to accurately describe lunacy. For obvious reasons, ‘family-friendly’ newspapers and teevee can’t call creationists, birthers, or deathers batshit lunatic or fucking morons. This is where ‘civility’ (beyond the basic norms of decency when dealing with the mentally ill) and pretensions of ‘balance’ utterly fail.

Weekend Diversion: How to Argue:

You are, of course, free to argue however you like. But if you want to argue on my site, you’re really best off remembering this hierarchy, and staying as high as possible on it. Most of you do pretty well, but this has served me well in general, and I hope it helps you to see things laid out like this. And if not, at least you got a great song out of it!

When an image makes an argument:

Along similar lines to a frequentist interpretation of the strata, maybe this pyramid is conveying something about the ease or difficulty inherent in different types of engagement. It doesn’t take a lot of effort to call someone an asshat, but understanding her argument well enough to raise a good counterexample to it may take some mental labor. If this is the rhetorical work that the pyramidal layout does here, it may also suggest a corresponding hierarchy of people who have the mental skills to engage in each of these ways — making the people at the tippy-top of the pyramid more elite than those using the strategies from lower strata.

How to Argue…:

White men are sufficiently privileged enough to demand that they be treated respectfully while white women, at best, can expect to be presented with contradiction and counterargument. When I saw the category “responding to tone” I thought of the “angry black man” who, although perhaps right, is castigated for his anger and lack of civility for not conforming to the norms of white society. If you’re a non-white woman? Then, the best you can do is hope to not be denied food and shelter if you don’t fuck your husband enough (h/t to Free-Ride for pointing this article out), but you only expect to be part of the discussion if you’re allowed to be.
————————–
The call to civility is a frequent tactic to derail the discussion and is as much of an ad hominem attack as calling someone a cocknozzle. It fails to recognize the perspective of the other party or appreciate why they might be angry.

More on the topic:
Dr. Isis Learns to Argue:

I am lucky to have such thoughtful commenters. When I wrote the previous post I had no idea that bleeding from my vagina was clouding my judgement. Then, just when I thought I had cleared enough of the estrogen from my girl brain to understand, I learned that this was all a carefully planned tactic to teach me a lesson. Damn! I hate when that happens!

Weekend Diversion: How to argue…and actually accomplish something:

Here we arrive at the meat of the matter. Once having accomplished more than about 300 ms worth of consideration of a given topic, people are highly resistant to the idea that their rationale, conclusions and evidence base might actually be wrong. And the wronger the consideration might be, the more resistant to acknowledgment is the individual. We might think of this as the intrapersonal Overton window.

A Tale of Two Nations: the Civil War may have been won by the North, but in truth the South never emotionally conceded.:

The Civil War may have been won by the North, but in truth the South never emotionally conceded.
The Town Hall mobs, the birthers, the teabaggers are all part of that long line of “coded” agitators for the notions of white entitlement and “conservative values.”
Of course, this conservative viewpoint values cheap labor and unabated use of natural resources over technological and economic innovation. It also – and this is its hot molten core – fundamentally believes that white people are born with a divine advantage over people of other skin colors, and are chosen by God to lead the heathen hordes.
That a Town Hall mob is itself a heathen horde would never occur to the economically stressed whites who listen to the lies of the likes of Glenn Beck, Sean Hannity, Rush Limbaugh and Lou Dobbs. Lies that confirm an emotionally reinforcing worldview – however heinous – become truths for those in psychological need of feeling superior and chosen.

I remember an America where black men didn’t grow up to be President.:

And all of them are asking for their America back. I wonder which America that would be?
Would that be the America where the Supreme Court picks your president instead of counting all the votes? Would that be the America where rights to privacy are ignored? Would that be the America where the Vice President shoots his best friend in the face? Or would that be the America where an idiot from Alaska and a college drop-out with a radio show could become the torchbearers for the now illiterate Republican party?
I fear that would not be the America they want back. I fear that the America they want back is the one where black men don’t become President.
I remember that America. In that America people screaming at public gatherings were called out for what they were – an angry mob. Of course, they wore sheets to cover up their bad hair. Let’s be clear about something: if you show up to a town hall meeting with a gun strapped to your leg, the point you are trying to make isn’t a good one. Fear never produced anything worthwhile.

In America, Crazy Is a Preexisting Condition:

The tree of crazy is an ever-present aspect of America’s flora. Only now, it’s being watered by misguided he-said-she-said reporting and taking over the forest. Latest word is that the enlightened and mild provision in the draft legislation to help elderly people who want living wills — the one hysterics turned into the “death panel” canard — is losing favor, according to the Wall Street Journal, because of “complaints over the provision.”

Two oldies but goodies:
Atheists and Anger:

One of the most common criticisms lobbed at the newly-vocal atheist community is, “Why do you have to be so angry?” So I want to talk about:
1. Why atheists are angry;
2. Why our anger is valid, valuable, and necessary;
And 3. Why it’s completely fucked-up to try to take our anger away from us.

Atheists and Anger: A Reply to the Hurricane:

Now my replies to the critics. I suppose I shouldn’t bother, I suppose I should just let it go and focus on the love. But I seem to be constitutionally incapable of letting unfair or inaccurate accusations just slide. So here are my replies to some of the critical comments’ common themes.

The Privilege of Politeness:

One item that comes up over and over in discussions of racism is that of tone/attitude. People of Color (POC) are very often called on their tone when they bring up racism, the idea being that if POC were just more polite about the whole thing the offending person would have listened and apologized right away. This not only derails the discussion but also tries to turn the insults/race issues into the fault of POC and their tone. Many POC have come to the realization that the expectation of politeness when saying something insulting is a form of privilege. At the core of this expectation of politeness is the idea that the POC in question should teach the offender what was wrong with their statement. Because in my experience what is meant by “be polite” is “teach me”, teach me why you’re offended by this, teach me how to be racially sensitive and the bottom line is that it is no one’s responsibility to teach anyone else. And even when POC are as polite as possible there is still hostility read into the words because people are so afraid of being called racist that they would rather go on offending than deal with the hard road of confronting their own prejudices.

Science Online London 2009 – now in Second Life

Science Online London is next week. I really wanted to go this year, but hard choices had to be made….eh, well.
For those of you who, like me, cannot be there in person, there are plenty of ways to follow the meeting virtually. Follow @soloconf and the #solo09 hashtag on Twitter. Join the FriendFeed room. Check out the Facebook page. And of course there will be a lot of blogging, including in the Forums at Nature Network.
And for those of you who have computers with enough power and good graphics cards, another option is to follow the conference in Second Life – check that link to see how.

Cameron Neylon on Article Level Metrics

On Vimeo:

Article-level Metrics from PLoS on Vimeo.

Praxis

Open Access and the divide between ‘mainstream’ and ‘peripheral’ science (also available here and here) by Jean-Claude Guédon is a Must Read of the day. Anyone have his contact info so I can see if he would come to ScienceOnline’10?
There is a whole bunch of articles about science publication metrics in the latest ESEP THEME SECTION – The use and misuse of bibliometric indices in evaluating scholarly performance. Well worth studying. On article-level metrics, there are some interesting reactions in the blogosphere, by Deepak Singh, Bjoern Brembs, Duncan Hull, Bill Hooker and Abhishek Tiwari. Check them out. Of course, all of those guys are also on FriendFeed, where more discussion occurred.
Can someone use FOIA to sneak a peek into your grant proposal and check out your preliminary data? See the discussion on the DrugMonkey blog, on Dr.Isis’ blog and on Heather Etchevers’ blog. My beef: don’t use the term “Open Access” for this as it is not related. This is not even Open Notebook Science – which, and I am on record in several places about this, MUST be voluntary and does not fit everyone.

Gavin Yamey, the Senior Magazine Editor of PLoS Medicine, is currently on sabbatical from the journal after being awarded a “mini-fellowship” from the Kaiser Family Foundation to undertake a project as a reporter in East Africa and Sudan.

His first two posts about this: Reporting from East Africa and Sudan and Far from the reach of global health programs.
The “article of the future” by Cell/Elsevier, analyzed by DrugMonkey, Kent Anderson, Marshall Kirkpatrick and Martin Fenner.

Books: ‘Bonk: The Curious Coupling of Science and Sex’ by Mary Roach

A few years ago, I read Mary Roach’s first book, Stiff: The Curious Lives of Human Cadavers and absolutely loved it! One of the best popular science books I have read in a long time – informative, eye-opening, thought-provoking and funny. Somehow I missed finding time to read her second (Spook: Science Tackles the Afterlife – I guess just not a topic I care much about), but when her third book came out, with such a provocative title as Bonk: The Curious Coupling of Science and Sex, I could not resist.
And I was not disappointed. It is informative, eye-opening, thought-provoking and funny. The language we use to talk about sex (and death) is so rich, and so full of thinly (or thickly) veiled allusions, that playing with that language is easy. Puns and double-entendres come off effortlessly and yet never seem to grow old. And the effect of interspersing serious discussion of science with what amounts to, essentially, Kindergarten humor, makes the humor effective. I guess it is the effect of surprise. The same humor in a different context (or outside of any context) may not be as effective or funny. The book made me laugh out loud on many occasions, startling the other B-767 passengers on the trans-Atlantic flight a couple of weeks ago (if it was B-777, as American Airlines promised, I would have slept, but the smaller airplane made that impossible, so I read about sex instead).
I should not point out any specific examples of research described in the book – there’s so much of it – as I don’t want to take the wind out of Scicurious’ sails: she uses the book as a starting point for many of her Friday Weird Science posts.
And I will not even attempt to write a real book review (see the review by Scicurious and the series of posts on The Intersection for more details. Also check out Greta Christina and Dr.Joan for different takes).
Instead, I will mention something that I kept noticing over and over again in each chapter. An obsession of mine, or a case of a person with a hammer seeing nails everywhere, you decide.
On one hand, the history of science shows a trajectory of ever-improving standards of research, more and more stringent criteria for statistics and drawing conclusions from the data, more and more stringent ethical criteria for the use of animal and human subjects in research, etc. As time goes on, the results of scientific research are becoming more and more reliable (far from 100%, of course, but a huge improvement over Aristotle, Galen or the Ancient Chinese, who could write down their wildest ideas with authoritative flair).
On the other hand, the language of science has become, over time, more and more technical and unintelligible to a lay reader. The ancient ‘scientific’ and ‘medical’ scripts, the books of 300 years ago, the Letters to the Academy of 200 years ago, the early scientific papers of 100 years ago – all of those were readable and understandable by everyone who could read. Of course, in the past, only the most educated sliver of the society was literate. Today, most people are literate (ignoring, for the moment, some geographical differences in the rates of literacy). But even the most educated sliver of the society, unless they are experts in the same scientific field, cannot understand a scientific paper.
Thus, as the science gets ever more reliable through history, it also becomes less and less understandable to an educated lay reader. Why is that so?
In the past, the educated lay reader was the intended audience for the scientific and medical writings. Today, the intended audience are colleagues. The papers are hidden behind paywalls and accessible only to people in big First World research institutions where the libraries have sufficient funds to pay for journal subscriptions. The communication to the lay audience is relegated to the non-experts: the media (which does an awful job of it) and science writers (who often do a great job, but their audience is severely limited to self-selected science aficionados).
I have been wondering for a while now (see the end of this post for an early example – and we had an entire session on the topic at ScienceOnline’09) if Open Access and the new metrics (that include media/blog coverage, downloads and bookmarks – all requiring that as many people as possible can understand the paper itself) will prompt authors of scientific papers to write keeping broader audiences in mind. Even if the “Materials and Methods” and “Results” sections need to remain technical, perhaps the Abstract, Introduction and Discussion (and in more and more journals also the “Author’s Summary”) will become more readable? At least the titles should be clear – and sometimes funny.
Last week I asked (on Twitter, FriendFeed and Facebook – but FriendFeed, again, proved to be the best platform for this kind of inquiry) for examples of witty, normal-language titles of scientific papers. You can see some responses here and everyone reminded me of NCBI ROFL, the blog that specializes in finding wacky papers with wacky titles. Many, but certainly not all, of such titles indeed cover the science of sex.
Do you see this trend towards abandoning unreadable scientese (at least in titles) happening now or in the near future? Is it more likely to happen in OA journals? Do you have good examples?
In the meantime, watch Mary Roach – see why humor is an important aspect of science communication to lay audiences:

Measuring scientific impact where it matters

Everyone and their grandmother knows that Impact Factor is a crude, unreliable and just wrong metric to use in evaluating individuals for career-making (or career-breaking) purposes. Yet, so many institutions (or rather, their bureaucrats – scientists would abandon it if their bosses would) cling to IF anyway. Probably because nobody has pushed for a good alternative yet. In the world of science publishing, when something needs to be done, usually people look at us (that is: PLoS) to make the first move. The beauty of being a path-blazer!
So, in today’s post ‘PLoS Journals – measuring impact where it matters’ on the PLoS Blog and everyONE blog, PLoS Director of Publishing Mark Patterson explains how we are moving away from the IF world (basically by ignoring it, despite our journals’ potential for marketing via their high IFs, until the others catch up with us and start ignoring it as well) and focusing our energies on providing as many article-level metrics as possible instead. Mark wrote:

Article-level metrics and indicators will become powerful additions to the tools for the assessment and filtering of research outputs, and we look forward to working with the research community, publishers, funders and institutions to develop and hone these ideas. As for the impact factor, the 2008 numbers were released last month. But rather than updating the PLoS Journal sites with the new numbers, we’ve decided to stop promoting journal impact factors on our sites all together. It’s time to move on, and focus efforts on more sophisticated, flexible and meaningful measures.

In a series of recent posts, Peter Binfield, managing editor of PLoS ONE, explained the details of article-level metrics that are now employed and displayed on all seven PLoS journals. These are going to be added to and upgraded regularly, whenever we and the community feel there is a need to include another metric.
What we will not do is try to reduce these metrics to a single number ourselves. We want to make all the raw data available to the public to use as they see fit and we will all watch as the new standards emerge. We feel that different kinds of metrics are important to different people in different situations, and that these criteria will also change over time.
It may be important to you that a paper of yours is seen by your peers (perhaps for career-related reasons, which are nothing to frown about), in which case the citation numbers and download statistics may matter much more than the bookmarking statistics, the media/blog coverage or the on-article user activity (e.g., ratings, notes and comments). At least for now – this may change in the future. But you may think another paper is particularly important for physicians around the world (or science teachers, or political journalists, etc.) to see, in which case media/blog coverage numbers matter much more to you than citations – you are measuring your success by how broad an audience you can reach.
This will differ from paper to paper, from person to person, from scientific field to field, from institution to institution, and from country to country. I am sure there will be people out there who will try to put those numbers into various formulae and crunch the numbers and come up with some kind of “summary value” or “article-level impact value” which may or may not become a new standard in some places – time will tell.
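As a thought experiment, such a "summary value" could be nothing more than a weighted sum of the raw numbers, with the weights reflecting what a given reader happens to care about. The metric names and weights below are entirely invented for illustration; PLoS provides only the raw data, not any such formula:

```python
# A hypothetical "summary value" built from raw article-level metrics.
# Metric names and weights are invented for illustration only.

def composite_score(metrics, weights):
    """Weighted sum of raw article-level metrics."""
    return sum(weights.get(name, 0) * value for name, value in metrics.items())

paper = {"citations": 12, "downloads": 3400, "bookmarks": 25, "blog_posts": 4}

# A career-minded author might weight citations heavily...
career_weights = {"citations": 10, "downloads": 0.01, "bookmarks": 1, "blog_posts": 1}
# ...while one measuring public reach might weight blog coverage instead.
outreach_weights = {"citations": 1, "downloads": 0.01, "bookmarks": 1, "blog_posts": 10}

print(composite_score(paper, career_weights))    # 183.0
print(composite_score(paper, outreach_weights))  # 111.0
```

The point of the sketch is that the same raw data yields different rankings under different weightings – which is exactly why publishing the raw numbers, rather than one blessed formula, is the more flexible choice.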
But making all the numbers available is what matters most for the scientific community as a whole. And this is what we will provide. And then the others will have to start providing them as well, because authors will demand to see them. Perhaps this is a historic day in the world of science publishing….

An Innovative Use of Twitter: monitoring fish catch! Now published.

A few months ago, I posted about a very innovative way of using Twitter in science – monitoring fish catch by commercial fishermen.
The first phase of the study is now complete and the results are published in the journal Marine and Coastal Fisheries: Dynamics, Management, and Ecosystem Science 2009; 1: 143-154: Description and Initial Evaluation of a Text Message Based Reporting Method for Marine Recreational Anglers (PDF) by M. Scott Baker Jr. and Ian Oeschger. It is relatively short and easy to read, so I recommend you take a look.
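For the curious, the core of such a system is simply parsing short, structured messages into records that can be aggregated. Here is a minimal sketch under an invented message format (“SPECIES COUNT LENGTH”); the actual reporting format is described in the Baker and Oeschger paper itself:

```python
# Minimal sketch of parsing text-message catch reports.
# The "SPECIES COUNT LENGTH" format here is invented for illustration;
# it is not the actual format from the Baker & Oeschger study.

def parse_catch_report(message):
    """Turn a report like 'FLOUNDER 3 18' into a structured record."""
    species, count, length = message.strip().split()
    return {"species": species.lower(), "count": int(count), "length_in": float(length)}

reports = ["FLOUNDER 3 18", "REDDRUM 1 27"]
records = [parse_catch_report(m) for m in reports]

total_fish = sum(r["count"] for r in records)
print(total_fish)  # 4
```

Once the reports are structured records, tallying catch by species, size class or date is straightforward – which is what makes such a lightweight reporting channel attractive for citizen-science data collection.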
The next phase will continue with the program, with refinements, and will also include records of fish catch from fishing tournaments. Also, I hope to see this study presented at ScienceOnline’10 next January as an example of the forward-looking use of modern online technologies for collection of scientific data by citizen scientists.

Open Access in Belgrade

As you know, I gave two lectures here in Belgrade. The first one, at the University Library on Monday, and the second one at the Oncology Institute of the School of Medicine at the University of Belgrade. As the two audiences were different (mainly librarians/infoscientists at the first, mainly professors/students of medicine at the second) I geared the two talks differently.
You can listen to the audio of the entire thing (the second talk) here, see some pictures (from both talks) here and read (in Serbian) a blog post here, written by the incredible Ana Ivkovic, who organized my entire Belgrade “tour” this year.
The second talk was, at the last minute, moved from the amphitheater to the library, which was actually good as the online connection is, I hear, much, much better in the library. The library got crowded, but in the end everyone found a chair. What I did, as I usually do, was to come in early and open up all the websites I wanted to show in reverse chronological order, each in a separate window. Thus, the site I want to show first is on top at the beginning. When I close that window, the second site is the top window, then the third, etc. Thus I do the talk by closing windows instead of opening them (and hoping and praying that would not take too much time).
Knowing how talks usually go in the States, I prepared to talk for about 50 minutes. But, when I hit the 50-minute mark, I realized that nobody was getting restless – everyone was looking intently, jotting down URLs of sites I was showing, nodding… so I continued until I hit 60 minutes, at which time I decided to wrap up. Even then, nobody was eager to get up and leave. I was hoping I’d get a question anyway… and sure enough, I got 45 minutes of questions. Then another 20 minutes or so of people approaching me individually to ask questions….
I used the Directory of Open Access Journals as the backdrop to give a brief history of the Open Access movement, the difference between Free Access and Open Access and the distinction between Green OA and Gold OA.
Then I used the PLoS.org site to explain the brief history of PLoS and the differences between our seven journals. Of course, this being medical school, I gave some special consideration to PLoS Medicine.
Then I used the Ida – Darwinius massillae paper to explain the concept of PLoS ONE, how our peer-review is done and to show/demonstrate the functionalities on our papers, e.g., ratings, notes, comments, article-level metrics and trackbacks.
Then I used the Waltzing Matilda paper to enumerate some additional reasons why Open Access is a Good Thing.
Trackbacks were also a good segue into the seriousness by which the scientific and medical community is treating blogs these days. I showed Speaking of Medicine and EveryONE blog as examples of blogs we use for outreach and information to our community.
I showed and explained ResearchBlogging.org (which they seemed to particularly be taken with and jotted down the URL), showed and explained the visibility and respect of such blogging networks as Scienceblogs.com and Nature Network and then Connotea as an example of various experiments in Science 2.0 that Nature is conducting.
I put in a plug for ScienceOnline conferences and the Open Laboratory anthologies as yet another proof of how seriously Science 2.0 and science blogging are now being taken in the West. Then I showed 515 scientists on Twitter, The Life Scientists group and the Medicine 2.0 Microcarnival on FriendFeed as examples of the ways scientists are now using microblogging platforms for communication and collaboration. I pointed out how Pawel Szczesny, through blogging and FriendFeed, got collaborations, publications and, in the end, his current job.
Then I described Jean-Claude Bradley’s concept (and practice) of Open Notebook Science and showed OpenWetWare as a platform for such work. I pointed out that Wikipedia and wiki-like projects are now edited by scientists, showing the examples of A Gene Wiki for Community Annotation of Gene Function, BioGPS and ChemSpider, and ended by pointing out a couple of examples of the ways the Web allows citizen scientists to participate in massive collaborative research projects.
But probably the most important part of the talk was my discussion of the drawbacks of the Impact Factor and the current efforts to develop article-level metrics to replace it – something that will be particularly difficult to change in developing countries, yet it is essential, especially for them, to be cognizant of these developments and to move as fast as they can so as not to be left behind as the new scientific ecosystem evolves.

Twitter and Science presentation from the 140 Characters Conference

A bunch of interesting Twitterers aggregated in NYC a couple of days ago at the 140 characters conference, discussing various aspects of and uses of Twitter. One of the sessions was about Twitter and Science, led by @thesciencebabe and @jayhawkbabe. I am very jealous I could not be there, but we can all watch the video of their session:

Happy to see the last slide, with @PLoS as one of the recommended Twitter streams to follow for those interested in science.

Why or why not cite blog posts in scientific papers?

As the boundaries between formal and informal scientific communication are blurring – think of pre-print sites, Open Notebook Science and blogs, for starters – the issue of what is citable and how it should be cited is becoming more and more prominent.
There is a very interesting discussion on this topic in the comments section at the Sauropod Vertebra Picture of the Week blog, discussing the place of science blogs in the new communication ecosystem and whether a blog post can and should be cited. What counts as a “real publication”? Is the use of the phrase “real publication” in itself misleading?
You may remember that I have touched on this topic several times over the past few years (as two of my blog posts got cited in scientific papers), most recently in the bottom third (after the word “Except…”) of this post, where I wrote:

National Library of Medicine even made some kind of “official rules for citing blogs” which are incredibly misguided and stupid (and were not changed despite some of us, including myself, contacting them and explaining why their rules are stupid – I got a seemingly polite response telling me pretty much that my opinion does not matter). Anyway, how can anyone make such things ‘official’ when each journal has its own reference formatting rules? If you decide to cite a blog post, you can pretty much use your own brain and put together a citation in a format that makes sense.
The thing is, citing blogs is a pretty new thing, and many people are going to be uneasy about it, or ignorant of the ability and appropriateness to cite blogs, or just so unaware of blogs they would not even know that relevant information can be found on them and subsequently cited. So, if you see that a new paper did not cite your paper with relevant information in it, you can get rightfully angry, but if you see that a new paper did not cite your blog post with relevant information in it, you just shrug your shoulders and hope that one day people will learn….
One of the usual reasons given for not citing blog posts is that they are not peer-reviewed. Which is not true. First, if the post contained errors, readers would point them out in the comments. That is the first layer of peer review. Then, the authors of the manuscript found and read a blog post, evaluated its accuracy and relevance and CHOSE to use it as a reference. That is the second layer of peer-review. Then, the people who review the manuscript will also check the references and, if there is a problem with the cited blog post, they will point this out to the editor. This is the third layer of peer-review. How much more peer-review can one ask for?
And all of that ignores that book chapters, books, popular magazine articles and even newspaper articles are regularly cited, not to mention the ubiquitous “personal communication”. But blogs have a bad rep, because dinosaur corporate curmudgeon journalists think that Drudge and Powerline are blogs – the best blogs, actually – and thus write idiotic articles about the bad quality of blogs and other similar nonsense. Well, if you thought Powerline was the best blog (as Time did, quite intentionally, in order to smear all of the blogosphere by equating it with the very worst right-wing blathering idiotic website that happens to use blogging software), you would have a low opinion of blogs, too, wouldn’t you?
But what about one’s inability to find relevant blog posts, as opposed to research papers, to cite? Well, Google it. Google loves blogs and puts them high up in searches. If you are doing research, you are likely to regularly search your keywords not just on MedLine or Web of Science, but also on Google, in which case the relevant blog posts will pop right up. So, there is no excuse there.

But, some will say, still….a blog post is not peer-reviewed!
Remember that the institution of peer review is very recent. It developed gradually in the mid-20th century. None of the science published before that was peer reviewed. Yup, only one of Einstein’s papers ever saw peer review.
Much of the science published today is not peer reviewed either as it is done by industry and by the military and, if published at all, is published only internally (or on the other extreme: citizen science which is published on Flickr or Twitter!). But we see and tend to focus only on the academic research that shows up in academic journals – a tip of the iceberg.
If you think that the editorial process is really important, remember that manuscripts, in their final version, need to be submitted by the authors in a form and format ready for publication. It is not the job of editors to rewrite your poorly written manuscript for you. Thus, scientific papers, even those that went through several rounds of peer review on content, are, just like blog posts, self-edited on style, grammar and punctuation (and comprehensibility!). What is the difference between peer reviewers and blog commenters? There are more commenters.
Then, think about the way gradual moving away from the Impact Factor erodes the importance of the venue of publication. This kind of GlamorMagz worship is bound to vanish as article-level metrics get more broadly accepted – faster in less competitive areas of research and in bigger countries, slower in biomedical/cancer research and in small countries.
As the form of the scientific paper itself becomes more and more dynamic and individuals get recognized for their contributions regardless of the URL where that contribution happens, why not cite quality blog posts? Or quality comments on papers themselves?

Science & Technology Parks – what next?

As you may have noticed if you saw this or you follow me on Twitter/FriendFeed/Facebook, I spent half of Tuesday and all of Wednesday at the XXVI International Association of Science Parks World Conference on Science & Technology Parks in Raleigh. The meeting was actually longer (starting on Sunday and ending today), but I was part of a team and we divided up our online coverage the best we could.
Christopher Perrien assembled a team (including his son) to present (and represent) Science In The Triangle, the new local initiative. They manned a booth at which they not only showcased the website, but also had a big screen with TweetDeck showing the livetweeting of the conference by a few of us (e.g., @maninranks, @mistersugar, @IASP2009 and myself), gave out flyers explaining step-by-step how to start using Twitter, and gave hands-on instruction helping people get on Twitter and see what it can do for them.
You can look at the coverage by searching Twitter for IASP or #IASP or #IASP2009 (in some of my first tweets I misspelled it as IESP). You can search FriendFeed for all the same keywords as well, as most of the tweets got imported there.
Science In The Triangle is an online portal for news (and stories about news) about scientific research happening in the Triangle region of North Carolina. This is a way to keep the researchers in the Triangle informed about each other’s work and local science-related events, as well as a way to highlight the local research efforts for the outside audience. It is in its early stages but we’ll work on developing it more. It will be interesting to compare it to other similar portals, like NSF Science Nation and The X-Change Files (neither one of which is regional in character).
Apart from that, what was I doing at this kind of conference? Frankly, I never really thought about science/technology parks much until now. I have spent the last 18 or so years inside the sphere of influence of the Research Triangle Park and was blissfully unaware of the existence of any others. I did not know that there were dozens, perhaps hundreds of such parks around the world (including – and I should have known about it – the Technology Park Ljubljana in Slovenia). I did not know they had an official international association. I did not know that there are specified criteria as to what makes a park a park. Or that, for instance, the NCSU Centennial Campus is considered a science/technology park and is itself a member of the Association, outside of its proximity to RTP. Just learning these things was enlightening in itself, besides the actual substance of the talks.
This was also an interesting cultural experience for me. For the past several years, my conference experiences were mainly unconferences or very laid back and friendly science conferences. This was not it. This was a formal, old-style business conference with about a thousand businessmen (and very few women) wearing business suits. Even I felt compelled to wear a coat and tie and uncomfortable shoes to fit in. I also forgot that the program itself was bound to be much more formal. A session called a “panel” does not mean that 3-4 people on the stage will vigorously discuss a topic between themselves and with the vocal audience, but that 3-4 people will give PowerPoint presentations in rapid succession.
But I am not complaining – I know that the business world is even slower to evolve than the science world and that the tech world is light years ahead – as some of those talks were quite interesting and enlightening (see all the Tweets from the various sessions), and some of the people I met (and I knew almost nobody there in the beginning) were interesting as well. For example, Will Hearn prospects sites for potential corporate (or industrial park) development around the world, from Peru to Macedonia. He runs Site Dynamics and has developed software – SiteXcellerator – that provides important information about the population, education and economics of any place on earth a company may be interested in moving into.
So, what is a science/technology/industrial park? It is a place. Seriously, and importantly, it is a place, in a geographical sense. It is a piece of land which houses a collection of science, technology, business and industrial companies and organizations, all placed together because they can potentially collaborate.
The location of a Park is chosen for being conducive to business (nice tax breaks from the state, for instance) and for having a well-educated and skilled workforce in abundance (thus usually close to a big university). In some cases, the companies go where the skilled potential employees already are. In other cases, the Park members build the educational institutions needed to produce a skilled workforce out of the local population (so, for example, if biotech moves into a place abandoned by the textile industry, a college needs to be built to re-train the local workers for the new and more high-tech jobs). Some parks are focused on a single industry (e.g., pharmaceutical, or even defense in some places) or even a single product (e.g., solar panels), while other parks draw together a variety of different fields.
More than 20 Parks received silver medals last night for 25 or more years of existence. So this is not a new fad in any way. Research Triangle Park got a gold medal, as it celebrated 50 years of existence this year. It is the largest and the oldest of the science/technology parks in the world, at least among the members of the Association.
But I had a nagging thought in the back of my head. Is RTP really the oldest?
So last night when I got home I went online and started searching for “science cities” or Naukograds of the old Soviet Union. The most famous is Akademgorodok, which is possibly a couple of years older than RTP: the Wikipedia page says it was founded “in the 1950s” (RTP was founded in 1959), and this NYTimes article from 1996 says “four decades ago”, which places its founding at around 1956.
By the way, that NYTimes article (I actually remember reading it at the time – the heady old days of reading newspapers on paper!) is quite interesting. But take it with a big grain of salt – remember the source and the time: the New York Times in the 1990s was essentially Clinton’s Pravda, especially in the area of foreign policy, so any article about a foreign country is automatically suspect. I’d love to see a blog post on the topic by a respected Russian blogger, either as an antidote to or as a confirmation of that NYTimes article.
I would love to see someone cover the history and evolution of Naukograds – their strengths and weaknesses, ups and downs – during the Soviet era, during Perestroika, and in modern Russia, as well as the science cities that are now located in other countries that gained independence from the USSR during the 1990s. That would be quite a teaching moment for everyone involved, I’m sure. If I am reading it correctly, these science cities conform to the criteria for science/technology parks as the Association of Science Parks defines them. I wonder if any of them have actually become members of the Association since then?
Back to the RTP now. The plenary talk by Duke University President Richard Brodhead confirmed some of the history and some thoughts I had about the importance of RTP in North Carolina history (I missed the talk by Andrew Witty, but Anton blogged it and it appears to have been along similar lines).
UNC-Chapel Hill is the oldest state university in the country. NCSU and Duke are also very old. There are a couple of dozen smaller universities and colleges (and a couple of amazing high schools for kids with math and science talent) in the Triangle. And other science-related organizations. Then, there are many more schools around the state. From what I gather, each of those schools was on its own, pretty isolated from the others. Researchers at one school were not aware of the researchers at another.
Then, some enlightened people in the state decided that with all those schools churning out all those educated graduates, the jobs for those graduates should also be in North Carolina. Why teach them here, just to see them leave for the greener pastures? And thus the RTP was born.
It took a while for the park to grow, but it attracted or spawned some powerful and creative organizations, from RTI and Glaxo and NIEHS, through IBM and SAS, to Sigma Xi and NESCENT and Lulu.com. Instead of educated people leaving the state in search of jobs, the companies started bringing jobs to where the educated people are – and there was one place to move to: the Research Triangle Park or as close to it as possible.
But that was not the end of the story. All those companies and organizations started collaborating with (or hiring) the researchers from local universities and… this brought people from different Universities together! People from NCSU and Duke and UNC and other schools got introduced to each other this way and started collaborative research with each other. Soon formal and informal collaborations between schools and departments were put in place. As a result, science in the state boomed. Instead of isolated nodes, there was now a network.
And I still think this network is pretty unique in the USA. In other places known for top-level science, all of it is concentrated in one big city (e.g., New York City, Boston, Atlanta, or San Francisco) while the rest of the state has nothing of the sort. And if one adds the historical rivalries between the old Universities in those cities, there is not much network effect there – much more competitive than collaborative. But in North Carolina there are long-term ongoing collaborations between researchers, departments and schools all across the state, from Wilmington and Greenville, to the Triangle, Winston-Salem and Greensboro, to Charlotte, Davidson, Cullowhee, Boone and Asheville (which is why I argued that the Nature Network Triangle Group should be renamed the North Carolina Group, which you should join if you are in NC and interested in science). Just look at how geographically dispersed the NC science bloggers are, if that is any indication.
Thus RTP, besides bringing together researchers and industry (which some of us purist scientists may not like that much) also inadvertently spurred on the advancement of pure, basic research. RTP provided the central place that connected all the schools and then people in those schools could make their own connections and do whatever kind of research they liked, even the most basic kind that does not have an obvious and immediate application (of course, long-term, all that basic knowledge ends up being applied to something, it is just impossible to predict at the time what the application could be).
There are other effects of this rise in science and technology in the state. Instead of graduates leaving the state, people from other places started coming in (look at the license plates on cars on I-40, or parked on campuses – you can see everything from Ohio and Michigan and Georgia to New York and California and Alaska), further increasing the concentration of highly educated people. Knowledge, education and expertise are regarded quite highly. Thus, the reality-based party has been in charge of state politics for a long time, and last year even the national offices were deemed important enough for locals to vote out the anti-science party.
But that was the last 50 years. That is the 20th-century world. How about today and tomorrow, now that everything about the world is changing: economics, communication, environmental awareness, even the mindset of the new generations?
You know that the rapid changes in the workplace are one of my ‘hot’ topics here. How will the information revolution affect parks (and the domino effect downstream from parks – universities, jobs, infrastructure)?
This is the moment to introduce the most interesting presentation at the meeting, by Anthony Townsend. Anthony is the Research Director at The Institute for the Future, focusing on Science In Action as well as coworking. I tried to get him in touch with Brian Russell of Carrboro Creative Coworking (and if you are in the Triangle area, but Carrboro is too far for you, fill out this survey about the potential need for coworking spaces in other parts of the Triangle), but I am not sure if they found time to meet this week. For this meeting, Anthony wrote a booklet – Future Knowledge Ecosystems: The Next Twenty Years of Technology-Led Economic Development – which you can download as a PDF for free, and I think you should.
His talk was a firehose – I tried live-tweeting it, but even the speed of Twitter was not fast enough for that. I am certainly happy that he mentioned PLoS (and showed the homepage of PLoS Genetics on one of his slides) as an example of the new global, instantaneous mechanism of dissemination of scientific information. But I digress…
The main take-home message I got from his talk is that the world has fundamentally changed over the past decade or so and that science/technology parks, old or new, need to adapt to the new world or die. He provided three possible scenarios. In the first scenario, parks evolve gradually, adapting, with some delay, to fast but predictable changes. In the second scenario, the physical space of the park becomes obsolete as research and connections move online – the park dies. In the third scenario, the parks are forced to adapt quickly and non-incrementally – inventing whole new ways of doing research that combine the physical and virtual worlds.
A traditional Park is a place where different companies occupy different buildings. The interaction is at the company-to-company level.
A new Park may become a giant coworking space, where the interactions are at the individual-to-individual level. Anthony actually showed a slide of a Park in Finland where a building (or a large floor of a large building) was completely re-done and turned into a coworking space.
As more and more people are turning to telecommuting and coworking, the institution of a physical office is becoming obsolete. What does that mean?
Employees are happy because they can live where they like. They get to collaborate with whomever they find interesting and useful, not just with whoever their corporation also decided to employ. They don’t have to see their bosses or coworkers every day (or ever). Everything else can be done in cyberspace.
What does the company gain from it? Happy, productive employees. No echo-chamber effect stemming from employees only talking to each other. The ability to hire employees who are the best at what they do and, knowing that, unwilling to move from the place on Earth they like to live in. Employees who are always up on the latest trends and industry gossip due to constant mingling with others. Free PR wherever the employees are located – they all meet the locals and answer the inevitable question “What do you do?”
The phenomenon of company loyalty is quickly fading. Most people do not expect to work for a single employer all their lives. They will work several jobs at a time, changing jobs as needed, sometimes jumping from project to project. Getting together with other people in order to get a particular job done, then moving on.
This will actually weaken the corporation – making everyone but controlling CEOs and CFOs happy. Corporations will have to become fluid, somewhat ad hoc, very flexible and adaptable, as their people will be constantly circulating in and out.
In such a world, a sci/tech Park will have to be where the people want to live – a nice place, with nice climate, nice culture, good school systems, safe, etc.
In such a world, a single-payer health-care system that is not tied to the employer will become a necessity.
Sure, there is the production part of companies that actually produce ‘stuff’, and they will also be in such Parks so their own (and others’) blue-collar and white-collar employees can routinely interact (Anthony showed a slide of a production line literally weaving through the offices, forcing workers, bureaucrats and R&D personnel to get to know each other and watch each other at work, thus coming to creative solutions together).
As for science itself, a Park may be something like a Science Motel, a place where both affiliated and freelance scientists can come together, exchange ideas and information, work together, use common facilities and equipment, regardless of who their official employers are and where those are located geographically.
Sounds like something out of science fiction? That kind of future is right around the corner.
The interaction at the organization-to-organization level, the cornerstone of Parks and the local economic growth in the 20th century, has been discussed quite a lot at the meeting, as expected. Especially collaboration between Universities and industry (and sometimes government).
The park-to-park level of interaction, essential in the global economy, and the reason the Association of Science Parks exists, was discussed quite a lot, of course.
But I did not get the vibe that the individual-to-individual level was on many people’s minds. The idea that the corporation will have to become less coherent and/or hierarchical because the new generations will insist on individual-to-individual collaboration did not get much airplay. The moment when people realize that information wants to be free and that, with current technology, it can be set free will be a challenge to corporations, especially for keeping trade secrets. Perhaps there will be fewer trade secrets as the new mindset sets in – a network can do more, and better, than competing units.
This was brought splendidly to us all last night, at the end of the Gala (no, not the amazing Tri-chocolate mousse dessert!), by the Carolina Ballet dancers, each bringing his or her own individual skills and talent (and it was visible that they all differ – some are more athletic, some more elegant, the #1 ballerina is top-world-class) and working together to produce a collective piece of beauty.

Science Online London 2009

You have proven your fitness, evolutionarily speaking, not when you have babies, but when your babies have babies. So I am very excited that my babies – the three science blogging conferences here in the Triangle so far – have spawned their own offspring. Not once, but twice. The London franchise will happen again this year. And just like we changed the name from Science Blogging Conference into ScienceOnline, so did they.
Science Online London 2009 will take place on Saturday August 22, 2009 at the Royal Institution of Great Britain in London, co-hosted by Nature Network, Mendeley and the Royal Institution of Great Britain. The organizers are Matt Brown (Nature Network), Martin Fenner (Hannover Medical School), Richard P. Grant (F1000), Victor Henning (Mendeley), Corie Lok (Nature Network) and Jan Reichelt (Mendeley).
To help build the program, suggest speakers/sessions, to register and organize your trip, or to participate virtually, you should join the Nature Network forum and the FriendFeed room. Follow @soloconf and the #soloconf_09 hashtag on Twitter.
There will probably be a registration fee to cover the costs, likely in the range of £10, so nothing really expensive. They are also looking for sponsors. And if anyone wants to sponsor my trip, I’ll go ;-)

Article-Level Metrics (at PLoS and beyond)

Pete Binfield, the Managing Editor of PLoS ONE, presented a webinar about article-level metrics to NISO – see also the blog post about it:

Commenting on scientific papers

There have been quite a few posts over the last few days about commenting, in particular about posting comments, notes and ratings on scientific papers. But this is also related to commenting on blogs and social networks, commenting on online newspaper articles, the question of moderation vs. non-moderation, and the question of anonymity vs. pseudonymity vs. RL identity.
You may want to re-visit this old thread first, for introduction on commenting on blogs.
How a 1995 court case kept the newspaper industry from competing online by Robert Niles goes back into history to explain why the comments on the newspaper sites tend to be so rowdy, rude and, frankly, idiotic. And why that is bad for the newspapers.
In Why comments suck (& ideas on un-sucking them), Dan Conover has some suggestions how to fix that problem.
Mr. Gunn, in Online Engagement of Scientists with the literature: anonymity vs. ResearcherID tries to systematize the issues in the discussion about commenting on scientific papers which has the opposite problem from newspapers: relatively few people post comments.
Christina Pikas responds in What happens when you cross the streams? and Dave Bacon adds more in Comments?…I Don’t Have to Show You Any Stinkin’ Comments!
You should now go back to the analysis of commenting on BMC journals and on PLoS ONE, both by Euan Adie.
Then go back to my own posts on everyONE blog: Why you should post comments, notes and ratings on PLoS ONE articles and Rating articles in PLoS ONE.
Then, follow the lead set by Steve Koch and post a comment – break the ice for yourself.
Or see why T. Michael Keesey posted a couple of comments.
You may want to play in the Sandbox first.
I am watching all the discussions on the blog posts (as well as on FriendFeed) with great professional interest, of course. So, what do you think? Who of the above is right/wrong and why? Is there something in Conover’s suggestions for newspapers that should be useful for commenting on scientific papers? What are your suggestions?

My interviews with Radio Belgrade

Last year in May, when I visited Belgrade, I did interviews with Radio Belgrade, talking about science publishing, Open Access, science communication and science blogging. The podcasts of these interviews – yes, they are in Serbian! – are now up:
Part 1
Part 2
I know that this blog has some ex-Yugoslavs in its regular audience, people who can understand the language. I hope you enjoy the interviews and spread the word if you like them.

Night, night, Ida…

Some 47 million years ago, Ida suffocated in the volcanic ashes. I feel the same way at the end of this week – I need to get some air. And some sleep.
But watching the media and blog coverage of the fossil around the clock for a few days was actually quite interesting, almost exhilarating – and there are probably not many people out there who, like me, read pretty much everything anyone said about it this week. Interestingly, my own feel for the coverage differed depending on whether I took the angle of a scientist, of an interested student of the changes in the media ecosystem, or of a PLoS employee. It is far too early to have any clear thoughts on it at this point.
But if you want to catch up with me, I have put together a sampling of the blog and media coverage over on the everyONE blog.

Call for articles: User-led Science, Citizen Science, Popular Science

A special issue of JCOM, Journal of Science Communication, has just issued a call for submissions, with the deadline moved to June 1st, 2009:

Science is increasingly being produced, discussed and deliberated with cooperative tools by web users and without the institutionalized presence of scientists. “Popular science” and “Citizen science” are two of the traditional ways of referring to grassroots science produced outside the walls of laboratories. But the internet has changed the way of collecting and organising the knowledge produced by people – peers – who do not belong to the established scientific community. In this issue we want to discuss:
- How web tools are changing and widening this way of participating in the production of scientific knowledge. Does this increase in participation constitute a real shift towards democratizing science or, on the contrary, is it merely rhetoric that does not affect the asymmetrical relationships between citizens and institutions?
- The ways in which both academic and private scientific institutions are appropriating this knowledge and its value. Do we need a new model to understand these ways of production and appropriation? Are they part of a deeper change in productive paradigms?
We would like to collect both theoretical contributions and research articles which address, for example, case studies in social media and science, peer production, the role of private firms in exploiting web arenas to collect scientific/medical data from their customers, online social movements challenging communication incumbents, and web tools for development.
Interested authors should submit an extended abstract of no more than 500 words (in English) to the issue editor by May 15, 2009. We will select three to five papers for inclusion in this special issue. Abstracts should be sent to the JCOM’s editorial office (jcom-eo@jcom.sissa.it) by email and NOT via the regular submission form.

You may remember that I mentioned this journal before. Of course, if you look closely, you’ll find an article by me, and two articles that mention me in there – all written during or right after my visit to Trieste last year. So, start writing!

Support the UCLA Pro-Test tomorrow and get educated about the use of animals in research

The UCLA Pro-Test is tomorrow. If you live there – go. If not, prepare yourself for inevitable discussions – online and offline – by getting informed. And my fellow science bloggers have certainly provided plenty of food for thought on the issue of use of animals in research.
First, you have to read Janet Stemwedel’s ongoing series (5 parts so far, but more are coming) about the potential for dialogue between the two (or more) sides:
Impediments to dialogue about animal research (part 1).:

Now, maybe it’s the case that everyone who cares at all has staked out a position on the use of animals in scientific research and has no intentions of budging from it. But in the event that there still exists a handful of people who are thinking the issues through, or are interested in understanding the perspectives of those who hold different views about research with animals — in the event that there are still people who would like to have a dialogue — we need to understand what the impediments to this dialogue are and find ways to work around them.

Impediments to dialogue about animal research (part 2).:

Research with animals seems to be a topic of discussion especially well-suited to shouting matches and disengagement. Understanding the reasons this is so might clear a path to make dialogue possible. Yesterday, we discussed problems that arise when people in a discussion start with the assumption that the other guy is arguing in bad faith. If we can get past this presumptive mistrust of the other parties in the discussion, another significant impediment rears its head pretty quickly: Substantial disagreement about the facts.

Impediments to dialogue about animal research (part 3).:

As with yesterday’s dialogue blocker (the question of whether animal research is necessary for scientific and medical advancement), today’s impediment is another substantial disagreement about the facts. A productive dialogue requires some kind of common ground between its participants, including some shared premises about the current state of affairs. One feature of the current state of affairs is the set of laws and regulations that cover animal use — but these laws and regulations are a regular source of disagreement: Current animal welfare regulations are not restrictive enough/are too restrictive.

Impediments to dialogue about animal research (part 4).:

As we continue our look at ways that attempted dialogues about the use of animals in research run off the rails, let’s take up one more kind of substantial disagreement about the facts. Today’s featured impediment: Disagreement about whether animals used in research experience discomfort, distress, pain, or torture.

Impediments to dialogue about animal research (part 5).:

Today we discuss an impediment to dialogue about animals in research that seems to have a special power to get people talking past each other rather than actually engaging with each other: Imprecision about the positions being staked out. Specifically, here, the issue is whether the people trying to have a dialogue are being precise in laying out the relevant philosophical positions about animals — the position they hold, the position they’re arguing against, the other positions that might be viable options.

Also check Janet’s older posts on the topic.
Mark C. Chu-Carroll:
Can simulations replace animal testing? Alas, no.:

I don’t want to get into a long discussion of the ethics of it here; that’s a discussion which has been had hundreds of times in plenty of other places, and there’s really no sense repeating it yet again. But there is one thing I can contribute to this discussion. One of the constant refrains of animal-rights protesters arguing against animal testing is: “Animal testing isn’t necessary. We can use computer simulations instead.”
As a computer scientist who’s spent some time studying simulation, I feel qualified to comment on that aspect of this argument.
The simplest answer to that is the old programmer’s mantra: “Garbage in, Garbage out”.
To be a tad more precise, like any other computer program, a simulation can only do what you tell it to. If you don’t already know how something works, you can’t simulate it. If you think you know how something works but you made a tiny, minuscule error, then the simulation can diverge dramatically from reality.

DrugMonkey:
Tilting at Animal Rights Activist Windmills:

As we are in the midst of a traditional week-o-ARA-wackaloonery and two days away from the first US Pro-Test rally (at UCLA) this is all highly topical. Why not take some time to do a little bit of reading and thinking about these issues? After all, it is only the continued health and well being of yourself, your family, your friends and neighbors that is at stake.

Virtual IACUC: Reduction vs. Refinement:

One of the thornier problems in thinking about the justification of using animals is when two or more laudable goals call for opposing solutions. For today’s edition of virtual IACUC we will consider what to do when Refinement calls for the use of more animals, in obvious conflict with Reduction.

FBI Places Alleged ARA Terrorist on Most Wanted List:

The important thing is the setting of priority. These acts, like the March 2009 bombing of neuroscientist J. David Jentsch’s car, are fundamental crimes against our rule of law as well as being a specific attack on scientific progress and the development of life-saving medical advances. With this announcement, and all of the publicity and news surrounding the UCLA Pro-Test rally in support of animal research scheduled for tomorrow…well, at the very least the wind has been taken out of the ARA sails during one of their big PR weeks.

Also check older posts on the topic on the DrugMonkey blog.
Speakingofresearch:
Why are we marching?:

At a banner making session today (Monday) I decided to ask a few people why they were planning on attending Wednesday’s rally. Here are a handful of responses I got:

Scientists dare to defend research:

As students and scientists at UCLA stand up to support lifesaving medical research, researchers at other institutions are offering their support for the cause. From Wake Forest University to the University of Arizona, from UC Davis to the University of South Dakota, researchers from across the United States have been united in their support for UCLA Pro-Test.

Nick Anthis:
New UCLA Pro-Test Chapter Announces April 22nd Rally:

Unfortunately, researchers at UCLA have become a major target of animal rights extremists over the last few years. This has included various incidents of destruction of property aimed at specific scientists, and this has coincided with a general rise in animal rights extremist activity in the US.

Also check Nick’s older posts on the topic.
You can also see what I have written in the past on this topic.
Here is the official NIH Statement Deploring Terrorism Against Researchers:

It is important that everyone know that all animals used in federally-funded research are protected by laws, regulations, and policies to ensure they are used in the smallest numbers possible and with the greatest commitment to their comfort and welfare. The search for cures for devastating diseases depends on cumulative evidence gained from quality research. The appropriate use of animals in medical research has enabled the development of successful therapies and preventive measures for a wide range of human diseases such as polio, Parkinson’s disease, and hepatitis A and B.

Check out the UCLA Pro-Test page and show your support (and get informed) by joining the UCLA Pro-Test Facebook group and the more general Pro-Test – Supporting Animal Research group.

ScienceOnline’09 – Saturday 4:30pm and beyond: the Question of Power

I know it’s been a couple of months now since the ScienceOnline’09 and I have reviewed only a couple of sessions I myself attended and did not do the others. I don’t know if I will ever make it to reviewing them one by one, but other people’s reviews on them are under the fold here. For my previous reviews of individual sessions, see this, this, this, this and this.
What I’d like to do today is pick up on a vibe I felt throughout the meeting. And that is the question of Power. The word has a number of dictionary meanings, but they are all related. I’ll try to relate them here and hope you correct my errors and add to the discussion in the comments here and on your own blogs.
Computing Power
Way back in history, scientists (or natural philosophers, as they were called then) did little experimentation and a lot of thinking. They kept most of their knowledge, information and ideas inside their heads (until they wrote them down and published them in book form). They could easily access them, but there was definitely a limit to how much they could keep and how many different pieces they could access simultaneously.
A scientist who went out and got a bunch of notebooks and pencils and started writing down all that stuff in an organized and systematic manner could preserve and access much more information than others, thus be able to perform more experiments and observations than others, thus gaining a competitive advantage over others.
Electricity and gadgets allowed for even more – some degree of automation in data-gathering and storage. For instance, in my field, there is only so much an individual can do without automation. How long can you stay awake and go into your lab and do measurements on a regular basis? I did some experiments in which I took measurements every hour on the hour for 72 hours! That’s tough! All those 45-minute sleep bouts interrupted by 15-minute measurement sessions, even with a couple of friends helping occasionally, were very exhausting.
But using an Esterline-Angus apparatus automated data-gathering and allowed researchers to sleep, thus enabling them to collect long-term behavioral data (collecting continuous recordings for weeks, months, even years) from a large number of animals. This enabled them to do much more with the same amount of time, space, money and manpower. This gave them a competitive advantage.
But still, Esterline-Angus data came out on paper rolls. Those, one had to cut into strips, glue onto cardboard, and photograph in order to make an actogram, then use manual tools like rulers, compasses and protractors to quantify and calculate the results (my PI did all that early in his career and kept the equipment in the back room, to be shown to us whenever we complained that we were asked to do too much).
Having a computer made this much easier: automated data-collection by a computer, analyzed and graphed on that same computer, inserted into manuscripts written on that same computer. A computer can contain much more information than a human brain and, in comparison to notebooks, it is so much easier and quicker to search for and find the relevant information. That was definitely a competitive advantage as one could do many more experiments with the same amounts of time, space, money and manpower.
Enter the Web: it is not just one’s own data that one can use, but also everyone else’s data, information, ideas, publications, etc. Science moves from a collection of individual contributions to a communal (and global) pursuit – everyone contributes and everyone uses others’ contributions. This has the potential to exponentially speed up the progress of scientific research.
For this vision to work, all the information has to be freely available to all, as well as machine-readable – thus the necessity of Open Access (several sessions on this topic, of course) and Open Source. This sense of the word Power was used in the sessions on the ‘Semantic Web in Science’ and ‘Community intelligence applied to gene annotation’, and in several demos. It also explains why, as discussed in the session on ‘Social Networking for Scientists’, it is the information (data) that is at the core, unlike on Facebook. Data finds data. Subsequently, people will also find people. Trying to put people together first will not work in science, where information is at the core and personalities are secondary.
Power Relationships
In the examples above, you can already see a hierarchy based on power. A researcher who is fully integrated into the scientific community online and uses online databases and resources and gives as much as he/she takes, will have an advantage over an isolated researcher who uses the computer only offline and who, in comparison, has a competitive advantage over a person who uses mechanical devices instead of computers, who in turn does better than a person who only uses a pencil and paper, who beats out the guy who only sits (in a comfy armchair, somewhere in the Alps) and thinks.
Every introduction of new technologies upsets the power structure, as the former Top Dogs in a field may not be the quickest to adopt them, so they bite the dust when their formerly lesser colleagues do start using the new-fangled stuff. Again, it is important to note here that “generation” is a worldview, not an age: both the people who are quick to adopt new ways and the curmudgeons who don’t can be found in all age groups.
Let’s now try to think of some traditional power relationships and the ways the Web can change them. I would really like it if people went back to my older post on The Shock Value of Science Blogs for my thoughts on this, especially regarding the role of language in disrupting power hierarchies (something also covered in our Rhetorics In Science session).
People on the top of the hierarchy are often those who control a precious resource. What are the precious resources in science? Funding. Jobs. Information. Publicity.
Funding and Jobs
Most of the funding in most countries comes from the government. But what if some of that funding were distributed equally? That upsets the power structure to some extent. Sure, one has to use the funds well in order to get additional (and bigger) funds, but still, this puts more people on a more even footing, giving them an initial trigger which they can use wisely or not. Their success will then depend more on the quality of their own work than on external factors.
Then, the Web also enables many more lay people to become citizen scientists. They do not even ask for funding, yet a lot of cool research gets done. With no control of the purse by government, industry, military or anyone else except for people who want to do it.
Like in Vernor Vinge’s Rainbows End, there are now ways for funders and researchers to find each other directly, through services ranging from Mechanical Turk to Innocentive. The money changes hands on a per-need basis, leaving the traditional purse-holders out of the loop.
Information
As more and more journals and databases go Open Access, it is not just the privileged insiders who can access the information. Everyone everywhere can get the information and subsequently do something with it: use it in one’s own research, apply it to real-world problems (e.g., practicing medicine), or disseminate it further, e.g., in an educational setting.
Publicity
In a traditional system, getting publicity was expensive. It took a well-funded operation to be able to buy the presses, paper, ink, delivery trucks etc. Today, everyone with access to electricity, a computer (or even a mobile device like a cell phone) and online access (all three together are relatively cheap) can publish, with a single click. Instead of pre-publication filtering (editors) we now have post-publication filtering (some done by machines, some by humans). The High Priests who decided what could be published in the first place are now reduced to checking the spelling and grammar. It is the community as a whole that decides what is worth reading and promoting, and what is not.
In a world in which sources can go directly to the audience, including scientists talking directly to their own audiences, the role of the middle-man is much weakened. Journal editors, magazine editors, newspaper editors, even book editors (and we had a separate session on each of these topics), while still having the power to prevent you from publishing in elite places, cannot any more prevent you from publishing at all. No book deal? Publish with Lulu.com. No magazine deal? Write a blog. No acceptance into a journal? Do Open Notebook Science to begin with, to build a reputation, then try again. If your stuff is crap, people will quickly tell you, and will tell others your stuff is crap, and will vote with their feet by depriving you of links, traffic, audience and respect.
You can now go directly to your audience. You can, by consistently writing high quality stuff, turn your own website or blog into an “elite place”. And, as people are highly unlikely to pay for any content online any more, everything that is behind a pay wall will quickly drop into irrelevance.
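The post-publication filtering described above can be pictured as a simple ranking over community signals. Here is a toy sketch of the idea – the posts, signal names and weights are all invented for illustration, not taken from any real system:

```python
# Toy post-publication filter: everything is "published" first, and the
# community, not an editor, decides afterwards what surfaces, by ranking
# items on signals such as inbound links, traffic, and reader endorsements.
# The weights below are arbitrary, chosen only for this illustration.

WEIGHTS = {"links": 3.0, "traffic": 1.0, "endorsements": 2.0}

def community_score(signals):
    """Weighted sum of community signals for one published item."""
    return sum(WEIGHTS[k] * signals.get(k, 0) for k in WEIGHTS)

posts = {
    "solid-data-post": {"links": 12, "traffic": 40, "endorsements": 9},
    "crap-post":       {"links": 0,  "traffic": 5,  "endorsements": 0},
    "decent-post":     {"links": 3,  "traffic": 20, "endorsements": 2},
}

# Ranking happens after publication, based purely on community response.
ranked = sorted(posts, key=lambda p: community_score(posts[p]), reverse=True)
print(ranked)  # ['solid-data-post', 'decent-post', 'crap-post']
```

The point of the sketch is only the ordering of the steps: publish first, filter later, with the filter built from the community’s aggregate behavior rather than a gatekeeper’s prior judgment.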
Thus, one can now gain respect, reputation and authority through one’s writing online: in OA journals, on a blog, in comment threads, or by commenting on scientific papers. As I mentioned in The Shock Value of Science Blogs post, this tends to break up the Old Boys’ Clubs, allowing women, minorities and people outside of Western elite universities to become equal players.
Language is important. Every time an Old Boy tries to put you down and tell you to be quiet by asking you to “be polite”, you can blast back with a big juicy F-word. His aggressive response to this will just expose him for who he is and will detract from his reputation – in other words, every time an Old Boy makes a hissy fit about your “lack of politeness” (aka preserving the status quo in which he is the Top Dog), he digs himself deeper and becomes a laughingstock. Just like Jon Stewart and Stephen Colbert do to politicians with dinosaur ideas and curmudgeon journalists who use the He Said She Said mode of reporting. It is scary to do, but it is a win-win for you long-term. Forcing the old fogies to show their true colors will speed up their decline into irrelevance.
Another aspect of Power on the Web is that a large enough group of people writing online can have effects that were impossible in earlier eras. For instance, it is possible to bait a person into ruining his reputation on Google. It is also possible to affect legislation (yes, bloggers and readers, by calling their offices 24/7, persuaded the Senators to vote Yes). This is a power we are not always aware of when we write something online, and we need to be more cognizant of it and use it wisely (something we discussed in the session about Science Blogging Networks: how being on such a platform increases one’s power to do good or bad).
The session on the state of science in developing and transition countries brought out the reality that in some countries the scientific system is so small, so sclerotic, so set in its ways and so dominated by the Old Boys, that it is practically impossible to change it from within. In that case one can attempt to build a separate, parallel scientific community which will, over time and through the use of modern tools, displace the old system. In their example of Serbia, if the Old Boys are all at the University of Belgrade, then people working in private institutes, smaller universities, or even brand new private universities (hopefully with some consistent long-term help from the outside) can build a new scientific community and leave the old one in the dust.
Education
Teachers used to be founts of knowledge. This was their source of power. But today, the kids have all the information at their fingertips. This will completely change the job description of a teacher. Instead of a source of information, the teacher will be a guide to the use of information: evaluation of the quality of information. Thus, instead of a top-down approach, the teachers and students will become co-travellers through the growing sea of information, learning from one another how to navigate it. This is definitely a big change in power relationship between teachers and their charges. We had three sessions on science education that made this point in one way or another.
And this is a key insight, really. Not just in education, but also in research and publishing, the Web is turning a competitive world into a collaborative world. Our contributions to the community (how much we give) will be more important for our reputation (and thus job and career) than products of our individual, secretive lab research.
Yet, how do we ensure that the change in the power structure becomes more democratic, and not just a replacement of one hierarchy with another?

Semantic Enhancements of a Research Article

In today’s PLoS Computational Biology: Adventures in Semantic Publishing: Exemplar Semantic Enhancements of a Research Article:

Scientific innovation depends on finding, integrating, and re-using the products of previous research. Here we explore how recent developments in Web technology, particularly those related to the publication of data and metadata, might assist that process by providing semantic enhancements to journal articles within the mainstream process of scholarly journal publishing. We exemplify this by describing semantic enhancements we have made to a recent biomedical research article taken from PLoS Neglected Tropical Diseases, providing enrichment to its content and increased access to datasets within it. These semantic enhancements include provision of live DOIs and hyperlinks; semantic markup of textual terms, with links to relevant third-party information resources; interactive figures; a re-orderable reference list; a document summary containing a study summary, a tag cloud, and a citation analysis; and two novel types of semantic enrichment: the first, a Supporting Claims Tooltip to permit “Citations in Context”, and the second, Tag Trees that bring together semantically related terms. In addition, we have published downloadable spreadsheets containing data from within tables and figures, have enriched these with provenance information, and have demonstrated various types of data fusion (mashups) with results from other research articles and with Google Maps. We have also published machine-readable RDF metadata both about the article and about the references it cites, for which we developed a Citation Typing Ontology, CiTO (http://purl.org/net/cito/). The enhanced article, which is available at http://dx.doi.org/10.1371/journal.pntd.0000228.x001, presents a compelling existence proof of the possibilities of semantic publication. 
We hope the showcase of examples and ideas it contains, described in this paper, will excite the imaginations of researchers and publishers, stimulating them to explore the possibilities of semantic publishing for their own research articles, and thereby break down present barriers to the discovery and re-use of information within traditional modes of scholarly communication.

Related: Creative Re-Use Demonstrates Power of Semantic Enhancement:

A Review article published today in PLoS Computational Biology describes the process of semantically enhancing a research article to enrich content, providing a striking example of how open-access content can be re-used and how scientific articles might take much greater advantage of the online medium in future.
Dr. David Shotton and his team from Oxford University spent about ten weeks enriching the content of an article published in PLoS Neglected Tropical Diseases, the results of which can be seen online here.
The enhanced version includes features like highlighted tagging which you can turn on or off (tagged terms include disease names, organisms, places, people, taxa), citations which include a pop-up containing the relevant quotation from the cited article, document and study summaries, tag clouds and citation analysis…
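To make the idea of machine-readable citation metadata a bit more concrete, here is a minimal sketch of what a CiTO-typed citation looks like when written out as RDF in Turtle syntax. The two DOIs are hypothetical placeholders, and the snippet builds the Turtle as plain strings rather than using an RDF library; `cito:citesAsEvidence` is, to my understanding, one of the typed-citation properties CiTO defines, but treat the exact term choice as an assumption:

```python
# Emit a tiny machine-readable citation record in Turtle, typed with the
# CiTO ontology (http://purl.org/net/cito/). Instead of a bare "A cites B",
# the citation carries a meaning: here, "A cites B as evidence".

PREFIXES = "\n".join([
    "@prefix cito: <http://purl.org/net/cito/> .",
    "@prefix dcterms: <http://purl.org/dc/terms/> .",
])

def cito_triple(citing_doi, cited_doi, relation="cito:cites"):
    """Return one Turtle statement: citing article -> relation -> cited article."""
    subject = f"<http://dx.doi.org/{citing_doi}>"
    obj = f"<http://dx.doi.org/{cited_doi}>"
    return f"{subject} {relation} {obj} ."

doc = PREFIXES + "\n" + cito_triple(
    "10.1371/example.0000001",        # hypothetical citing article
    "10.1371/example.0000002",        # hypothetical cited article
    relation="cito:citesAsEvidence",  # typed citation: cited work supplies evidence
)
print(doc)
```

Because the output is plain RDF, any downstream tool that understands Turtle can aggregate such statements across articles, which is exactly what makes "Citations in Context" and citation analysis possible by machine.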

Why eliminate the peer-review of baseline grants?

About a week ago, my brother sent me a couple of interesting papers about funding in science, one in Canada, the other in the UK. I barely had time to skim the abstracts at the time, but thought I would put them up for discussion online and come back to them later. So a few days ago I posted the link, abstract and brief commentary on the article: Cost of the NSERC Science Grant Peer Review System Exceeds the Cost of Giving Every Qualified Researcher a Baseline Grant:

Abstract: Using Natural Science and Engineering Research Council Canada (NSERC) statistics, we show that the $40,000 (Canadian) cost of preparation for a grant application and rejection by peer review in 2007 exceeded that of giving every qualified investigator a direct baseline discovery grant of $30,000 (average grant). This means the Canadian Federal Government could institute direct grants for 100% of qualified applicants for the same money. We anticipate that the net result would be more and better research since more research would be conducted at the critical idea or discovery stage. Control of quality is assured through university hiring, promotion and tenure proceedings, journal reviews of submitted work, and the patent process, whose collective scrutiny far exceeds that of grant peer review. The greater efficiency in use of grant funds and increased innovation with baseline funding would provide a means of achieving the goals of the recent Canadian Value for Money and Accountability Review. We suggest that developing countries could leapfrog ahead by adopting from the start science grant systems that encourage innovation.

A long and interesting discussion ensued in the comments, with the author of the paper himself showing up and offering to send reprints to those who are interested. More discussion also happened on FriendFeed here and here.
Several other bloggers also posted about it, and discussions happened on their posts as well. T. Ryan Gregory posted about it both on his Nature Network blog Pyrenaemata and on his indy blog Genomicron.
Larry Moran was largely in agreement with the article, but some commenters were not, including Rosie Redfield whose comment motivated T. Ryan Gregory to post again, just to explain his disagreement with Rosie.
Jonathan Eisen pointed to a related post of his, and Cameron Neylon to one of his. Finally, Zen Faulkes used it as a starting point for three posts: here, here and here.
I have finally managed to find time to read the paper myself, so I think I can say something semi-intelligent about it. It became obvious that many who commented had not actually read the paper, just the Abstract, and thus were not in a position to respond to it intelligently (the paper actually answers, clearly and in detail, all the questions and complaints voiced by the commenters). The abstract is just, well… an abstract. The paper is full of thought-provoking ideas and really needs to be read in its entirety.
Finally, my brother showed up in the comments and I would like to use his comment as a starting point today. That is – once you read the actual paper (ask for a reprint if you cannot access it), the linked blog posts and comment threads. I’ll be right here, waiting for you to come back….
I am assuming that the Canadian funding system is not very dissimilar to those in many other countries, including the USA – there is a central governmental body that gets its budget from the government and uses committees of unpaid peer-reviewers to decide how the money will be allocated to researchers. The paper explains in detail at least a dozen reasons why and how this system is flawed: how it stifles truly innovative science, repels students from entering science, disproportionately pushes women out of science, encumbers students and postdocs with tasks they are not supposed to be doing (e.g., clerical or technical), introduces an element of uncertainty about one’s livelihood, gives universities excuses to get out of research funding completely, shafts teaching and outreach as criteria for promotion, etc. But the clincher, for politicos at least, is that this system costs more than if a set sum of about $30,000 (Canadian) were given to every academic employed by a Canadian university who asks for it in any given year.
Yes, giving every Canadian scientist who already has a job and a lab this small amount of money no-questions-asked, geared toward innovative exploratory research, costs the government less than going through the peer-review system that gives money to some and no money to others (not to mention the reinforcement of the Old Boys Club this way).
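The headline arithmetic is easy to check with the two figures quoted in the abstract; here is a back-of-the-envelope sketch in which the per-application cost ($40,000) and the average grant ($30,000) come from the paper, while the number of qualified applicants is a made-up round number purely for illustration:

```python
# Back-of-the-envelope check of the paper's claim, using the abstract's figures:
# preparing and peer-reviewing one grant application costs ~$40,000 (CAD),
# while the average discovery grant awarded is ~$30,000 (CAD).

COST_PER_APPLICATION = 40_000   # CAD, preparation + review overhead (from the abstract)
BASELINE_GRANT = 30_000         # CAD, average discovery grant (from the abstract)
applicants = 10_000             # illustrative number of qualified applicants (made up)

review_system_cost = applicants * COST_PER_APPLICATION
baseline_system_cost = applicants * BASELINE_GRANT

print(review_system_cost)    # 400000000
print(baseline_system_cost)  # 300000000

# Under these per-capita figures, running the review machinery costs more
# than simply funding every qualified applicant at the baseline level,
# regardless of how many applicants there are.
assert review_system_cost > baseline_system_cost
```

Note that because the comparison is per applicant ($40,000 vs. $30,000), the conclusion does not depend on the illustrative applicant count at all; any positive number of applicants gives the same verdict.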
This does not mean, in their proposal, that all of the Canadian money earmarked for science would be given this way – this is still just a small part of it. If you have a big lab or do expensive research and need to apply for much bigger grants, that would be done by the traditional peer review. But in order to get to the point where you have a good proposal, you need to have some neat stuff done (the “preliminary data”). With the proposed system, that preliminary data can be really exciting or revolutionary, something that, as an initial proposal, would never fly by peers.
Would people send out proposals for crap? Some would, I’m sure, but that doesn’t matter. Most would not. Scientists are curious about nature and would like to test their hunches. Some will flop, some will be amazing – it is the latter that makes this new system worth doing, as they may never be done otherwise. Anyway, how many $5,000,000 grants produced amazing stuff? All? I. Don’t. Think. So.
Where does quality control come from? First, it already came from universities who hired these researchers out of hundreds of applicants for each position. Aren’t they going to trust those best-of-the-best they hired? Second, the research itself will be judged after it is done – at conferences, in journal articles, and in post-publication metrics (citations, downloads, online chatter, etc., including perhaps a Nobel Prize here and there). If it is not up to standards, $30,000 Canadian dollars is not a big price to pay, and even the negative or inconclusive results can be useful to others if the thinking is original. If it is up to standards or more, that person will now have something exciting to base a bigger grant proposal on.
This also goes back to something I like to rant about (oh yes, go read that again!) – the bandwagon of Big Science. Biology, for example, does not equal running gels (hmmm, that’s chemistry, isn’t it?). But many people are given that impression. “No gels – no grants, no papers, no career” (yup, I was told that a few years ago). Unless you already have a big molecular lab, this small grant will not build you one. Instead, you can do some really cool stuff at other levels – from tissues up to ecosystems and everything in-between, including computer modeling. You can use it to travel to some jungle that has never seen a Westerner and see what species live there – not hypothesis-testing, but exploratory and exciting, definitely useful, yet not something that is easily funded under the current system. If your proposal includes research on live vertebrates, you first have to get IACUC approval, something that will take 6-9 months of extremely frustrating fighting and proposal-modification – IACUC review is the toughest peer-review known to science: if they say Yes to your proposal, no other committee of peers can add any more wisdom to it. And if you decide to work on invertebrates – it is much cheaper.
Another paper looks at this from another perspective – four stages of science. The grants, especially the big ones, disproportionately target science in Stage 3. The small baseline grants would target primarily Stage 1, the exciting, innovative stage – and this is a Good Thing. They could also more easily fund research in Stage 2 and Stage 4, also a Good Thing – from the article:

In this article I propose the classification of the evolutionary stages that a scientific discipline evolves through and the type of scientists that are the most productive at each stage. I believe that each scientific discipline evolves sequentially through four stages. Scientists at stage one introduce new objects and phenomena as subject matter for a new scientific discipline. To do this they have to introduce a new language adequately describing the subject matter. At stage two, scientists develop a toolbox of methods and techniques for the new discipline. Owing to this advancement in methodology, the spectrum of objects and phenomena that fall into the realm of the new science are further understood at this stage. Most of the specific knowledge is generated at the third stage, at which the highest number of original research publications is generated. The majority of third-stage investigation is based on the initial application of new research methods to objects and/or phenomena. The purpose of the fourth stage is to maintain and pass on scientific knowledge generated during the first three stages. Groundbreaking new discoveries are not made at this stage. However, new ways to present scientific information are generated, and crucial revisions are often made of the role of the discipline within the constantly evolving scientific environment. The very nature of each stage determines the optimal psychological type and modus operandi of the scientist operating within it. Thus, it is not only the talent and devotion of scientists that determines whether they are capable of contributing substantially but, rather, whether they have the ‘right type’ of talent for the chosen scientific discipline at that time. 
Understanding the four different evolutionary stages of a scientific discipline might be instrumental for many scientists in optimizing their career path, in addition to being useful in assembling scientific teams, precluding conflicts and maximizing productivity. The proposed model of scientific evolution might also be instrumental for society in organizing and managing the scientific process. No public policy aimed at stimulating the scientific process can be equally beneficial for all four stages. Attempts to apply the same criteria to scientists working on scientific disciplines at different stages of their scientific evolution would be stimulating for one and detrimental for another. In addition, researchers operating at a certain stage of scientific evolution might not possess the mindset adequate to evaluate and stimulate a discipline that is at a different evolutionary stage. This could be the reason for suboptimal implementation of otherwise well-conceived scientific policies.

Now, the proposal in this paper is quite definitive about allowing only researchers employed by universities to apply for such grants. But my mind instantly started thinking about those outside. How about amateur scientists? How about people not affiliated with academia? How about distributed citizen science projects? Those are usually Stage 1 or Stage 2 projects, attractive to a particular kind of researcher (myself included – don’t try to lure me into a big Stage 3 lab). If I wanted to get some crayfish or spiders (or even birds, if a local IACUC would let me) and do experiments at home, this kind of small grant would be just ideal. Could I have a local university, or some peers, write a letter in support of my proposal? Would that fly?
The paper also mentions, in a couple of places, similarities and differences between peer-review of grants and peer-review of manuscripts, including the importance of Openness to science. In one place, it mentions new journals “where ideas may be published initially unreviewed, but anyone may append public discussions to each article”. I am hoping this refers to arXiv and Nature Precedings, or even the concept of Open Notebook Science, but it smells too much like one of the pernicious myths spread by the enemies of Open Access about PLoS ONE which is, as readers of this blog are aware, stringently peer-reviewed.
One thing that the article mentions is that the current granting system allows researchers to buy time for research away from their teaching time. They note this as bad for teaching, true, but there is another angle to it. As danah writes in regard to the new proposed NSF funding of qualitative research, this kind of work does not require much in terms of equipment, but much in terms of time. It is essential for people, especially in social sciences, who do qualitative research, to be able to buy the time they need to do their research correctly.
Oh, and I mentioned at the beginning that my brother sent me two papers, yet we talked here only about one of them. The other one, if you are interested in starting a whole new discussion, is this one: Life after death? The Soviet system in British higher education

Recent studies of British higher education (HE) have focused on the application of the principles of the ‘new managerialism’ in the public sector, ostensibly aimed at improving the effectiveness of research and teaching, and also on the increasing commercialisation of HE. This article examines HE management in the light of the historical experience of the Soviet system of economic planning. Analogies with the dysfunctional effects of the Soviet system are elaborated with regard to financial planning and the systems of quality control in academic research and teaching. It is argued that Soviet-style management systems have paradoxically accompanied the growing market orientation of HE, undermining traditional professional values and alternative models of engagement between HE institutions and the wider society.

A FriendFeed discussion has started. Read the entire paper before chiming in, of course – we are scientists here!
Gordon, R., & Poulin, B. (2009). Cost of the NSERC Science Grant Peer Review System Exceeds the Cost of Giving Every Qualified Researcher a Baseline Grant. Accountability in Research, 16(1), 13-40. DOI: 10.1080/08989620802689821

Eliminate peer-review of baseline grants entirely?

This is very interesting, referring to Canadian system:
Cost of the NSERC Science Grant Peer Review System Exceeds the Cost of Giving Every Qualified Researcher a Baseline Grant:

Using Natural Science and Engineering Research Council Canada (NSERC) statistics, we show that the $40,000 (Canadian) cost of preparation for a grant application and rejection by peer review in 2007 exceeded that of giving every qualified investigator a direct baseline discovery grant of $30,000 (average grant). This means the Canadian Federal Government could institute direct grants for 100% of qualified applicants for the same money. We anticipate that the net result would be more and better research since more research would be conducted at the critical idea or discovery stage. Control of quality is assured through university hiring, promotion and tenure proceedings, journal reviews of submitted work, and the patent process, whose collective scrutiny far exceeds that of grant peer review. The greater efficiency in use of grant funds and increased innovation with baseline funding would provide a means of achieving the goals of the recent Canadian Value for Money and Accountability Review. We suggest that developing countries could leapfrog ahead by adopting from the start science grant systems that encourage innovation.

I don’t know how that would work in the USA, and it certainly would not work for big million-dollar grants, but this is quite an interesting idea – skipping the peer-review of small grants entirely and just giving all the applicants a Baseline Grant. If they use the money well and are lucky with the results, they will have publications and data they can then use to apply for bigger grants. Some really cool, unusual, non-bandwagon stuff would get done that way. What do you think?

Are solo authors less cited?

Daniel Lemire asks this question when observing a fallacy voiced in an editorial:

…..only a small fraction of the top 100 papers ranked by the number of citations (17 of 100) were published by single authors…..a published paper resulting from collaborative work has a higher chance of attracting more citations.

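The fallacy in the quoted claim is a base-rate one: 17 solo papers out of the top 100 tells you nothing until you know what fraction of all papers in the field are solo-authored, which the editorial never states. A quick sketch of the point, where the 10% overall solo share is a made-up illustrative figure, not a real statistic:

```python
# Base-rate check: is "17 of the top 100 papers are solo-authored" evidence
# that collaboration attracts more citations? Only if solo papers make up
# MORE than 17% of all papers. Suppose, purely for illustration, that solo
# papers are 10% of everything published in the field:

top100_solo_share = 17 / 100     # from the editorial's own numbers
overall_solo_share = 0.10        # assumed base rate (made up for illustration)

# If 10% of all papers are solo but 17% of the most-cited ones are,
# solo papers are actually OVERrepresented at the top, not under.
overrepresentation = top100_solo_share / overall_solo_share
print(round(overrepresentation, 2))  # 1.7
assert overrepresentation > 1
```

With a different assumed base rate (say, 30% solo papers overall), the same 17/100 figure would instead show underrepresentation; the editorial’s conclusion simply cannot be drawn without that denominator.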
You can discuss the fallacy if you want, but I am much more interested in the next question that Daniel asks – whether solo authors and groups of authors are inherently attracted to different kinds of problems, or whether solo vs. group dynamics make some projects more conducive to solo work and others to group collaboration:

But the implication is that solo authors are less interesting. Instead, I believe that solo authors probably work on different problems. (Hint: This could be the subject of a study of its own!)
Why?
Because of something I call problem inertia. For collaboration to occur, several people must come together and agree to a joint project. Sometimes money is required to pay the assistants or the students. All of these factor means that small problems or risky problems will be ignored in favor of safe bets. To put it bluntly, Microsoft will not sell PHP plugins! Hence, statistically, teams must be deliberate and careful. Also, fewer problems can be visited: even if the selected problem is a bad one, changing the topic in mid-course might be too expensive.
An autonomous author can afford to take more risks. Even more so if he has a permanent position. This may explain why Peter Turney seems to believe that researchers lack ambition. They may simply be rational: if it takes you three weeks to even get started on a project, you cannot afford many false starts!

And he then quotes Seth Roberts:

One reason my self-experimentation was effective was it didn’t depend on grants. No matter what I found, no matter how strange or upsetting or impossible or weird the results might be, I could publish them and continue to investigate them.

So, what I think he means is that groups jump on bandwagons, and bandwagoners are more common, thus bandwagoners will cite other bandwagoners more. Solo authors can do weird stuff and only very few other people will work on the same stuff, or similar enough stuff to warrant a citation.
If thousands are studying process X in rats, they will tend to cite each other and easily get grants for collaborative work. They have little incentive to cite your work on that same process X in the platypus, and since nobody else in the world studies it in the platypus, there is no large group (or anyone) out there to cite your stuff. But if you find something really revolutionary in the platypus that cannot be discovered in the rat, then your high risk has resulted in a huge payoff (not to mention lots of invitations to give talks at meetings, as organizers like to have someone 'weird' – "that Platypus guy, snicker" – to attract their audience).
But for the progress of science, both types of research need to be done. And the lack of citations for risky single-author work should not be used as a measure of the quality of that work or as an impediment to career advancement.
Agree or disagree?
Also, a discussion of this happened on FriendFeed.

Diffusion of Knowledge

Science Depends on the Diffusion of Knowledge:

According to the National Science Foundation, there are over 2.5 million research workers worldwide, with more than 1.2 million in the U.S. alone.1 If we look at all the articles, reports, emails and conversations that pass between them, we could count billions of knowledge transactions every year. This incredible diffusion of knowledge is the very fabric of science.
Given that the diffusion of knowledge is central to science, it behooves us to see if we can accelerate it. We note that diffusion takes time. Sometimes it takes a long time. Every diffusion process has a speed. Our thesis is that speeding up diffusion will accelerate the advancement of science.
The millions of researchers are grouped into thousands of communities. A community may be defined as a group of researchers working on a single scientific problem.
The Web of Science indexes about 8,700 journals2, representing many different research communities. That’s a lot of science to keep up with. Currently it is difficult for researchers, who primarily track journals within their specific discipline, to hear about discoveries made in distant scientific communities.
In fact, diffusion across distant communities can take years. In contrast, within an individual scientific community, internal communication systems are normally quicker. These include journals, conferences, email groups, and other outlets that ease communication.
Many communities use related methods and concepts: mathematics, instrumentation, and computer applications. Thus there is significant potential for diffusion ACROSS communities, including very distant communities. We see this as an opportunity.
Sequential Diffusion is Too Slow!
Diffusion to distant communities takes a long time because it often proceeds sequentially, typically spreading from the community of origin (A) to a neighbor (B), then to community (C), a neighbor of B, and so on. This happens because neighboring communities are in fairly close contact.
Science will progress faster if this diffusion lag time is diminished. The concept of global discovery is to transform this sequential diffusion process into a parallel process. This means that new knowledge flows directly to distant communities. The goal is to reduce the lag time from years to months and from months to days.
Modeling Knowledge Diffusion Suggests How to Accelerate It
In thinking about how to speed up diffusion across distant communities, we have looked at diffusion research, including computer modeling. We are particularly interested in recent work that applies models of disease dynamics to the spread of scientific ideas. The spread of new ideas in science is mathematically similar to the spread of disease, even though one produces positive results, the other negative. Our goal is to foster epidemics of new knowledge.
You might ask “Why is the math of disease related to the math of knowledge diffusion?” It is because neither involves considerations of conservation of mass. This makes disease and knowledge diffusion unlike many other kinds of diffusion that obey laws of conservation of mass. Consider, for example, diffusion of pollution. If pollution diffuses from point A to point B, point A now has less of it. But if knowledge diffuses from person X to person Y, person X still has what he started with.
We have been working with a group of modelers led by Luis Bettencourt of Los Alamos National Laboratory. They have written an important new paper, currently in press in Physica A: Statistical Mechanics and Its Applications, entitled: “The power of a good idea: quantitative modeling of the spread of ideas from epidemiological models.”3 This paper applies a disease model to the spread of Feynman diagrams just after World War II. Feynman diagrams are a central method of analysis in particle physics.4
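The disease analogy in the excerpt above is easy to see in a toy simulation. Here is a minimal SIR-style sketch of an "epidemic of knowledge" – this is my own illustration, not the Bettencourt model from the paper, and the population size and rate parameters are invented:

```python
# Toy SIR-style model of idea spread, by analogy with disease dynamics.
# S: researchers who have not yet heard of the idea, I: active "spreaders"
# (recent adopters talking and publishing about it), R: those who have
# absorbed it and moved on. All parameter values are illustrative only.

def spread_of_idea(n=1000, beta=0.3, gamma=0.05, steps=100):
    s, i, r = n - 1, 1, 0  # a single initial adopter
    history = []
    for _ in range(steps):
        new_adopters = beta * s * i / n   # contact-driven adoption
        retired = gamma * i               # adopters stop actively spreading
        s -= new_adopters
        i += new_adopters - retired
        r += retired
        history.append((s, i, r))
    return history

history = spread_of_idea()
s, i, r = history[-1]
print(f"Unreached: {s:.0f}, actively spreading: {i:.0f}, moved on: {r:.0f}")
```

Note that, unlike pollution, nothing is "used up" here: the S+I+R total counts people, not knowledge, which is exactly the point made above – the originator still has the idea after passing it on.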

If a paper is not available online, do you go to the trouble of finding a print copy?

Dorothea found an intriguing survey – If it’s not online… – in which physicists and astronomers say, pretty much, that ‘if an article is not online then it is not worth the effort to obtain it’.
An interesting discussion (with a couple of more links added by others) ensued here.
What do you assume if a paper is not online? Do you track it down anyway? What are your criteria for choosing to do so?

Why are scientists so HARD to move!?

The unmovable movers! Or so says Bill Hooker:

For instance: I use Open Office in preference to Word because I’m willing to put up with a short learning curve and a few inconveniences, having (as they say here in the US) drunk the Open Kool-Aid. But I’m something of an exception. Faced with a single difficulty, one single function that doesn’t work exactly like it did in Word, the vast majority of researchers will throw a tantrum and give up on the new application. After all, the Department pays the Word license, so it’s there to be used, so who cares about monopolies and stifling free culture and all that hippy kum-ba-yah crap when I’ve got a paper to write that will make me the most famous and important scientist in all the world?
———-snip————-
Researchers have their set ways of doing things, and they are very, very resistant to change — I think this might be partly due to the kind of personality that ends up in research, but it’s also a response to the pressure to produce. In science, only one kind of productivity counts — that is, keeps you in a job, brings in funding, wins your peers’ respect — and that’s published papers. The resulting pressure makes whatever leads to published papers urgent and limits everything else to — at best — important; and urgent trumps important every time. Remember the old story about the guy struggling to cut down a tree with a blunt saw? To suggestions that his work would go faster if he sharpened the saw, he replies that he doesn’t have time to sit around sharpening tools, he’s got a tree to cut down!
————snip————
I think that’s true, but like the guy with the saw, scientists are caught up in short-term thinking. Put the case to most of them, and they’ll agree about the advantages of Open over closed — for instance, I’ve yet to meet anyone who disagreed on principle that Open Access could dramatically improve the efficiency of knowledge dissemination, that is, the efficiency of the entire scientific endeavour. I’ve also yet to meet more than a handful of people willing to commit to sending their own papers only to OA journals, or even to avoiding journals that won’t let them self-archive! “I have a job to keep”, they say, “I’m not going to sacrifice my livelihood to the greater good”; or “that’s great, but first I need to get this grant funded”; or my personal favourite, “once I have tenure I’ll start doing all that good stuff”. (Sure you will. But I digress.)
—————snip————-
When it comes to scientists, you don’t just have to hand them a sharper saw, you have to force them to stop sawing long enough to change to the new tool. All they know is that the damn tree has to come down on time and they will be in terrible trouble (/fail to be recognized for their genius) if it doesn’t.

A vigorous discussion ensued. What do you think? Is it true that for scientists to adopt any new way of doing things, Carrots don’t work, only Big Sticks?

Meetings I’d like to go to….Part VIII

The Two Cultures in the 21st Century:

A full-day symposium sponsored by: Science & the City, ScienceDebate2008, Science Communication Consortium
At the 50th anniversary of C.P. Snow’s famous Rede Lecture on the importance to society of building a bridge between the sciences and humanities, this day-long symposium brings together leading scholars, scientists, politicians, authors, and representatives of the media to explore the persistence of the Two Cultures gap and how it can be overcome. More than 20 speakers will cover topics including science in politics, education, film and media, and science citizenship.

Exciting schedule and list of speakers/panelists!

World’s Biggest Scientific Fraud?

Wow! This is massive!
From Anesthesiology News:

Scott S. Reuben, MD, of Baystate Medical Center in Springfield, Mass., a pioneer in the area of multimodal analgesia, is said to have fabricated his results in at least 21, and perhaps many more, articles dating back to 1996. The confirmed articles were published in Anesthesiology, Anesthesia and Analgesia, the Journal of Clinical Anesthesia and other titles, which have retracted the papers or will soon do so, according to people familiar with the scandal (see list). The journals stressed that Dr. Reuben’s co-authors on those papers have not been accused of wrongdoing.

There is more about it in the New York Times and the Wall Street Journal.
My SciBlings Orac, Janet and Mike have more details, thoughts on ethics and implications.
This case is Big!

Correlation is not causation: what came first – high Impact Factor or high price?

Bill decided to take a look:
Fooling around with numbers:

Interesting, no? If the primary measure of a journal’s value is its impact — pretty layouts and a good Employment section and so on being presumably secondary — and if the Impact Factor is a measure of impact, and if publishers are making a good faith effort to offer value for money — then why is there no apparent relationship between IF and journal prices? After all, publishers tout the Impact Factors of their offerings whenever they’re asked to justify their prices or the latest round of increases in same.
There’s even some evidence from the same dataset that Impact Factors do influence journal pricing, at least in a “we can charge more if we have one” kinda way. Comparing the prices of journals with or without IFs indicates that, within this Elsevier/Life Sciences set, journals with IFs are higher priced and less variable in price:

Fooling around with numbers, part 2:

The relationship here is still weak, but noticeably stronger than for the other two comparisons — particularly once we eliminate the Nature outlier (see inset). I’ve seen papers describing 0.4 as “strong correlation”, but I think for most purposes that’s wishful thinking on the part of the authors. I do wish I knew enough about statistics to be able to say definitively whether this correlation is significantly greater than those in the first two figures. (Yes yes, I could look it up. The word you want is “lazy”, OK?) Even if the difference is significant, and even if we are lenient and describe the correlation between IF and online use as “moderate”, I would argue that it’s a rich-get-richer effect in action rather than any evidence of quality or value. Higher-IF journals have better name recognition, and researchers tend to pull papers out of their “to-read” pile more often if they know the journal, so when it comes time to write up results those are the papers that get cited. Just for fun, here’s the same graph with some of the most-used journals identified by name:
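For what it's worth, the statistical question Bill raises – whether one correlation is significantly greater than another – has a standard answer: Fisher's z-transformation. A minimal sketch (the r values and sample sizes below are made-up placeholders, not Bill's actual Elsevier data):

```python
import math

def fisher_z(r):
    """Fisher z-transformation: maps a correlation onto an approximately
    normal variable with standard error 1/sqrt(n - 3)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def compare_correlations(r1, n1, r2, n2):
    """Two-tailed z-test for the difference between two independent
    correlations, each observed on its own sample."""
    z = (fisher_z(r1) - fisher_z(r2)) / math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-tailed p
    return z, p

# Hypothetical numbers: r = 0.4 (IF vs. online use) against r = 0.1
# (IF vs. price), each from a sample of 100 journals.
z, p = compare_correlations(0.4, 100, 0.1, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these invented inputs the difference would just clear the conventional p &lt; 0.05 bar, which illustrates the general point: even a "moderate" 0.4 vs. a weak 0.1 needs a decent sample size before the difference is statistically distinguishable.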

Connections in Science

Web usage data outline map of knowledge:

When users click from one page to another while looking through online scientific journals, they generate a chain of connections between things they think belong together. Now a billion such ‘clickstream events’ have been analysed by researchers to map these connections on a grand scale.
The work provides a fascinating snapshot of the web of interconnections between disciplines, which some data-mining experts believe reveals the degree to which work that is not often cited — including work in the social sciences and humanities — is widely consulted and can form bridges between scientific disciplines…

You can see the amazing network (Fig.5) here

The Matthew effect in science

Douglas Kell: The Matthew effect in Science – citing the most cited:

The Matthew effect applies to journals and papers too – a highly cited journal or paper is likely to attract more citations (and mis-citations), probably for the simple psychological reasoning that ‘if so many people cite it, it must be a reasonable paper to cite’ (and such a paper is, by definition, more likely to appear in the reference list of another paper). Clearly that reasoning can be applied whether the paper has been read or otherwise. Simkin and Roychowdhury (2005 and 2007) note that a clear pointer to the citation of a paper one has not read is if it copies a mis-citation, and an analysis of the frequency of such serial mis-citations allows one to estimate, statistically, what fraction of cited articles have actually been read – at least at or near the time of writing a paper – by the citing author. Their analyses show (at least for certain physics papers) that “about 70-90% of scientific citations are copied from the lists of references used in other papers”, and that a typical device is to start with a few recent ones plus their citations. Some aspects of this tendency in bibliometrics, especially with highly cited papers, can be detected from the power law form of the distribution of citation numbers, as in the Laws of Bradford and Lotka that I discussed before. Of course the mindless propagation of errors without checking sources properly is hardly confined to Science – a famous recent example with spoof data showed how some journalists simply copied Obituary material from Wikipedia!
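The reference-copying mechanism Simkin and Roychowdhury describe is straightforward to simulate. Below is a toy model of my own devising (parameters invented, not their actual analysis): each new paper builds its reference list mostly by copying entries from an earlier paper's list, which is effectively preferential attachment and concentrates citations on a few papers – the Matthew effect in miniature:

```python
import random

random.seed(42)

def citation_copy_model(n_papers=2000, refs_per_paper=5, copy_prob=0.8):
    """Toy model: each new paper picks a random earlier paper as a template
    and, with probability copy_prob, copies a reference from its list;
    otherwise it cites a uniformly chosen earlier paper directly."""
    citations = [0] * n_papers
    reference_lists = [[] for _ in range(n_papers)]
    for p in range(10, n_papers):  # the first 10 papers seed the literature
        refs = set()
        while len(refs) < refs_per_paper:
            template = random.randrange(p)
            if reference_lists[template] and random.random() < copy_prob:
                refs.add(random.choice(reference_lists[template]))
            else:
                refs.add(template)
        for r in refs:
            citations[r] += 1
        reference_lists[p] = list(refs)
    return citations

citations = citation_copy_model()
top10_share = sum(sorted(citations, reverse=True)[:10]) / sum(citations)
print(f"Top 10 papers capture {top10_share:.0%} of all citations")
```

Under uniform citing, the top 10 of 2000 papers would collect about half a percent of the citations; with copying, they capture many times that – and note that in this model a paper gets copied into a reference list without the citer ever reading it, which is exactly the behavior the mis-citation analysis detects.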

I know people do this. Drives me crazy! Every paper I ever cited I read and re-read and re-read. Heck, I even tried to slog through papers in German (which I don’t speak) if I thought they were relevant. But copy+paste just because others did? Nope.

A very brief history of plagiarism

Archy does an amazing detective job on who stole what from whom in the old literature on mammoths, going back all the way to Lyell!
Then, as much of that literature is very old, he provides a history and timeline of the ideas of copyright and plagiarism, so we can better grasp the sense of the times in which these old copy+paste jobs were done.

Nature Methods: It’s good to blog

Another editorial about science blogging today, this time in Nature Methods: Lines of communication:

The public likes science stories it can easily relate to, and we have to admit that most science, including that published in Nature Methods, is unlikely to get more than a snore from nonscientists. In contrast, science stories that have a human interest or other emotionally charged angle require the concerted efforts of both journalists and scientists to ensure that the public understands the story well enough to make an informed personal decision. A failure in this regard can lead to a crisis that is difficult to resolve.
——-snip———-
A powerful aspect of blogs is their capacity to put a human face on science and related health issues by allowing scientists to discuss how these things affect them personally in a format in which regular readers feel as though they know the writer. Analysis of the MMR vaccine incident suggests that emotional arguments like a scientist talking about vaccinating his or her own children might be more powerful than the rational arguments that form the basis of normal scientific discourse. The public’s emotional response to genetically modified food in some countries might also have been very different if people could see numerous online blog entries from scientists discussing why they were not concerned about the scenarios being promulgated in the press. But can enough scientists be convinced of the potential benefits of blogging to make this a reality?
Conferences such as Science Blogging 2008: London, organized by Nature Network, and ScienceOnline’09 are exploring the role of blogging in science and trying to get more scientists involved. Nature Network just concluded their Science Blogging Challenge 2008–won by Russ B. Altman–where the goal was to get a senior scientist to start blogging. Altman’s colleague Steve Quake also just started blogging in a guest stint for the New York Times. One hopes that examples of prominent scientists blogging will convince others of the benefits. When a blog author is not a prominent scientist with a reputation to maintain, the quality of information on the blog can be a concern, but scienceblog tracking sites such as http://blogs.nature.com/ can help alleviate this problem.

w00t for the mention of ScienceOnline09! I wish they also mentioned ResearchBlogging.org as a means to track good science blogging (mention of carnivals would be too much to expect from a short article like this, I understand).

In the spirit of leading by example, Nature Methods will convert its online commenting site, Methagora, into a proper blog in preparation for later this year when commenting capabilities will be incorporated into published papers. Methagora will allow us to highlight and comment on papers that we feel are of interest to a larger readership and discuss the impact we see them having on science and hopefully society. We invite you, our readers–scientists and nonscientists alike–to share your thoughts and concerns, including your thoughts on this editorial. See you in the blogosphere!

I am happy to hear this. I guess the PLoS ONE example is emboldening others to start the experiment as well. This is a Good Thing. The more journals allow commenting on papers, the more 'normal' this will appear to scientists, and the more quickly it will become normal for them to use it. Remember when Nature tried this experiment a couple of years ago, then quit and proclaimed it a failure after only six months? When they did that, I was, like, WTF? Who ever expected such a big shift in the entire scientific culture to happen in six months?! But give it another five years and it will start getting there. And remember that a scientific paper is not a blog post – do not expect a bunch of comments in the first 24 hours: they will slowly accumulate over the years and decades.
Finally, let me just note that both Nature and Nature Methods published pro-blog editorials on the same day. They also interviewed me this week for a topical issue on the state of science journalism/communication they are planning for a couple of weeks from now. I don't think this is a coincidence – the Nature group is cooking something and we'll have to wait and see what it is.

Diversity in Science Carnival #1 is amazing!

The very first, inaugural, and absolutely amazing edition of the Diversity in Science Carnival is now up on Urban Science Adventures. Wow! Just wow! Totally amazing stuff.
And what a reminder of my White privilege – a couple of names there are familiar to me, as I have read their papers before, never ever stopping to think who they were or what they looked like! What a wake-up call!
For instance, I have read several papers by Chana Akins, as she works on Japanese quail. And I am somewhat familiar (being a history buff and obsessive reader of literature in my and related fields) with the work of Charles Henry Turner, covered in this carnival not once but twice – both by Danielle Lee and by Ajuan Mance!
It also did not escape my notice that several of the posts are eligible for the next editions of Scientiae and The Giant’s Shoulders – double your readership by submitting those posts there as well!

Quick check-in from NYC

Mrs.Coturnix and I arrived in NYC last night and had a nice dinner at Heartland Brewery. This morning, we had breakfast at the Hungarian Pastry Shop, where I ordered my pastry using its Serbian name, and the Albanian woman working in the Hungarian shop understood what I wanted! I forgot to bring my camera with me today, and Mrs.Coturnix did not bring her cable, so the pictures of the pastries will have to wait until we return home.
Then, Mrs.Coturnix went for a long walk (it was nice in the morning, got cold in the afternoon), ending up in the Met. I joined my co-panelists Jean-Claude Bradley and Barry Canton and our hosts Kathryn Pope, Rebecca Kennison and Rajendra Bose for lunch at Bistro Ten 18.
Then we walked over to the Columbia campus and got all set up for the Open Science panel. I talked first, giving a brief history of openness in scientific communication, defining Open Access publishing and how it fits in the evolving ecosystem of online science communication, ending with some speculation about the future. Jean-Claude and Barry then followed, describing their own projects, showing how some of that future that sounds so speculative when described in general terms, is already here, done by pioneers and visionaries right here and now.
The panel was followed by a number of excellent questions from the audience – you could follow the discussion blow-by-blow on twitter (several pages of it!), and the video of the entire thing will be posted online in a few days (I will make sure to link to it once it is available).
There were some familiar faces in the crowd – including Caryn Shechtman (who already wrote a nice blog post about it), my Overlords Erin and Arikia, Michael Tobis, Talia Page (and her Mom who is writing an interesting book right now), Noah Gray, Hilary Spencer and Miriam Gordon (whose husband does interesting stuff with science education in high schools).
We went for a beer nearby afterwards, where we were re-joined by Mrs.Coturnix. It got really cold, so we went back to the hotel, had some (too) authentic Chinese cuisine for dinner and are trying to rest as tomorrow is another busy day – meeting various famous people for various meals, including the Big Bash at Old Town Bar at 8pm to which you are all invited.