On alternative impact factors and “filtering after the fact”
The practice of web publishing (in the form of blogs, microblogs, etc.) follows a “publish, then filter” model: anyone can publish anything without asking a publisher for permission or going through a selection process.
A recent article in Times Higher Education reports on a debate at the British Library where academics and students agreed that “researchers had not embraced new technology to share their data and findings”. David Gauntlett, professor of media and communications at the University of Westminster, discussed the “publish, then filter” model in an academic context, and in the article’s comment section he clarifies his stance:
The idea is that first of all, academics would (still) produce work to a standard that they think is publishable – they should be satisfied with it as a representation of their ideas and research.
Then they would post it online. The quality filtering then takes place post-publication; good and useful research will first of all be shared online more, by people who find it interesting and fruitful; and at a later stage we will also see that it has been cited more (a traditional measure to keep the traditionalists happy).
This is surely a better way to identify quality research than by the accepted method of asking two anonymous academics, who are bound to have a preference for themselves and their kind of work, rather than other people and other kinds of work, if it should be published or not. My proposal also avoids strangling unusual or non-traditional work before it has even seen the light of day.
A discussion between Gauntlett and a commenter called “Bee” continues:
[David Gauntlett:] Yes, I agree, you’d need some mechanisms to help highlight good work by the currently-unknowns. But that’s the kind of thing that the online world is relatively good at, especially compared with the world of academic journals.
You’re quite right that we’d need to focus on that — indeed, it’s one of the ways that the established journals and portals could continue to justify their existence, by trying to pick out good work by people you might otherwise not have heard of.
But it’s not an insurmountable problem.
Read the full article: Pressure to publish papers blamed for reluctance to share digital data.
Alt-metrics: a manifesto (altmetrics.org)
Gauntlett talks about “mechanisms to help highlight good work by the currently-unknowns”, which brings me to the work carried out by (among others) Jason Priem, a PhD student in Information and Library Science, on the Alt-metrics project, which is about tracking scholarly impact on the social web. Priem and his colleagues want to track impact outside the academy, the impact of influential but uncited work, and impact from sources that aren’t peer-reviewed. Doing so, they argue, can also yield improved filtering systems.
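To make the idea of post-publication filtering concrete, here is a minimal sketch of how social-web signals and traditional citations might be combined into a single ranking score. The sources, weights, and paper names are all hypothetical illustrations, not the actual altmetrics.org method:

```python
# Toy sketch of an alt-metrics-style filtering score. The mention sources
# and weights below are made up for illustration -- not altmetrics.org's
# real methodology.

from dataclasses import dataclass


@dataclass
class Mentions:
    tweets: int = 0
    blog_posts: int = 0
    bookmarks: int = 0   # e.g. saves in reference managers
    citations: int = 0   # the traditional signal, kept for comparison

# Hypothetical weights: a citation still counts most, but social signals
# accumulate much sooner after publication.
WEIGHTS = {"tweets": 0.25, "blog_posts": 1.0, "bookmarks": 0.5, "citations": 3.0}


def alt_score(m: Mentions) -> float:
    """Weighted sum over all mention types."""
    return (WEIGHTS["tweets"] * m.tweets
            + WEIGHTS["blog_posts"] * m.blog_posts
            + WEIGHTS["bookmarks"] * m.bookmarks
            + WEIGHTS["citations"] * m.citations)


# Two invented papers: one widely shared but not yet cited, one cited
# but invisible on the social web.
papers = {
    "uncited-but-shared": Mentions(tweets=40, blog_posts=5, bookmarks=12),
    "cited-but-quiet":    Mentions(citations=6),
}

# Rank by score: social attention can surface work before citations accrue.
ranking = sorted(papers, key=lambda p: alt_score(papers[p]), reverse=True)
print(ranking)  # ['uncited-but-shared', 'cited-but-quiet']
```

The point of the sketch is only that filtering can happen after publication, from observable signals, rather than before it by two anonymous reviewers.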
This post was written for Journal of X and Z, a MEDEA project dealing with the future of scholarly communication and academic publishing, and has been cross-posted to the project website at wpmu.mah.se/medea-journal/.
Image credit: –Tico– CC:BY-NC-ND