Transformative Research is Not Easily Predicted

via Oregon State University

Research-funding agencies that require scientists to declare at the proposal stage how their projects will be “transformative” may actually be hindering discovery, according to a study by Oregon State University ecologists.

The requirement can result in decreased funding for the “incremental” research that often paves the way for paradigm-shifting breakthroughs, the OSU scientists assert.

Their findings, as well as their recommendation for how best to foster transformative research, were published recently in Trends in Ecology & Evolution.

Sarah Gravem, postdoctoral scholar in integrative biology in Oregon State’s College of Science, was the lead author on the paper, titled “Transformative Research is Not Easily Predicted.”

Gravem, integrative biology professor Bruce Menge and the other collaborators note that the National Science Foundation, which funds roughly one-quarter of the federally supported research at U.S. colleges and universities, “has made the pursuit of transformative research a top priority by asking for a transformative research statement in every major research proposal solicited.”

The NSF defines transformative research as being “driven by ideas that have the potential to radically change our understanding of an important existing scientific or engineering concept or leading to the creation of a new paradigm…. Such research is also characterized by its challenge to current understanding or its pathway to new frontiers.”

Gravem says asking scientists to attempt to create new paradigms or fields in every proposal is unrealistic and potentially harmful.

The OSU scientists argue that a better approach, and one that was suggested more than a decade ago by the board that oversees the National Science Foundation, would be to create a funding subset: a separate NSF-wide program to solicit and support transformational research proposals.

“The board had been concerned that the U.S. was lagging behind other countries in scientific advances, concerned that creative and risky research was not getting funding,” Menge said. “It concluded that what the NSF should do is set aside some funds for risky research proposals, those defined by reviewers as they may or may not work, the chances are sort of slim, but they could turn out to be pretty cool.”

What the NSF did instead, Menge said, was require all proposals to show how the research being proposed would be transformative.

“Instructions to reviewers include the expectation that the reviewer will comment on how transformative the proposed research is,” Menge added.

The problem, the Oregon State collaborators say, is that it’s rarely possible to know at the proposal stage whether a project will turn out to be transformative; their assertion follows interviews and surveys of 78 highly cited ecologists who began with incremental goals and only later realized the transformative potential of their work.

“To start out with that transformative question is a backward way of thinking,” Gravem said. “Surely you have to think big to come up with big answers, and everyone is striving for that, but truly transformative research is an unobtainable standard to place on people at the proposal stage. Trying to make every project paradigm shifting can mean ignoring the incremental and basic science that eventually goes into shifting paradigms. It’s a detriment to ignore the building blocks in favor of the building.”

Gravem said the necessity of incremental research was also explained recently on Freakonomics Radio.

“Economist Ed Glaeser noted that Nobel Prizes are not typically given for single transformative research papers but are often given for a body of incremental research,” she said. “If transformations arise from incremental research, then the transformative criterion is redundant with the solicitation of incremental research. This is reflected by mixed evidence that soliciting transformative research led to increases in transformative outcomes compared with the typical model.”

Expanding fields of knowledge, adding to bodies of evidence, and comparing two fields that haven’t been compared before are the types of gains researchers can reasonably predict, Gravem added. Being asked to forecast how a project will turn out to be transformative puts “researchers in an awkward position that nobody likes.”

“We’re being forced to hype our work at the beginning of a proposal, which doesn’t do anything to help science or to help build trust in science,” Gravem said. “And it turns the funding process into an essay competition that favors people who take more liberty in predicting what their research might show.”

Menge notes that NSF’s plan all along was to reassess the transformative research statement requirement at some point, “and now is the time.”

“Research funding is effectively decreasing, but the demand for funding is increasing, so they look for ways to prune the field of who gets funded – I recognize that as a problem,” he said. “But making artificial hurdles is just wrong. Funding agencies should concentrate on the goals of the research rather than the unknowable outcome.”

Learn more: ‘Transformative’ research unrealistic to predict, scientists tell granting agencies


Weaponised research: how to keep you and your sources safe in the age of surveillance


Is someone watching while you work? (Credit: Jay Moff/flickr)

Sara Koopman, University of Tampere

Surveillance has become so ubiquitous that Russia was apparently caught in the act of conspiring to influence the 2016 United States presidential election, and at least one presidential campaign staffer was reportedly overheard conspiring with the Russians.

Politicians aren’t the only ones being watched. Edward Snowden’s 2013 revelations detailing the US National Security Agency’s widespread surveillance have made clear that, these days, everyone should be thinking about privacy and security.

That includes academics, some of whom are undertaking sensitive, even dangerous, research. How can we work safely and ethically in an era of internet spying and wiretapping?

Weaponising your own research

This question is particularly salient for scholars who work on peace and justice organising: recent leaks confirm that the military (or the police) may not only be reading your published work – they could also be tracking your online activity, monitoring your whereabouts and even listening in on your conversations.

Exposed files from the IT security firm Hacking Team confirm that its software is widely used around the world to listen to ambient conversations held in a room with a cell phone, even when the phone is off.

That opens up the ethically distressing possibility that your research can be weaponised – used by armed actors to do harm.

Geographers are particularly vulnerable to this threat. In 2007, the American Anthropological Association denounced the US Army’s Human Terrain System, which embedded social scientists in military teams in Iraq and Afghanistan, as “an unacceptable application of anthropological expertise”. Since then, the US military’s attempts to know (and control) the so-called human terrain have shifted to geography.

Even the highly critiqued term “human terrain” has been widely replaced with the term “human geography”.

As a result, we see a fast-growing trend of geographers being offered military funding for research, often through front organisations such as the US Department of Defense’s Minerva Research Initiative.

The army’s new favour for geographers was reinforced when the American Association of Geographers (AAG) refused for years to take action on a military-related scandal. Researchers led by Peter Herlihy at the University of Kansas, who were doing participatory mapping with indigenous groups in Oaxaca, Mexico, failed to disclose both their US military funding and the fact that they were sharing research findings with their funders.

That’s unethical anywhere, but it’s particularly problematic in Oaxaca: the US military likely shared that detailed GIS information about Zapoteco communities with the Mexican military, which has long repressed those indigenous communities.

In early April 2017, the AAG finally agreed to form a study group to examine the issue of ties between their discipline and the US, UK and NATO armed forces.

Research hack

Even if you’re an academic who doesn’t accept military funding, your findings may already have been added to the military’s huge databases without you knowing it (the citation is unlikely to come up in a Google Scholar search).

Karen Morin of Bucknell University, for example, discovered that her chapter on interpreting landscape had been cited in a Marines operational guide. Its subject: how reading the cultural landscape correctly can enable troops to control a population immediately upon arrival.

You never know who’s listening in. (Credit: EUCOM)

It is very hard to track down this sort of misappropriation of your work. But you can keep it in mind when publishing. Ask yourself: who might want this information, and could it in any way be used to do harm?

Academics should also be aware that unpublished research data can also be hacked. I found this out the hard way, when the email account of the Fellowship of Reconciliation, a group that I was doing research with and regularly emailing, was hacked by Colombian intelligence and their emails used to prosecute a human rights activist on trumped-up charges.

Protect yourself (and your sources)

These basic steps can help prevent your data from being similarly hacked and misused.

Two-step verification on Gmail. (Screenshot: google.com)

1) Add two-step verification to your email. For Gmail, select this option in the security settings; when you log in from a new computer, you will be asked to enter a code texted to your phone. You can also download a list of ten backup codes to use when you are away from cell coverage.

2) Encrypt your computer. Or, more realistically, encrypt one folder on it, where you will store those backup codes and other secure information. Be aware that encryption will slow older computers down. Also encrypt your data every time you back it up, and enable two-step verification on any backup service you use.


3) Put away your phone. You can now record long interviews on most phones. But if you at all suspect that the content of that interview could be misused in any way, by anyone, and particularly by armed actors, use a small digital recorder instead.

4) Get away from your phone. Simply turning off your phone is not enough; hackers can still record ambient conversations. A safer bet is to keep the phone outside of the room. (Remember to also take along another timepiece if you usually depend on your phone for that.)

5) Destroy the evidence. When you write field notes by hand, snap a photo of them, save the images behind encryption, then destroy the paper copy.

Do I sound paranoid? Most researchers, after all, are hardly embarking on James Bond-like missions.

Think what you like, but recent revelations have shown that governments around the world have purchased software that listens to conversations in the room through your smartphone.

The community organisers, political activists, rogue scientists, indigenous rights defenders and environmentalists we routinely talk to as part of our research can become targets of government retaliation.

Given the high levels of surveillance and the growing weaponisation of research, caution is warranted. What it means to do ethical research has changed, and that should be reflected in both our own research methods and our methods classes.

Sara Koopman, Research Associate, Tampere Peace Research Institute, University of Tampere

This article was originally published on The Conversation. Read the original article.

Top Scientists Call For Improved Incentives to Ensure Research Integrity

Credit: Petr Kratochvil/Public Domain

Scientific controversies, from problems replicating results — such as the now debunked association between autism and MMR vaccines — to researcher misconduct and sensationalism, have led to speculation of “trouble at the lab”, as The Economist put it.

To address the issue, the National Academy of Sciences (NAS) and the Annenberg Retreat at Sunnylands recently convened top scientists from Carnegie Mellon University, the University of California, Massachusetts Institute of Technology, Georgia Institute of Technology and other leading institutions to examine ways to return to high scientific standards. In an opinion piece published in Science, the group outlines what can be done to better ensure research integrity.

Attempting to do so begins with acknowledging and addressing the problems that exist at every level, from the notion that science is self-correcting to academia’s incentive structures that encourage researchers to publish novel, positive results, to the greater opportunities open-access and other platforms provide to publish less-scrutinized studies. In addition, a lack of data sharing leads to the inability to replicate results, universities that want to make headlines exaggerate findings, and the media’s quest for ratings and readership often trumps quality reporting.

“Science is littered with irreproducible results, even from top places, and it’s a widespread problem that looks different in different domains, but there are shared commonalities,” said CMU’s Stephen E. Fienberg, the Maurice Falk University Professor of Statistics and Social Sciences. “As a statistician, I understand how the role of data is critical. But determining how to set a policy to support data access is very complicated — there is not a simple set of rules.”

The NAS and Annenberg group identified several ways to change incentives for quality and correction, including rewarding researchers for publishing high-quality work rather than publishing work more often; mentoring young peer-reviewers to increase clarity and quality of editorial responses during the journal publishing process; and using “voluntary withdrawal” and “withdrawal for cause” instead of the blanket “retraction” term, which has negative connotations that can prevent some researchers from taking action when a paper is wrong, but not as a result of fraud or misconduct.

“We all have a responsibility if we want science to work — academic institutions, scientific associations, journals, authors, university public relations officers and the press — people need to be trained all the way up the line.”
Stephen Fienberg

Because ensuring scientific integrity is the responsibility of many stakeholders, the group recommends revisiting the National Academy of Sciences’ 1992 call for an independent Scientific Integrity Advisory Board, whose goal would be to address ethical issues in research conduct.

Additionally, universities should insist that their faculty and students are educated in research ethics; that their publications do not feature honorary or ghost authors; that public information officers avoid hype in publicizing findings; and that suspect research is promptly and thoroughly investigated.

Read more: Top Scientists Call for Improved Incentives to Ensure Research Integrity

Researchers analyse 15m scientific articles to design the most comprehensive ‘world map of research’ yet

In the image: the ‘world map of research’ produced by the authors.

Scientists from the University of Granada and the Spanish National Research Council—members of the SCImago research group—have found that, worldwide, there are three major ‘clusters’ of countries, defined by the thematic areas they investigate and that their governments invest in most.

The study, published in PLOS ONE, analysed the scientific production of more than 80 countries over more than 10 years (1996-2006).

They conclude that the first cluster is made up of Western Europe together with the USA, Canada and the petrol-rich Arab Emirates. Together, they form the biomedical cluster, which is characterized by its democratic regimes. “The governments of these countries understand that research into health has electoral benefits because it improves the quality of life of their citizens,” says Victor Herrero-Solana, Professor of Information and Communication at the University of Granada and one of the authors.

Read more . . .

New Truths That Only One Can See


Why Most Published Research Findings Are False (Photo credit: dullhunk)

Since 1955, The Journal of Irreproducible Results has offered “spoofs, parodies, whimsies, burlesques, lampoons and satires” about life in the laboratory.

Among its greatest hits: “Acoustic Oscillations in Jell-O, With and Without Fruit, Subjected to Varying Levels of Stress” and “Utilizing Infinite Loops to Compute an Approximate Value of Infinity.” The good-natured jibes are a backhanded celebration of science. What really goes on in the lab is, by implication, of a loftier, more serious nature.

It has been jarring to learn in recent years that a reproducible result may actually be the rarest of birds. Replication, the ability of another lab to reproduce a finding, is the gold standard of science, reassurance that you have discovered something true. But that is getting harder all the time. With the most accessible truths already discovered, what remains are often subtle effects, some so delicate that they can be conjured up only under ideal circumstances, using highly specialized techniques.

Fears that this is resulting in some questionable findings began to emerge in 2005, when Dr. John P. A. Ioannidis, a kind of meta-scientist who researches research, wrote a paper pointedly titled “Why Most Published Research Findings Are False.”

Given the desire of ambitious scientists to break from the pack with a striking new finding, Dr. Ioannidis reasoned, many hypotheses already start with a high chance of being wrong. Otherwise proving them right would not be so difficult and surprising — and supportive of a scientist’s career. Taking into account the human tendency to see what we want to see, unconscious bias is inevitable. Without any ill intent, a scientist may be nudged toward interpreting the data so it supports the hypothesis, even if just barely.

The effect is amplified by competition for a shrinking pool of grant money and also by the design of so many experiments — with small sample sizes (cells in a lab dish or people in an epidemiological pool) and weak standards for what passes as statistically significant. That makes it all the easier to fool oneself.

Paradoxically the hottest fields, with the most people pursuing the same questions, are most prone to error, Dr. Ioannidis argued. If one of five competing labs is alone in finding an effect, that result is the one likely to be published. But there is a four in five chance that it is wrong. Papers reporting negative conclusions are more easily ignored.
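The competition effect Dr. Ioannidis describes is easy to quantify. As an illustrative sketch (not a calculation from the article itself): if several labs independently probe an effect that does not exist, the chance that at least one of them clears the conventional p < 0.05 bar, and so produces the publishable “positive” result, rises quickly with the number of labs.

```python
# Probability that at least one of n independent labs studying a
# nonexistent effect still reaches statistical significance, given
# a per-lab false-positive rate alpha (0.05 by convention).
def any_false_positive(n_labs, alpha=0.05):
    return 1 - (1 - alpha) ** n_labs

print(round(any_false_positive(1), 3))  # 0.05  -> one lab working alone
print(round(any_false_positive(5), 3))  # 0.226 -> five competing labs
```

With five labs in a hot field, a spurious “effect” surfaces somewhere almost a quarter of the time, and that one significant result is the one most likely to be submitted and published.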

Putting all of this together, Dr. Ioannidis devised a mathematical model supporting the conclusion that most published findings are probably incorrect.
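The heart of that model can be sketched in a few lines. In a simplified form of Dr. Ioannidis’s 2005 argument (this sketch ignores his bias and multiple-teams terms), the probability that a statistically significant finding is actually true, the positive predictive value, depends only on the prior odds that the hypothesis is true, the significance threshold alpha, and the study’s power:

```python
# Positive predictive value (PPV) of a significant finding under a
# simplified form of Ioannidis's model: the fraction of "positive"
# results that reflect a real relationship.
def ppv(prior_odds, alpha=0.05, power=0.8):
    true_positives = power * prior_odds    # real effects detected
    false_positives = alpha                # null effects passing the bar
    return true_positives / (true_positives + false_positives)

# Well-powered study of a plausible hypothesis (prior odds 1:1):
print(round(ppv(1.0), 3))             # 0.941

# Underpowered study of a long-shot hypothesis (odds 1:10, power 0.2):
print(round(ppv(0.1, power=0.2), 3))  # 0.286 -> most positives are false
```

Small samples lower the power term and long-shot hypotheses lower the prior odds, which is exactly the combination described above, so the share of published positives that are wrong can climb past one half.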

Read more . . .
