Sci-Hub: The Fix Isn’t In

Library Babel Fish – via insidehighered.com

A college librarian’s take on technology – Barbara Fister

By now, you’ve probably heard of Sci-Hub, a collection of millions of articles being gathered through borrowed or stolen library logins, then loaded onto servers abroad for anyone to download. The woman who started it has been called a modern-day Robin Hood. Also, a criminal. There has also been heated debate about why librarians aren’t doing more to back publishers in this fight. After all, these thieves are taking advantage of licensed scholarship that costs libraries billions of dollars annually! Surely we want to stop this rampant theft!

To which I’m tempted to respond “when did we sign on to become your guards, and when do we get a check for this labor?” Because it is labor – lots of labor – to maintain link resolvers, keep license agreements in order, and deal with constant changes in subscription contents. We have to work a lot harder to be publishers’ border guards than people realize.

While this runs completely counter to our values, we really have no choice but to provide this free labor. Libraries run the risk of having their institutional access cut off if a publisher finds their institutional IP address is involved in a massive download. More troubling is the fact that stolen or leaked credentials can seriously compromise both network and individual security.

In other words, whatever you think of Sci-Hub, it’s not the fix for the mess we’re in.

So why are scholars using Sci-Hub? I can think of a couple of reasons: either they have no other access to these articles, not being affiliated with a well-funded first-world library (the Robin Hood version), or getting access is a hassle and hey, I need those articles! Give me those articles right now! For many folks, Sci-Hub is simply a more convenient library that doesn’t make you mess around with logins and interlibrary loans. Hey, we’re busy. Paywalls are a pain.

Librarians are in a nasty spot. Sometimes I wonder if we can even call ourselves librarians anymore. We feel we are virtually required to provide access to whatever researchers in our local community ask for while denying access to anyone outside that narrowly defined community of users. Instead of curators, we’re personal shoppers who moonlight as border guards. This isn’t working out well for anyone.

Unaffiliated researchers have to find illegal work-arounds, and faculty who actually have access through libraries are turning to the black market for articles because it seems more efficient than contacting their personal shopper, particularly when the library itself doesn’t figure in their workflow. In the meantime, all that money we spend on big bundles of articles (or on purchasing access to articles one at a time when we can’t afford the bundle anymore) is just a really high annual rent. We can’t preserve what we don’t own, and we don’t curate because our function is to get what is asked for.

This is how to dig a grave for libraries as a common good.

I’ve been thinking about this while also mulling over Scott McLemee’s take on the Ithaka study of how much it costs for a university press to publish a book. TL;DR – it costs a lot, not including the work of doing the research and writing it up. Though there’s no profit margin like that of the big commercial journal publishers to factor in, those costs are built into the price of books because . . . well, publishers are supposed to make money, not lose it. So prices are high because each book is handcrafted artisanal scholarship. A few hundred copies of the average scholarly book enter the marketplace of ideas, where they might connect with an audience – or not. Libraries aren’t a major market anymore because we’re spending all of our money on articles. Yet for some reason we still insist on book-shaped badges for tenure and measure the worth of scholars in many fields by their publisher’s brand.

This is how to dig a grave for knowledge as a common good. Spend a lot of money writing and publishing books that few people will have a chance to read.

It seems to me both issues raise some big questions we have to answer:

Learn more: The Fix Isn’t In

 

 


Broad, MIT scientists overcome key CRISPR-Cas9 genome editing hurdle

Slaymaker and Gao et al. used structural knowledge of Cas9 to guide engineering of a highly specific genome editing tool. Credit: Ian Slaymaker, Broad Institute

Researchers at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT have engineered changes to the revolutionary CRISPR-Cas9 genome editing system that significantly cut down on “off-target” editing errors. The refined technique addresses one of the major technical issues in the use of genome editing.

The CRISPR-Cas9 system works by making a precisely targeted modification in a cell’s DNA. The protein Cas9 alters the DNA at a location that is specified by a short RNA whose sequence matches that of the target site. While Cas9 is known to be highly efficient at cutting its target site, a major drawback of the system has been that, once inside a cell, it can bind to and cut additional sites that are not targeted. This has the potential to produce undesired edits that can alter gene expression or knock a gene out entirely, which might lead to the development of cancer or other problems. In a paper published today in Science, Feng Zhang and his colleagues report that changing three of the approximately 1,400 amino acids that make up the Cas9 enzyme from S. pyogenes dramatically reduced “off-target editing” to undetectable levels in the specific cases examined.

Zhang and his colleagues used knowledge about the structure of the Cas9 protein to decrease off-target cutting. DNA, which is negatively charged, binds to a groove in the Cas9 protein that is positively charged. Knowing the structure, the scientists were able to predict that replacing some of the positively charged amino acids with neutral ones would decrease the binding of “off target” sequences much more than “on target” sequences.

After experimenting with various possible changes, Zhang’s team found that mutations in three amino acids dramatically reduced “off-target” cuts. For the guide RNAs tested, “off-target” cutting was so low as to be undetectable.

The newly engineered enzyme, which the team calls “enhanced” S. pyogenes Cas9, or eSpCas9, will be useful for genome editing applications that require a high level of specificity. The Zhang lab is immediately making the eSpCas9 enzyme available for researchers worldwide. The team believes the same charge-changing approach will work with other recently described RNA-guided DNA targeting enzymes, including Cpf1, C2C1, and C2C3, which Zhang and his collaborators reported on earlier this year.

The prospect of rapid and efficient genome editing raises many ethical and societal concerns, says Zhang, who is speaking this morning at the International Summit on Gene Editing in Washington, DC. “Many of the safety concerns are related to off-target effects,” he said. “We hope the development of eSpCas9 will help address some of those concerns, but we certainly don’t see this as a magic bullet. The field is advancing at a rapid pace, and there is still a lot to learn before we can consider applying this technology for clinical use.”

Read more: Broad, MIT scientists overcome key CRISPR-Cas9 genome editing hurdle

 

 


eLearning as good as traditional training for health professionals

via sarahlouq.co.uk

Electronic learning could enable millions more students to train as doctors and nurses worldwide, according to research.

A review commissioned by the World Health Organisation (WHO) and carried out by Imperial College London researchers concludes that eLearning is likely to be as effective as traditional methods for training health professionals.

eLearning, the use of electronic media and devices in education, is already used by some universities to support traditional campus-based teaching or enable distance learning.

Wider use of eLearning might help to address the need to train more health workers across the globe. According to a recent WHO report, the world is short of 7.2 million healthcare professionals, and the figure is growing.

The Imperial team, led by Dr Josip Car, carried out a systematic review of the scientific literature to evaluate the effectiveness of eLearning for undergraduate health professional education.

They conducted separate analyses of online learning, which requires an internet connection, and offline learning, delivered via CD-ROMs or USB sticks, for example.

The findings, drawn from a total of 108 studies, showed that students acquire knowledge and skills through online and offline eLearning as well as or better than they do through traditional teaching.
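The article does not say how the 108 studies were combined, but one standard way a review can aggregate such comparisons is to pool a standardized effect size (eLearning versus traditional teaching) across studies. The sketch below is purely illustrative: the per-study effects and variances are made up, and the DerSimonian-Laird random-effects model shown here is an assumption about method, not a description of the Imperial team’s analysis.

```python
# Illustrative only: made-up effect sizes pooled with a DerSimonian-Laird
# random-effects model, to show how a review can combine study results.
import numpy as np

# Hypothetical per-study standardized mean differences and their variances.
effects = np.array([0.10, -0.05, 0.30, 0.12, 0.00, 0.22])
variances = np.array([0.02, 0.03, 0.05, 0.01, 0.04, 0.02])

# Fixed-effect weights and the Q statistic for between-study heterogeneity.
w = 1.0 / variances
fixed_mean = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed_mean) ** 2)
k = len(effects)

# DerSimonian-Laird estimate of the between-study variance (tau^2).
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooled estimate and 95% confidence interval.
w_re = 1.0 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"Pooled effect: {pooled:.3f} (95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f})")
```

A pooled effect close to zero, with a confidence interval that spans zero, would be consistent with the “as effective as traditional methods” conclusion reported above.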

Read more: eLearning as good as traditional training for health professionals

 


Poker-playing program knows when to fold ’em

via www.sciencenews.org

UAlberta researchers solve heads-up limit Texas hold ‘em poker.

In a world first, researchers in the Computer Poker Research Group at the University of Alberta have essentially solved heads-up limit Texas hold ‘em poker with their program, called Cepheus.

“Poker has been a challenge problem for artificial intelligence going back over 40 years, and until now, heads-up limit Texas hold ‘em poker was unsolved,” says Michael Bowling, lead author and professor in the Faculty of Science, whose findings were published Jan. 9 in the journal Science.

For more than a half-century, games have been test beds for new ideas in artificial intelligence. The resulting successes have marked significant milestones, from IBM’s Deep Blue defeating world champion Garry Kasparov in chess to Watson beating top-earning Jeopardy! champs Ken Jennings and Brad Rutter.

But as Bowling points out, defeating top human players is not the same as actually solving a game—especially a game like poker.

The challenge of imperfect information

In poker, players have imperfect information—they don’t have full knowledge of past events, and they can’t see their opponents’ hands. The most popular variant of poker today is Texas hold ‘em. When it is played with just two players and with fixed bet sizes and a limited number of raises allowed, it is called heads-up limit hold ‘em.

The possible situations in this poker version are fewer than in checkers—which U of A computing science researchers solved in 2007, led by now dean of science Jonathan Schaeffer—but the imperfect-information nature of heads-up limit hold ‘em makes it a far more challenging game for computers to play or solve.

“We define a game to be essentially solved if a lifetime of play is unable to statistically differentiate it from being solved at 95% confidence,” explains Bowling. “Imagine someone playing 200 hands of poker an hour for 12 hours a day without missing a day for 70 years. Furthermore, imagine them employing the worst-case, maximally exploitive opponent strategy—and never making a mistake. They still cannot be certain they are actually winning.”
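As a rough check on what that definition implies, here is a back-of-the-envelope calculation of the quoted “lifetime of play” and the smallest average edge such a lifetime could statistically distinguish from zero. The per-hand standard deviation (about 5 big blinds) and the one-sided 95% critical value are assumptions made for illustration, not figures from the article.

```python
# Back-of-the-envelope check of the "lifetime of play" claim quoted above.
# Assumptions (not from the article): per-hand results have a standard
# deviation of roughly 5 big blinds, and "statistically differentiate"
# means a one-sided test at 95% confidence (z = 1.645).
import math

hands_per_day = 200 * 12                    # 200 hands/hour for 12 hours
lifetime_hands = hands_per_day * 365 * 70   # every day for 70 years

sigma_bb = 5.0   # assumed std. dev. of a single hand's outcome, in big blinds
z_95 = 1.645     # one-sided 95% critical value

# Smallest average win rate (big blinds per hand) that a lifetime of play
# could distinguish from zero at this confidence level.
detectable_edge = z_95 * sigma_bb / math.sqrt(lifetime_hands)

print(f"Lifetime hands: {lifetime_hands:,}")
print(f"Smallest detectable edge: {detectable_edge * 1000:.2f} milli-big-blinds per hand")
```

Under these assumptions, a lifetime works out to about 61 million hands, and the smallest detectable edge is on the order of one milli-big-blind per hand, which is the scale at which the “essentially solved” definition operates.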

Read more: Poker-playing program knows when to fold ’em

 


Knowledge for earnings’ sake


Good teachers have a surprisingly big impact on their pupils’ future income

There are few policy questions to which improving the quality of education is not a reasonable answer. Yet assessing teachers is far from straightforward. Pupils’ grades or test scores may reflect any of a host of influences, not just the standard of instruction. Neither can one take for granted that good teaching, however it is measured, will translate into better lives for its recipients. In two new working papers, Raj Chetty and John Friedman of Harvard University and Jonah Rockoff of Columbia University deploy some statistical wizardry to tease out the value of teaching. Good teachers, they find, are worth their weight in gold.

School systems often try to assess the quality of their teachers by measuring “value added”: the effect a given teacher has on pupils’ test scores. Research is divided on whether this makes sense. Critics reckon that pupils with particular backgrounds—from richer families, for instance, or with more attentive parents—wind up in classrooms with better teachers. If so, the teachers who excel in assessments of value added may simply be teaching more privileged pupils. Messrs Chetty, Friedman and Rockoff try first to settle this debate.

To do so, the economists draw on a substantial data-set from a large, urban American school district, which they do not name. It covers 20 years of results and takes in more than 2.5m pupils. The data include teacher assignments and their pupils’ test scores from third to eighth grade (ages eight to 14, roughly speaking). The authors dig into the mountain of numbers to calculate the effect each teacher has on their pupils’ performance, after adjusting for demography and previous test scores. Previous scores, they reckon, do a good job of capturing the various external influences pupils bring to the classroom, from how well-nourished they are to how good their former teachers were. The authors also control for “drift”, or the fact that more recent test scores are better reflections of a teacher’s effectiveness than older ones, perhaps because the quality of an individual’s teaching tends to change a bit over time. In the end they produce a value-added score for every teacher in their sample.
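For readers who want to see the basic mechanics, here is a minimal sketch of the value-added idea described above: adjust each pupil’s score for prior achievement and demographics, then average the remaining residuals by teacher. It uses synthetic data and hypothetical column names, and it omits the refinements the authors actually use (empirical Bayes shrinkage, the drift adjustment, and so on).

```python
# Minimal sketch of a value-added calculation: regress current scores on
# prior scores and demographics, then average residuals by teacher.
# Synthetic data and hypothetical column names; not the authors' estimator.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "teacher_id": rng.integers(0, 50, n),
    "prior_score": rng.normal(0, 1, n),
    "low_income": rng.integers(0, 2, n),
})
teacher_effect = rng.normal(0, 0.15, 50)
df["score"] = (0.7 * df["prior_score"] - 0.1 * df["low_income"]
               + teacher_effect[df["teacher_id"]] + rng.normal(0, 0.5, n))

# Adjust for prior achievement and demographics with a simple least-squares fit.
X = np.column_stack([np.ones(n), df["prior_score"], df["low_income"]])
beta, *_ = np.linalg.lstsq(X, df["score"].to_numpy(), rcond=None)
df["residual"] = df["score"] - X @ beta

# A teacher's raw value-added estimate is the mean residual of their pupils.
value_added = df.groupby("teacher_id")["residual"].mean()
print(value_added.sort_values(ascending=False).head())
```

Teachers whose pupils systematically beat the prediction get positive value-added scores; the bias question the authors then tackle is whether those residuals reflect teaching or unmeasured pupil advantages.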

The researchers then go hunting for bias: the possibility that good teacher scores actually reflect the lucky circumstances of their pupils. Using data from income-tax records, the authors show that the characteristics of pupils’ parents, such as family income, do not generally predict how teachers perform. They also use changes in the average quality of a school’s teaching staff from year to year (from the movement of a particularly good teacher to a different school, for instance) to check their findings. When the average quality of teachers in fourth grade falls from one year to the next, for example, the performance of the fourth-graders also drops as expected.

A bias-free measure of teacher quality allows the economists to wring fascinating conclusions from their data-set. They find, for instance, that the quality of teachers varies much more within schools than among them: a typical school has teachers spanning 85% of the spectrum for the school system as a whole. Not only do teachers matter, in other words, but the best teachers are not generally clumped within particular schools.

Read more . . .

 
