It is time to use big data to tackle longstanding questions about plant diversity and evolution and to forecast how plant life will fare on an increasingly human-dominated planet

Data from millions of museum specimens, such as this Ziziphus celata or Florida jujube, are now available to scientists around the world via digital databases such as iDigBio. Florida Museum photo by Jeff Gage

A group of Florida Museum of Natural History scientists has issued a “call to action” to use big data to tackle longstanding questions about plant diversity and evolution and forecast how plant life will fare on an increasingly human-dominated planet.

In a commentary published today in Nature Plants, the scientists urged their colleagues to take advantage of massive, open-access data resources in their research and help grow these resources by filling in remaining data gaps.

“Using big data to address major biodiversity issues at the global scale has enormous practical implications, ranging from conservation efforts to predicting and buffering the impacts of climate change,” said study author Doug Soltis, a Florida Museum curator and distinguished professor in the University of Florida department of biology. “The links between big data resources we see now were unimaginable just a decade ago. The time is ripe to leverage these tools and applications, not just for plants but for all groups of organisms.”

Over several centuries, natural history museums have built collections of billions of specimens and their associated data, much of which is now available online. New technologies such as remote sensors and drones allow scientists to monitor plants and animals and transmit data in real time. And citizen scientists are contributing biological data by recording and reporting their observations via digital tools such as iNaturalist.

Together, these data resources provide scientists and conservationists with a wealth of information about the past, present and future of life on Earth. As these databases have grown, so have the computational tools needed not only to analyze but also to link immense data sets.

Studies that previously focused on a handful of species or a single plant community can now expand to a global level, thanks to the development of databases such as GenBank, which stores DNA sequences; iDigBio, a University of Florida-led effort to digitize U.S. natural history collections; and the Global Biodiversity Information Facility, a repository of species’ location information.
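
To make that kind of access concrete, the short Python sketch below pulls occurrence records for Ziziphus celata, the Florida jujube pictured above, from GBIF's public occurrence API. It assumes GBIF's /v1/occurrence/search endpoint and its scientificName and limit parameters; individual records may lack some of the fields read here, and the example is an illustration rather than code from the Nature Plants commentary.

```python
# Minimal sketch: querying GBIF's public occurrence API for one species.
# Assumes the /v1/occurrence/search endpoint with scientificName/limit parameters;
# individual records may lack some of the fields read below.
import requests

GBIF_OCCURRENCE_URL = "https://api.gbif.org/v1/occurrence/search"

def fetch_occurrences(species, limit=20):
    """Return up to `limit` occurrence records for a scientific name."""
    response = requests.get(
        GBIF_OCCURRENCE_URL,
        params={"scientificName": species, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("results", [])

if __name__ == "__main__":
    for record in fetch_occurrences("Ziziphus celata"):
        # Not every record carries coordinates, so read them defensively.
        print(record.get("scientificName"),
              record.get("decimalLatitude"),
              record.get("decimalLongitude"),
              record.get("country"))
```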

These resources can be valuable to a wide range of users, from scientists in pursuit of fundamental insights into plant evolution and ecology to land managers and policymakers looking to identify the regions most in need of conservation, said Julie Allen, co-lead author and an assistant professor in the University of Nevada-Reno department of biology.

If Earth’s plant life were a medical patient, small-scale studies might examine the plant equivalent of a cold sore or an ingrown toenail. With big data, scientists can gain a clearer understanding of global plant health as a whole, make timely diagnoses and prescribe the right treatment plans.

Such plans are urgently needed, Allen said.

“We’re in this exciting and terrifying time in which the unprecedented amount of data available to us intersects with global threats to biodiversity such as habitat loss and climate change,” said Allen, a former Florida Museum postdoctoral researcher and UF doctoral graduate. “Understanding the processes that have shaped our world – how plants are doing, where they are now and why – can help us get a handle on how they might respond to future changes.”

Why is it so vital to track these regional and global changes?

“We can’t survive without plants,” said co-lead author and museum research associate Ryan Folk. “A lot of groups evolved in the shadow of flowering plants. As these plants spread and diversified, so did ants, beetles, ferns and other organisms. They are the base layer to the diversity of life we see on the planet today.”

In addition to using and growing plant data resources, the authors hope the scientific community will address one of the toughest remaining obstacles to using biological big data: getting databases to work smoothly with each other.

“This is still a huge limitation,” Allen said. “The data in each system are often collected in completely different ways. Integrating these to connect in seamless ways is a major challenge.”
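
To illustrate the kind of mismatch Allen describes, here is a minimal Python sketch that maps records from two differently structured sources onto one shared schema before they are linked. The source field names ("scientificName", "coords", "taxon") and the sample values are hypothetical, chosen only to show the harmonization step, not how any of these databases actually store their data.

```python
# Hypothetical illustration: harmonizing records that describe the same thing
# but arrive with different field names and layouts from different sources.
from dataclasses import dataclass

@dataclass
class Occurrence:
    species: str     # canonical scientific name
    latitude: float  # decimal degrees
    longitude: float # decimal degrees
    source: str      # which kind of database the record came from

def from_specimen_record(rec):
    # e.g. a digitized museum specimen with Darwin Core-style field names
    return Occurrence(
        species=rec["scientificName"].strip(),
        latitude=float(rec["decimalLatitude"]),
        longitude=float(rec["decimalLongitude"]),
        source="specimen",
    )

def from_citizen_science_record(rec):
    # e.g. an app observation that stores coordinates as a single "lat,lon" string
    lat, lon = (float(part) for part in rec["coords"].split(","))
    return Occurrence(species=rec["taxon"].strip(), latitude=lat,
                      longitude=lon, source="citizen_science")

# Once both sources map onto the same schema, the records can be linked or pooled.
records = [
    from_specimen_record({"scientificName": "Ziziphus celata",
                          "decimalLatitude": "27.9", "decimalLongitude": "-81.5"}),
    from_citizen_science_record({"taxon": "Ziziphus celata", "coords": "27.8,-81.6"}),
]
print(records)
```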

Learn more: SCIENTISTS: ‘TIME IS RIPE’ TO USE BIG DATA FOR PLANET-SIZED PLANT QUESTIONS

Massive open-access database on human cultures created

via www.shh.mpg.de

An international team of researchers has developed a website at d-place.org to help answer long-standing questions about the forces that shaped human cultural diversity.

D-PLACE – the Database of Places, Language, Culture and Environment – is an expandable, open access database that brings together a dispersed body of information on the language, geography, culture and environment of more than 1,400 human societies. It comprises information mainly on pre-industrial societies that were described by ethnographers in the 19th and early 20th centuries.

The team’s paper on D-PLACE is published today in the journal PLOS ONE.

“Human cultural diversity is expressed in numerous ways: from the foods we eat and the houses we build, to our religious practices and political organisation, to who we marry and the types of games we teach our children,” said Kathryn Kirby, a postdoctoral fellow in the Departments of Ecology & Evolutionary Biology and Geography at the University of Toronto and lead author of the study. “Cultural practices vary across space and time, but the factors and processes that drive cultural change and shape patterns of diversity remain largely unknown.

“D-PLACE will enable a whole new generation of scholars to answer these long-standing questions about the forces that have shaped human cultural diversity.”

Co-author Fiona Jordan, senior lecturer in anthropology at the University of Bristol and one of the project leads, said, “Comparative research is critical for understanding the processes behind cultural diversity. Over a century of anthropological research around the globe has given us a rich resource for understanding the diversity of humanity – but bringing different resources and datasets together has been a huge challenge in the past.

“We’ve drawn on the emerging big data sets from ecology, and combined these with cultural and linguistic data so researchers can visualise diversity at a glance, and download data to analyse in their own projects.”

D-PLACE allows users to search by cultural practice (e.g., monogamy vs. polygamy), environmental variable (e.g., elevation, mean annual temperature), language family (e.g., Indo-European, Austronesian), or region (e.g., Siberia). The search results can be displayed on a map, on a language tree or in a table, and can also be downloaded for further analysis.
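
As a rough illustration of that "downloaded for further analysis" step, the Python snippet below filters a saved search export by language family and an environmental variable. The file name and column names (society, language_family, annual_temp_c) are placeholders for this sketch, not D-PLACE's actual export schema.

```python
# Hypothetical sketch: filtering a downloaded cross-cultural search export.
# The file name and column names are placeholders, not D-PLACE's real schema.
import pandas as pd

df = pd.read_csv("dplace_export.csv")  # a table saved from a search result

austronesian = df[df["language_family"] == "Austronesian"]
warm_climate = austronesian[austronesian["annual_temp_c"] > 20.0]

print(warm_climate[["society", "language_family", "annual_temp_c"]].head())
```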

It aims to enable researchers to investigate the extent to which patterns in cultural diversity are shaped by different forces, including shared history, demographics, migration/diffusion, cultural innovations, and environmental and ecological conditions.

D-PLACE was developed by an international team of scientists interested in cross-cultural research. It includes researchers from the Max Planck Institute for the Science of Human History in Jena, Germany, the University of Auckland, Colorado State University, the University of Toronto, the University of Bristol, Yale University, the Human Relations Area Files, Washington University in Saint Louis, the University of Michigan, the American Museum of Natural History and the City University of New York.

The diverse team included linguists, anthropologists, biogeographers, data scientists, ethnobiologists and evolutionary ecologists, who employ a variety of research methods, including field-based primary data collection, compilation of cross-cultural data sources and analyses of existing cross-cultural datasets.

“The team’s diversity is reflected in D-PLACE, which is designed to appeal to a broad user base,” said Kirby. “Envisioned users range from members of the public world-wide interested in comparing their cultural practices with those of other groups, to cross-cultural researchers interested in pushing the boundaries of existing research into the drivers of cultural change.”

Learn more: Massive open-access database on human cultures created

Patents: The Next Open Access Fight

via www.eff.org

When Universities Sell Patents to Trolls, Publicly Funded Research Is Compromised

There’s been a lot of talk lately about the state of publicly funded research. Many, including EFF, have long called on Congress to pass a law requiring that publicly funded research be made available to the public.

With strong support for FASTR (the Fair Access to Science and Technology Research Act) in both parties, Vice-President Biden making open access a major component of his Cancer Moonshot initiative, and presumptive presidential nominee Hillary Clinton including access to research in her platform, signs are looking good that Congress will finally pass an open access mandate. It’s just a matter of when.

Even if we pass an open access law this year, though, there’s still a major obstacle in the way of publicly funded research fully benefiting the public: patent trolls.

Universities and Patent Trolls: A Twisted Romance

Wait, patent trolls? Those obscure companies that just amass patents and sue people instead of actually making or selling anything? What do they have to do with publicly funded research? Quite a lot, it turns out.

Research universities are among the primary recipients of federal government funding for science. Many of those universities routinely file patents on technologies they develop, and unfortunately, many of those patents end up in the hands of trolls. There are dozens of universities, both public and private, with standing agreements to sell patents to patent assertion entities. When patent trolls’ intentions are so often at odds with the mission of research benefiting the world, it’s worth asking: why do universities sell to them?

A recent Planet Money episode explored a company that’s sued nearly every workout supplement manufacturer in the U.S. over a patent on arginine, a naturally occurring amino acid; the patent originated at Stanford University.

And just last month, we gave our Stupid Patent of the Month award to My Health, a company that appears to do very little besides file patent and trademark infringement lawsuits. Like the arginine patent, My Health’s patent originated at a university, the University of Rochester.

We don’t even know how many university patents trolls control. That’s because a lot of the time, the university is still listed as the owner of the patent, but it gives the troll a broad, exclusive license to litigate it.

Keep in mind that the federal government funds a lot of that research. Even as we move toward a time when most publicly funded research is publicly available, patent trolls make it more difficult for practicing companies to use that knowledge.

Even for research that’s not federally funded, the public has still invested in it in the form of grants, donations, state funding, and tuition fees. If you’re in college right now—or if you’re still paying off your loans—how would you feel if you found out that patent trolls are using that money to bully innovators into paying licensing fees?

Bad for Both Innovation and the Bottom Line

Universities filing patents for federally funded research is a relatively new phenomenon. Thanks to a law enacted in 1980, commonly known as the Bayh-Dole Act, universities can apply for patents for their inventions even if those inventions were funded by the federal government.

Before Bayh-Dole, the government itself was responsible for patenting federally funded inventions; when it did so, it would let others use them only under nonexclusive licenses.

The years following Bayh-Dole saw a major uptick in patents filed by universities. In 1980, 394 utility patents were granted to universities. By 2010, that number had increased tenfold (for comparison, the total number of patents issued increased fivefold over the same 30 years).

Today, it’s unusual for a research university not to have a technology transfer office, an office whose job it is to file patents and sell or transfer them to third parties. Here’s something else a lot of people don’t know about tech transfer: the vast majority of these programs lose money for their schools.

Some defenders of the technology transfer system say that that’s to be expected: the purpose isn’t to make money; it’s to bring their important inventions to market. But again and again, tech transfer programs seem to undermine their own goals. For every patent that gets licensed to a company that actually intends to carry the university’s work forward, many others either go unlicensed (putting a strain on the university’s resources) or are sold to trolls (putting a strain on practicing companies).

Is the purpose of a tech transfer program to make money for the university or is it to stimulate innovation? Either way, many aren’t doing a very good job.

Can Tech Transfer Fix Itself?

Several universities have admitted that selling or licensing patents to trolls is a big problem. The Association of University Technology Managers (AUTM) maintains a document called Nine Points to Consider and a list of over 100 institutions that have endorsed it since 2007.

AUTM’s “points” include prioritizing transferring to companies that are committed to active research and development in the patents’ areas of technology, not those that will simply sit on the patents and wait to extract licensing fees from others. The document even explicitly warns of the dangers of transferring to patent-holding companies:

Universities would better serve the public interest by ensuring appropriate use of their technology by requiring their licensees to operate under a business model that encourages commercialization and does not rely primarily on threats of infringement litigation to generate revenue.

We strongly disagreed with AUTM when it lobbied against patent reform and open education policy. But in this case, AUTM is right.

The way for universities to make sure patented inventions actually get used is to partner with companies committed to advancing those areas of technology, not with those whose business models are based on litigation. We’d add that before filing a patent at all, a university ought to consider whether a patent will support the goal of bringing that particular invention to market.

The Nine Points were a big step in the right direction, but many of the universities that signed it have continued to sell patents to companies that do nothing but sue.

Who Is Your University Listening To?

Learn more: Patents: The Next Open Access Fight

Research publishing: Open science

via blog.geographydirections.com

Old-fashioned ways of reporting new discoveries are holding back medical research. Some scientists are pushing for change

“NEVER tried sharing data like this before,” said the tweet. “Feels like walking into a country for the first time. Exciting, but don’t know what to expect.”

David O’Connor of the University of Wisconsin-Madison was announcing his decision on February 14th to post online data from his laboratory’s latest experiment. He and his team had infected macaques with the Zika virus and were recording the concentrations of virus in the monkeys’ bodily fluids every day. Researchers know that Zika is transmitted principally by infected mosquitoes. But if the virus appears in saliva and urine then these might also be sources of infection.

Dr O’Connor and his colleagues published their results every day to a publicly accessible website. They hoped this would be useful to others working on the disease and, ultimately, to health authorities striving to contain it. They did not expect to garner much attention. But they did.
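
The mechanics of that kind of day-by-day sharing can be very simple. The sketch below appends each day's readings to a machine-readable CSV file that could then be synced to a public website or repository; the field names and sample values are hypothetical and are not the Wisconsin lab's actual format or pipeline.

```python
# Hypothetical sketch: appending a day's measurements to a public CSV log.
# Field names, units and values are illustrative, not the lab's actual format.
import csv
from datetime import date
from pathlib import Path

LOG = Path("viral_loads.csv")

def append_daily_readings(readings):
    """readings: iterable of (animal_id, sample_type, copies_per_ml) tuples."""
    write_header = not LOG.exists()
    with LOG.open("a", newline="") as handle:
        writer = csv.writer(handle)
        if write_header:
            writer.writerow(["date", "animal_id", "sample_type", "copies_per_ml"])
        for animal_id, sample_type, copies_per_ml in readings:
            writer.writerow([date.today().isoformat(), animal_id,
                             sample_type, copies_per_ml])

append_daily_readings([("macaque_01", "plasma", 1.2e4),
                       ("macaque_01", "urine", 3.1e2)])
# The resulting file can be pushed to a public web page or data repository each day.
```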

Within days, researchers from all over the world started contacting them, making suggestions and asking for samples to conduct work that Dr O’Connor’s lab was ill-equipped to carry out. He describes the experience of data-sharing as “universally positive”. But as his tweet suggests, such openness is far from routine.

Careers in medical research hang on publishing papers in prestigious journals such as Nature, Science and Cell. Even in emergencies such as the recent Zika outbreak, or the earlier epidemic of Ebola fever in west Africa, biologists are reluctant to share data until their work is published.

Once a paper is submitted to a journal, though, its findings can languish unseen for months as it goes through a vetting process known as peer review. Reviewers can ask for substantial changes or further experiments, or they can suggest that the journal reject the paper outright. If several journals turn down a paper before it is published, it may be years before the results become public.

Left in the dark in this way, other practitioners may waste time and money conducting unnecessary experiments. In cases where the unpublished work might warn of things like unsafe treatments, the cost of the delay could be measured in lives. Dr O’Connor’s response is part of a reaction against this delay.

Peerless publishing

He is not alone. On February 10th, prompted by concerns that vital data on the Zika epidemic could be held up by journal peer review, scientific academies, research funders and a number of academic publishers urged researchers “to make any information available that might have value in combating the crisis”. The publishers promised that posting a paper online as a so-called preprint would not disqualify it from publication in a journal later.

But not all publishers signed up to the agreement, and it raises many questions. As Stephen Curry of Imperial College London noted in a blog post for the Guardian, a British daily newspaper, if the approach is valid for Zika, then why not for other infectious diseases, including malaria or HIV/AIDS, which kill millions every year?

Learn more: Research publishing: Taking the online medicine

New gold standard established for open and reproducible research

The Rotherhithe Picture Research Library. Credit: Chris Guy


Cambridge computer scientists have established a new gold standard for open research, in order to make scientific results more robust and reliable

A group of Cambridge computer scientists have set a new gold standard for openness and reproducibility in research by sharing the more than 200GB of data and 20,000 lines of code behind their latest results – an unprecedented degree of openness in a peer-reviewed publication. The researchers hope that this new gold standard will be adopted by other fields, increasing the reliability of research results, especially for work which is publicly funded.

The researchers are presenting their results at a talk today (4 May) at the 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI) in Oakland, California.

In recent years there’s been a great deal of discussion about so-called ‘open access’ publications – the idea that research publications, particularly those funded by public money, should be made publicly available.

Computer science has embraced open access more than many disciplines, with some publishers sub-licensing publications and allowing authors to publish them in open archives. However, as more and more corporations publish their research in academic journals, and as academics find themselves in a ‘publish or perish’ culture, the reliability of research results has come into question.

“Open access isn’t as open as you think, especially when there are corporate interests involved,” said Matthew Grosvenor, a PhD student from the University’s Computer Laboratory, and the paper’s lead author. “Due to commercial sensitivities, corporations are reluctant to make their code and data sets available when they publish in peer-reviewed journals. But without the code or data sets, the results are irrelevant – we can’t know whether an experiment is the same if we try to recreate it.”

Beyond computer science, a number of high-profile incidents of errors, fraud or misconduct have called quality standards in research into question. This has thrown the issue of reproducibility – that a result can be reliably repeated given the same conditions – into the spotlight.

“If a result cannot be reliably repeated, then how can we trust it?” said Grosvenor. “If you try to reproduce other people’s work from the paper alone, you often end up with different numbers. Unless you have access to everything, it’s useless to call a piece of research open source. It’s either open source or it’s not – you can’t open source just a little bit.”

With their most recent publication, Grosvenor and his colleagues have gone several steps beyond typical open access standards – setting a new gold standard for open and reproducible research. All of the experimental figures and tables in the award-winning final version of their paper, which describes a new method of making data centres more efficient, are clickable.
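
One practical way to approach that standard is to ship, alongside each figure, a script that regenerates it from the raw data archived with the paper, so readers can verify a plot rather than trust the rendered image. The Python sketch below illustrates the idea with made-up file and column names; it is not the Cambridge group's actual tooling.

```python
# Illustrative sketch: regenerate a paper figure directly from archived raw data.
# File and column names here are placeholders, not the paper's real artifacts.
import pandas as pd
import matplotlib.pyplot as plt

raw = pd.read_csv("data/figure3_latency_measurements.csv")  # hypothetical archive path

# Summarize tail latency per experimental configuration.
summary = raw.groupby("configuration")["latency_us"].quantile(0.99)

ax = summary.plot(kind="bar")
ax.set_ylabel("99th-percentile latency (microseconds)")
ax.set_title("Figure 3, regenerated from raw measurements")
plt.tight_layout()
plt.savefig("figure3_regenerated.pdf")
```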

Read more: New gold standard established for open and reproducible research