The future of search engines

Aug 06, 2017

WordNet is a lexical database for the English language. It groups English words into sets of synonyms called synsets, provides short definitions and usage examples, and records a number of relations among these synonym sets or their members. Researchers from The University of Texas at Austin developed a method to incorporate information from WordNet into information retrieval systems. [Courtesy of Visuwords.]

Researchers combine artificial intelligence, crowdsourcing and supercomputers to develop better, and more reasoned, information extraction and classification methods

How do search engines generate lists of relevant links?

Those lists are the product of two powerful forces in the evolution of information retrieval: artificial intelligence — especially natural language processing — and crowdsourcing.

Computer algorithms interpret the relationship between the words we type and the vast number of possible web pages based on the frequency of linguistic connections in the billions of texts on which the system has been trained.

Matt Lease, Associate Professor, School of Information, University of Texas at Austin

But that is not the only source of information. The semantic relationships are strengthened by professional annotators who hand-tune results — and the algorithms that generate them — for topics of importance, and by web searchers (us) who, with our clicks, tell the algorithms which connections are the best ones.

Despite the incredible, world-changing success of this model, it has its flaws. Search engine results are often not as “smart” as we’d like them to be, lacking a true understanding of language and human logic. Beyond that, they sometimes replicate and deepen the biases embedded in our searches, rather than bringing us new information or insight.

Matthew Lease, an associate professor in the School of Information at The University of Texas at Austin (UT Austin), believes there may be better ways to harness the dual power of computers and human minds to create more intelligent information retrieval (IR) systems.

Combining AI with the insights of annotators and the information encoded in domain-specific resources, he and his collaborators are developing new approaches to IR that will benefit general search engines, as well as niche ones like those for medical knowledge or non-English texts.

This week, at the 2017 Annual Meeting of the Association for Computational Linguistics in Vancouver, Canada, Lease and collaborators from UT Austin and Northeastern University presented two papers describing their novel IR systems. Their research leverages the supercomputing resources at the Texas Advanced Computing Center, one of the leading supercomputing research centers in the world.


In one paper, led by Ph.D. student An Nguyen, they presented a method that combines input from multiple annotators to determine the best overall annotation for a given text. They applied this method to two problems: analyzing free-text research articles describing medical studies to extract details of each study (e.g., the condition, patient demographics, treatments, and outcomes), and named-entity recognition — analyzing breaking news stories to identify the events, people, and places involved.

“An important challenge in natural language processing is accurately finding important information contained in free-text, which lets us extract it into databases and combine it with other data in order to make more intelligent decisions and new discoveries,” Lease said. “We’ve been using crowdsourcing to annotate medical and news articles at scale so that our intelligent systems will be able to more accurately find the key information contained in each article.”

Such annotation has traditionally been performed by in-house domain experts. More recently, however, crowdsourcing has become a popular means to acquire large labeled datasets at lower cost. Predictably, annotations from laypeople are of lower quality than those from domain experts, so it is necessary to estimate the reliability of crowd annotators and to aggregate individual annotations into a single set of “reference standard” consensus labels.
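The simplest way to aggregate crowd annotations is a plurality vote per token. This is a minimal sketch of that baseline, not the paper's model (which also learns annotator reliability); the labels and sentence here are hypothetical:

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate per-token labels from several crowd workers into a
    single consensus sequence by plurality vote on each token."""
    consensus = []
    for token_labels in zip(*annotations):  # one token's labels, across workers
        consensus.append(Counter(token_labels).most_common(1)[0][0])
    return consensus

# Three workers tag the same 4-token sentence with named-entity labels
worker_labels = [
    ["B-PER", "O", "B-LOC", "O"],
    ["B-PER", "O", "O",     "O"],
    ["B-PER", "O", "B-LOC", "O"],
]
print(majority_vote(worker_labels))  # ['B-PER', 'O', 'B-LOC', 'O']
```

Majority voting treats every worker as equally reliable, which is exactly the assumption that reliability-estimation methods like the one described here relax.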

Lease’s team found that their method was able to train a neural network — a form of artificial intelligence (AI) modeled on the human brain — so it could very accurately predict named entities and extract relevant information in unannotated texts. The new method improved upon existing tagging and training methods. The method also provides an estimate of each worker’s label quality, which can be transferred between tasks and is useful for error analysis and intelligently routing tasks — identifying the best person to annotate each particular text.
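A rough proxy for the per-worker quality estimates described above is each worker's agreement rate with the consensus labels. This sketch uses that simple proxy with hypothetical data; the paper's actual estimator is learned jointly with the model:

```python
def worker_accuracy(annotations, consensus):
    """Score each worker by their rate of agreement with the consensus
    labels -- a crude reliability estimate usable for task routing."""
    return [
        sum(a == c for a, c in zip(labels, consensus)) / len(consensus)
        for labels in annotations
    ]

workers = [
    ["B-PER", "O", "B-LOC", "O"],   # agrees with consensus on all tokens
    ["B-PER", "O", "O",     "O"],   # disagrees on one of four tokens
]
consensus = ["B-PER", "O", "B-LOC", "O"]
print(worker_accuracy(workers, consensus))  # [1.0, 0.75]
```

Scores like these could then be carried between tasks, e.g. to route a new document to the historically most reliable annotator.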


The group’s second paper, led by Ph.D. student Ye Zhang, addressed the fact that neural models for natural language processing (NLP) often ignore existing resources like WordNet — a lexical database for the English language that groups words into sets of synonyms — or domain-specific ontologies, such as the Unified Medical Language System, which encode knowledge about a given field.

An example of grouped partial weight sharing, here with two groups. Lease’s team stochastically selects embedding weights to be shared between words belonging to the same groups. Weight sharing constrains the number of free parameters that a system must learn, increases the efficiency and accuracy of the neural model, and serves as a flexible way to incorporate prior knowledge, combining the best of human knowledge with machine learning. [Courtesy: Ye Zhang, Matthew Lease, UT Austin; Byron C. Wallace, Northeastern University]

They proposed a method for exploiting these existing linguistic resources via weight sharing to improve NLP models for automatic text classification. For example, their model learns to classify whether or not published medical articles describing clinical trials are relevant to a well-specified clinical question.

In weight sharing, words that are similar share some fraction of a weight, or assigned numerical value. Weight sharing constrains the number of free parameters that a system must learn, thereby increasing the efficiency and accuracy of the neural model, and serving as a flexible way to incorporate prior knowledge. In doing so, they combine the best of human knowledge with machine learning.

“Neural network models have tons of parameters and need lots of data to fit them,” said Lease. “We had this idea that if you could somehow reason about some words being related to other words a priori, then instead of having to have a parameter for each one of those words separately, you could tie together the parameters across multiple words and in that way need less data to learn the model. It would realize the benefits of deep learning without large data constraints.”
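Grouped partial weight sharing can be sketched in a few lines. The vocabulary, group names, embedding size, and sharing fraction below are all illustrative assumptions; in the actual model the tied parameters live inside a neural network and are fit to data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary and word groups (e.g. from WordNet synsets
# or a medical ontology like the Unified Medical Language System)
vocab = ["anemia", "anaemia", "fatigue", "tired"]
groups = {"anemia-syns": ["anemia", "anaemia"],
          "fatigue-syns": ["fatigue", "tired"]}
dim, share_frac = 8, 0.5  # embedding size; fraction of weights to tie

# Start with an independent embedding vector for every word
embeddings = {w: rng.normal(size=dim) for w in vocab}

# For each group, stochastically choose dimensions whose weights are
# tied across all group members; the rest stay word-specific
masks = {}
for gid, words in groups.items():
    shared_dims = rng.random(dim) < share_frac  # dimensions to tie
    shared_values = rng.normal(size=dim)        # one tied parameter vector
    masks[gid] = shared_dims
    for w in words:
        embeddings[w][shared_dims] = shared_values[shared_dims]

# Group members now agree on the tied dimensions but differ elsewhere,
# leaving fewer free parameters for the model to learn
assert np.array_equal(embeddings["anemia"][masks["anemia-syns"]],
                      embeddings["anaemia"][masks["anemia-syns"]])
```

Because only a fraction of each vector is tied, related words keep room to diverge where the data demands it, while the shared portion injects the prior knowledge that they are similar.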

They applied a form of weight sharing to a sentiment analysis of movie reviews and to a biomedical search related to anemia. Their approach consistently yielded improved performance on classification tasks compared to strategies that did not exploit weight sharing.

“This provides a general framework for codifying and exploiting domain knowledge in data-driven neural network models,” said Byron Wallace, Lease’s collaborator from Northeastern University. (Wallace, formerly a faculty member at UT Austin, is also a frequent TACC user.)

Lease, Wallace and their collaborators used the GPUs (graphics processing units) on the Maverick supercomputer at TACC to enable their analyses and train the machine learning system.

“Training neural computing models for big data takes a lot of compute time,” Lease said. “That’s where TACC fits in as a wonderful resource, not only because of the great storage that’s available, but also the large number of nodes and the high processing speeds available for training neural models.”

In addition to GPUs, TACC deploys cutting-edge processing architectures developed by Intel, with which machine learning libraries are still catching up, according to Lease.


“Though many deep learning libraries have been highly optimized for processing on GPUs, there’s reason to think that these other architectures will be faster in the long term once they’ve been optimized as well,” he said.

“With the introduction of Stampede2 and its many-core architecture, we are glad to see more optimization of CPU-based machine learning frameworks,” said Niall Gaffney, Director of Data Intensive Computing at TACC. “Projects like Matt’s show the power of machine learning in both measured and simulated data analysis.”

Gaffney said that in TACC’s initial work with Caffe — a deep learning framework developed at the University of California, Berkeley, which has been optimized by Intel for Xeon Phi processors — they are finding that CPUs deliver roughly equivalent performance to GPUs for many AI jobs.

“This can be transformative, as it allows us to offer more nodes that can satisfy these researchers, as well as allowing HPC users to leverage AI in their analysis phases without having to move to a different GPU-enabled system,” he said.

As core natural language processing technologies for automatic information extraction and text classification improve, the web search engines built on them can continue to improve as well.

Lease has received grants from the National Science Foundation (NSF), the Institute of Museum and Library Services (IMLS) and the Defense Advanced Research Projects Agency (DARPA) to improve the quality of crowdsourcing across a variety of tasks, scales, and settings. He says that though commercial web search companies invest a lot of resources to develop practical, effective solutions, the demands of industry lead them to focus on problems with commercial application and short-term solutions.

“Industry is great at looking at near-term things, but they don’t have the same freedom as academic researchers to pursue research ideas that are higher risk but could be more transformative in the long-term,” Lease said. “This is where we benefit from public investment for powering discoveries. Resources like TACC are incredibly empowering for researchers in enabling us to pursue high-risk, potentially transformative research.”



