Feb 26, 2017

Human whipworm via University of Manchester

The human whipworm, which infects 500 million people and can damage physical and mental growth, is killed at egg and adult stage by a new drug class developed at the Universities of Manchester and Oxford and University College London.

Current treatments for human whipworm are based on 1960s drugs initially developed for livestock and have a low success rate in people. There are also no vaccines available.

As a result there’s a desperate need for new treatments. The team from the three UK universities, whose results have been published in the journal PLOS Neglected Tropical Diseases, studied a class of dihydrobenzoxazepinones, not previously associated with controlling whipworms.

The researchers found that the compounds kill the adult stages of the whipworm much more effectively than existing drugs.

Parasite immunologist Professor Kathryn Else from The University of Manchester said: “Eradicating the whipworm requires more effective drugs, improved hygiene and vaccine development. The compounds we have discovered could address the first two of these.”


Whipworm eggs are also affected by the compounds. The eggs are passed from infected faeces to people by hand-to-mouth contact, often in unsanitary toilets or in areas where people live close together. They are highly resistant to extreme temperature changes and ultraviolet radiation and can remain viable in the environment for many years.

Because the new compounds are effective against the eggs, they could be developed into a spray that stops infection at its source.

The researchers are now modifying their compounds to make them effective enough for a treatment in humans, and one that can be turned into a product used in the developing countries most affected.

Professor Else said: “This team brought expertise from immunology, medicinal chemistry and neurobiology and really shows how combining across disciplines and institutions can lead to important new discoveries.

“Although we rarely see whipworm infection in the UK, it is a serious and damaging problem in many parts of the world and if we can develop this treatment, the lives of many people could be improved.”

Learn more: Enormous promise for new parasitic infection treatment


Feb 26, 2017


A two-dimensional material developed by Bayreuth physicist Prof. Dr. Axel Enders together with international partners could revolutionize electronics.

Semiconductors that are as thin as an atom are no longer the stuff of science fiction. Bayreuth physicist Prof. Dr. Axel Enders, together with partners in Poland and the US, has developed a two-dimensional material that could revolutionize electronics. Thanks to its semiconductor properties, this material could be much better suited for high tech applications than graphene, the discovery of which in 2004 was celebrated worldwide as a scientific breakthrough. This new material contains carbon, boron, and nitrogen, and its chemical name is “Hexagonal Boron-Carbon-Nitrogen (h-BCN)”. The new development was published in the journal ACS Nano.

“Our findings could be the starting point for a new generation of electronic transistors, circuits, and sensors that are much smaller and more bendable than the electronic elements used to date. They are likely to enable a considerable decrease in power consumption,” Prof. Enders predicts, citing the CMOS technology that currently dominates the electronics industry. This technology has clear limits with regard to further miniaturization. “h-BCN is much better suited than graphene when it comes to pushing these limits,” according to Enders.

Graphene is a two-dimensional lattice made up entirely of carbon atoms. It is thus just as thin as a single atom. Once scientists began investigating these structures more closely, their remarkable properties were greeted with enthusiasm across the world. Graphene is 100 to 300 times stronger than steel and is, at the same time, an excellent conductor of heat and electricity. However, electrons are able to flow through it unhindered at any applied voltage, so there is no defined on state or off state. “For this reason, graphene is not well suited for most electronic devices. Semiconductors are required, since only they can ensure switchable on and off states,” Prof. Enders explained. He had the idea of replacing individual carbon atoms in graphene with boron and nitrogen, resulting in a two-dimensional grid with the properties of a semiconductor. He has now been able to turn this idea into reality with his team of scientists at the University of Nebraska-Lincoln. Research partners at the University of Cracow, the State University of New York, Boston College, and Tufts University also contributed to this achievement.

Learn more: As Thin as an Atom: a revolutionary semiconductor for electronics


Feb 25, 2017

via check the science

Early this year, about 30 neuroscientists and computer programmers got together to improve their ability to read the human mind.

The hackathon was one of several that researchers from Princeton University and Intel, the largest maker of computer processors, organized to build software that can tell what a person is thinking in real time, while the person is thinking it.

The collaboration between researchers at Princeton and Intel has enabled rapid progress on the ability to decode digital brain data, scanned using functional magnetic resonance imaging (fMRI), to reveal how neural activity gives rise to learning, memory and other cognitive functions. A review of computational advances toward decoding brain scans was published today in the journal Nature Neuroscience, authored by researchers at the Princeton Neuroscience Institute and Princeton’s departments of computer science and electrical engineering, together with colleagues at Intel Labs, a research arm of Intel.

“The capacity to monitor the brain in real time has tremendous potential for improving the diagnosis and treatment of brain disorders as well as for basic research on how the mind works,” said Jonathan Cohen, the Robert Bendheim and Lynn Bendheim Thoman Professor in Neuroscience, co-director of the Princeton Neuroscience Institute, and one of the founding members of the collaboration with Intel.

Since the collaboration’s inception two years ago, the researchers have whittled the time it takes to extract thoughts from brain scans from days down to less than a second, said Cohen, who is also a professor of psychology.

One type of experiment that is benefiting from real-time decoding of thoughts occurred during the hackathon. The study, designed by J. Benjamin Hutchinson, a former postdoctoral researcher in the Princeton Neuroscience Institute who is now an assistant professor at Northeastern University, aimed to explore activity in the brain when a person is paying attention to the environment, versus when his or her attention wanders to other thoughts or memories.

In the experiment, Hutchinson asked a research volunteer — a graduate student lying in the fMRI scanner — to look at a detail-filled picture of people in a crowded café. From his computer in the console room, Hutchinson could tell in real time whether the graduate student was paying attention to the picture or whether her mind was drifting to internal thoughts. Hutchinson could then give the graduate student feedback on how well she was paying attention by making the picture clearer and stronger in color when her mind was focused on the picture, and fading the picture when her attention drifted.

The ongoing collaboration has benefited neuroscientists who want to learn more about the brain and computer scientists who want to design more efficient computer algorithms and processing methods to rapidly sort through large data sets, according to Theodore Willke, a senior principal engineer at Intel Labs in Hillsboro, Oregon, and head of Intel’s Mind’s Eye Lab. Willke directs Intel’s part of the collaborative team.

“Intel was interested in working on emerging applications for high-performance computing, and the collaboration with Princeton provided us with new challenges,” Willke said. “We also hope to export what we learn from studies of human intelligence and cognition to machine learning and artificial intelligence, with the goal of advancing other important objectives, such as safer autonomous driving, quicker drug discovery and earlier detection of cancer.”

Since the invention of fMRI two decades ago, researchers have been improving the ability to sift through the enormous amounts of data in each scan. An fMRI scanner captures signals from changes in blood flow that happen in the brain from moment to moment as we are thinking. But reading from these measurements the actual thoughts a person is having is a challenge, and doing it in real time is even more challenging.

A number of techniques for processing these data have been developed at Princeton and other institutions. For example, work by Peter Ramadge, the Gordon Y.S. Wu Professor of Engineering and professor of electrical engineering at Princeton, has enabled researchers to identify brain activity patterns that correlate to thoughts by combining data from brain scans from multiple people. Designing computerized instructions, or algorithms, to carry out these analyses continues to be a major area of research.

Powerful high-performance computers help cut down the time that it takes to do these analyses by breaking the task up into chunks that can be processed in parallel. The combination of better algorithms and parallel computing is what enabled the collaboration to achieve real-time brain scan processing, according to Kai Li, Princeton’s Paul M. Wythes ’55 P86 and Marcia R. Wythes P86 Professor in Computer Science and one of the founders of the collaboration.

Researchers at Princeton University and Intel Labs have developed software that can interpret people’s thoughts in real time while their brains are being scanned using functional magnetic resonance imaging (fMRI). The goal is to reveal how neural activity corresponds to learning, memory and other brain functions. In the video, Nicholas Turk-Browne, professor of psychology, explains a typical experiment in which the researchers in the control room can monitor the ability of a volunteer, who is lying in the fMRI scanner, to pay attention to figures in a picture of a busy café scene. The experiment was designed by J. Benjamin Hutchinson, a former postdoctoral researcher in the Princeton Neuroscience Institute who is now an assistant professor at Northeastern University. Yida Wang, who earned his doctorate in computer science from Princeton in 2016 and now works at Intel Labs, helped design the software that makes real-time analysis of the fMRI data possible. Also present in the video is graduate student Anne Mennen, who is using the real-time analysis capability to study learning and memory. (Video by Danielle Alio, Office of Communications)

A true collaboration

Since the beginning of the collaboration in 2015, Intel has contributed more than $1.5 million to Princeton in computer hardware and support for Princeton graduate students and postdoctoral researchers. Intel also employs 10 computer scientists who work on this project and who collaborate closely with Princeton faculty, students and postdocs to improve the software.

These algorithms locate thoughts within the data by using machine learning, the same technique that facial recognition software uses to help find friends in social media platforms such as Facebook. Machine learning involves exposing computers to enough examples so that the computers can classify new objects that they’ve never seen before.
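As a rough illustration of the idea (not the collaboration's actual pipeline, and with simulated rather than real fMRI data), a decoder of this kind can be trained on labelled voxel patterns and then asked to classify a pattern it has never seen. Here scikit-learn's logistic regression stands in for the real machine-learning model:

```python
# Illustrative sketch: classify simulated "brain states" from voxel patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_scans, n_voxels = 200, 500

# Simulate two cognitive states; state 1 slightly raises activity in 50 voxels.
labels = rng.integers(0, 2, n_scans)          # 0 = attending, 1 = mind-wandering
patterns = rng.normal(size=(n_scans, n_voxels))
patterns[labels == 1, :50] += 0.8

# Train on the first 150 scans, then test on scans the model has never seen.
clf = LogisticRegression(max_iter=1000).fit(patterns[:150], labels[:150])
accuracy = clf.score(patterns[150:], labels[150:])
print(f"held-out accuracy: {accuracy:.2f}")   # well above the 50% chance level
```

In a real-time setting, the trained classifier would be applied to each new scan as it arrives, which is what makes closed-loop feedback experiments like Hutchinson's possible.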

One of the results of the collaboration has been the creation of a software toolbox, called the Brain Imaging Analysis Kit (BrainIAK), that is openly available via the Internet to any researchers looking to process fMRI data. The team is now working on building a real-time analysis service. “The idea is that even researchers who don’t have access to high-performance computers, or who don’t know how to write software to run their analyses on these computers, would be able to use these tools to decode brain scans in real time,” said Li.

What these scientists learn about the brain may eventually help individuals combat difficulties with paying attention, or other conditions that benefit from immediate feedback.

For example, real-time feedback may help patients train their brains to weaken intrusive memories. While such “brain-training” approaches need additional validation to make sure that the brain is learning new patterns and not just becoming good at doing the training exercise, these feedback approaches offer the potential for new therapies, Cohen said. Real-time analysis of the brain could also help clinicians make diagnoses, he said.

The ability to decode the brain in real time also has applications in basic brain research, said Kenneth Norman, professor of psychology and the Princeton Neuroscience Institute. “As cognitive neuroscientists, we’re interested in learning how the brain gives rise to thinking,” said Norman. “Being able to do this in real time vastly increases the range of science that we can do,” he said.

A clearer window into what people are thinking

Another way the technology can be used is in studies of how we learn. For example, when a person listens to a math lecture, certain neural patterns are activated. Researchers could look at the neural patterns of people who understand the math lecture and see how they differ from neural patterns of someone who isn’t following along as well, according to Norman.

The ongoing collaboration is now focused on improving the technology to obtain a clearer window into what people are thinking about, for example, decoding in real time the specific identity of a face that a person is mentally visualizing.

One of the challenges the computer scientists had to overcome was how to apply machine learning to the type of data generated by brain scans. A face-recognition algorithm can scan hundreds of thousands of photographs to learn how to classify new faces, but the logistics of scanning people’s brains are such that researchers usually only have access to a few hundred scans per person.

Although the number of scans is small, each scan contains a rich trove of data. The software divides the brain images into little cubes, each about one millimeter wide. These cubes, called voxels, are analogous to the pixels in a two-dimensional picture. The brain activity in each cube is constantly changing.

To make matters more complex, it is the connections between brain regions that give rise to our thoughts. A typical scan can contain 100,000 voxels, and if each voxel can talk to all the other voxels, the number of possible conversations is immense. And these conversations are changing second by second. The collaboration of Intel and Princeton computer scientists overcame this computational challenge. The effort included Li as well as Barbara Engelhardt, assistant professor of computer science, and Yida Wang, who earned his doctorate in computer science from Princeton in 2016 and now works at Intel Labs.
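The scale of that challenge can be sketched with a back-of-envelope calculation (the numbers are illustrative, not the collaboration's actual code): with 100,000 voxels, the count of possible pairwise "conversations" runs to billions, and even a toy correlation analysis between voxel time series produces a large matrix.

```python
# How many voxel-to-voxel "conversations" are possible in one scan?
import numpy as np

n_voxels = 100_000
pairs = n_voxels * (n_voxels - 1) // 2
print(f"possible voxel pairs: {pairs:,}")      # roughly 5 billion

# On a toy scale, one such analysis is a correlation matrix between
# voxel time series: n_voxels x n_voxels, recomputed as the scan runs.
toy = np.random.default_rng(0).normal(size=(1000, 120))  # 1,000 voxels, 120 time points
corr = np.corrcoef(toy)                                   # 1,000 x 1,000 matrix
print(corr.shape)
```

Recomputing something like this second by second is exactly the kind of workload that demanded the better algorithms and parallel computing described above.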

Prior to the recent progress, it would take researchers months to analyze a data set, said Nicholas Turk-Browne, professor of psychology at Princeton. With the availability of real-time fMRI, a researcher can change the experiment while it is ongoing. “If my hypothesis concerns a certain region of the brain and I detect in real time that my experiment is not engaging that brain region, then we can change what we ask the research volunteer to do to better engage that region, potentially saving precious time and accelerating scientific discovery,” Turk-Browne said.

One eventual goal is to be able to create pictures from people’s thoughts, said Turk-Browne. “If you are in the scanner and you are retrieving a special memory, such as from childhood, we would hope to generate a photograph of that experience on the screen. That is still far off, but we are making good progress.”

Learn more: Princeton-Intel collaboration breaks new ground in studies of the brain


Feb 25, 2017

Duke University researchers have engineered rhodium nanoparticles (blue) that can harness the energy in ultraviolet light and use it to catalyze the conversion of carbon dioxide to methane, a key building block for many types of fuels. Credit: Chad Scales

Illuminated rhodium nanoparticles catalyze key chemical reaction

Duke University researchers have developed tiny nanoparticles that help convert carbon dioxide into methane using only ultraviolet light as an energy source.

Having found a catalyst that can do this important chemistry using ultraviolet light, the team now hopes to develop a version that would run on natural sunlight, a potential boon to alternative energy.

Chemists have long sought an efficient, light-driven catalyst to power this reaction, which could help reduce the growing levels of carbon dioxide in our atmosphere by converting it into methane, a key building block for many types of fuels.

Not only are the rhodium nanoparticles made more efficient when illuminated by light, they have the advantage of strongly favoring the formation of methane rather than an equal mix of methane and undesirable side-products like carbon monoxide. This strong “selectivity” of the light-driven catalysis may also extend to other important chemical reactions, the researchers say.

“The fact that you can use light to influence a specific reaction pathway is very exciting,” said Jie Liu, the George B. Geller Professor of Chemistry at Duke University. “This discovery will really advance the understanding of catalysis.”

The paper appears online Feb. 23 in Nature Communications.

Despite being one of the rarest elements on Earth, rhodium plays a surprisingly important role in our everyday lives. Small amounts of the silvery grey metal are used to speed up or “catalyze” a number of key industrial processes, including those that make drugs, detergents and nitrogen fertilizer, and it even plays a major role in breaking down toxic pollutants in the catalytic converters of our cars.

Rhodium accelerates these reactions with an added boost of energy, which usually comes in the form of heat because it is easily produced and absorbed. However, high temperatures also cause problems, like shortened catalyst lifetimes and the unwanted synthesis of undesired products.

In the past two decades, scientists have explored new and useful ways that light can be used to add energy to bits of metal shrunk down to the nanoscale, a field called plasmonics.

“Effectively, plasmonic metal nanoparticles act like little antennas that absorb visible or ultraviolet light very efficiently and can do a number of things like generate strong electric fields,” said Henry Everitt, an adjunct professor of physics at Duke and senior research scientist at the Army’s Aviation and Missile RD&E Center at Redstone Arsenal, AL. “For the last few years there has been a recognition that this property might be applied to catalysis.”

Rhodium nanocubes observed under a transmission electron microscope. Credit: Xiao Zhang

Xiao Zhang, a graduate student in Jie Liu’s lab, synthesized rhodium nanocubes that were the optimal size for absorbing near-ultraviolet light. He then placed small amounts of the charcoal-colored nanoparticles into a reaction chamber and passed mixtures of carbon dioxide and hydrogen through the powdery material.

When Zhang heated the nanoparticles to 300 degrees Celsius, the reaction generated an equal mix of methane and carbon monoxide, a poisonous gas. When he turned off the heat and instead illuminated the nanoparticles with a high-powered ultraviolet LED lamp, Zhang was surprised to find not only that carbon dioxide and hydrogen reacted at room temperature, but also that the reaction almost exclusively produced methane.

“We discovered that when we shine light on rhodium nanostructures, we can force the chemical reaction to go in one direction more than another,” Everitt said. “So we get to choose how the reaction goes with light in a way that we can’t do with heat.”

This selectivity — the ability to control the chemical reaction so that it generates the desired product with little or no side-products — is an important factor in determining the cost and feasibility of industrial-scale reactions, Zhang says.

“If the reaction has only 50 percent selectivity, then the cost will be double what it would be if the selectivity is nearly 100 percent,” Zhang said. “And if the selectivity is very high, you can also save time and energy by not having to purify the product.”
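Zhang's arithmetic can be made concrete with a toy calculation (the numbers here are hypothetical, not from the study): the feedstock cost per unit of the desired product scales inversely with selectivity, so halving the selectivity doubles the cost.

```python
# Toy model of selectivity vs. cost: at 50% selectivity, half the feedstock
# ends up as unwanted side-products, so twice as much is consumed per unit
# of the desired product.
def cost_per_unit(feedstock_cost, selectivity):
    """Feedstock cost consumed per unit of the desired product."""
    return feedstock_cost / selectivity

print(cost_per_unit(100, 0.5))   # 50% selective  -> 200.0
print(cost_per_unit(100, 1.0))   # ~100% selective -> 100.0, half the cost
```

This ignores purification costs entirely, which, as Zhang notes, only widens the gap in practice.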

Now the team plans to test whether their light-powered technique might drive other reactions that are currently catalyzed with heated rhodium metal. By tweaking the size of the rhodium nanoparticles, they also hope to develop a version of the catalyst that is powered by sunlight, creating a solar-powered reaction that could be integrated into renewable energy systems.

“Our discovery of the unique way light can efficiently, selectively influence catalysis came as a result of an on-going collaboration between experimentalists and theorists,” Liu said. “Professor Weitao Yang’s group in the Duke chemistry department provided critical theoretical insights that helped us understand what was happening. This sort of analysis can be applied to many important chemical reactions, and we have only just begun to explore this exciting new approach to catalysis.”


Feb 24, 2017

via The Journal of Geoethical Nanotechnology

Researchers say ‘benevolent bots’, otherwise known as software robots, designed to improve articles on Wikipedia often end up in online fights lasting years over changes in content.

Editing bots on Wikipedia undo vandalism, enforce bans, check spelling, create links and import content automatically, whereas other, non-editing bots can mine data or identify copyright infringements. The team looked at how much disruption the bots caused on Wikipedia sites, examining how they interacted on 13 different language editions over ten years (from 2001 to 2010). They found that bots interacted with one another, whether or not this was by design, which led to unpredictable consequences. The research paper, published in PLOS ONE, concludes that bots are more like humans than you might expect, as they appear to behave differently in culturally distinct online environments. The paper says the findings are a warning to those using artificial intelligence to build autonomous vehicles, cyber-security systems or to manage social media. It suggests that scientists may have to devote more attention to bots’ diverse ‘social life’ and their different cultures.

The research paper by the University of Oxford and the Alan Turing Institute in the UK explains that although the online world has become an ecosystem of bots, our knowledge of how they interact with each other is still rather poor. Although bots are automatons that do not have the capacity for emotions, bot-to-bot interactions are unpredictable and play out in distinctive ways. It finds that German editions of Wikipedia had the fewest conflicts between bots, with each bot undoing another’s edits 24 times, on average, over ten years. This shows relative efficiency, says the research paper, when compared with bots on the Portuguese Wikipedia edition, which undid another bot’s edits 185 times, on average, over ten years. Bots on English Wikipedia undid another bot’s work 105 times, on average, over ten years, three times the rate of human reverts, says the paper.

The findings show that even simple autonomous algorithms can produce complex interactions that result in unintended consequences – ‘sterile fights’ that may continue for years, or in some cases reach deadlock. The paper says that while bots constitute a tiny proportion (0.1%) of Wikipedia editors, they stand behind a significant proportion of all edits. Although such conflicts represent a small proportion of the bots’ overall editorial activity, these findings are significant in highlighting their unpredictability and complexity. Smaller language editions, such as the Polish Wikipedia, have far more content created by bots than the larger language editions, such as English Wikipedia.

Lead author Dr Milena Tsvetkova, from the Oxford Internet Institute, said: ‘We find that bots behave differently in different cultural environments and their conflicts are also very different to the ones between human editors. This has implications not only for how we design artificial agents but also for how we study them. We need more research into the sociology of bots.’


The paper was co-authored by Dr Taha Yasseri, also from the Oxford Internet Institute and principal investigator of the EC Horizon 2020-funded project HUMANE. He added: ‘The findings show that even the same technology leads to different outcomes depending on the cultural environment. An automated vehicle will drive differently on a German autobahn to how it will through the Tuscan hills of Italy. Similarly, the local online infrastructure that bots inhabit will have some bearing on how they behave and on their performance. Bots are designed by humans from different countries, so when they encounter one another, this can lead to online clashes. We see differences in the technology used in the different Wikipedia language editions, and the different cultures of the communities of Wikipedia editors involved create complicated interactions. This complexity is a fundamental feature that needs to be considered in any conversation related to automation and artificial intelligence.’

Professor Luciano Floridi, also an author of the paper, remarked: ‘We tend to forget that coordination even among collaborative agents is often achieved only through frameworks of rules that facilitate the wanted outcomes. This infrastructural ethics or infra-ethics needs to be designed as much and as carefully as the agents that inhabit it.’

The research finds that the number of reverts is smaller for bots than for humans, but the bots’ behaviour is more varied, and conflicts involving bots last longer and are triggered later. For humans, the characteristic times between successive reverts are 2 minutes, 24 hours or one year, says the paper. The first bot-to-bot revert happened a month later, on average, but further reverts often continued for years. The paper suggests that humans use automatic tools that report live changes and so can react more quickly, whereas bots systematically crawl over web articles and can be restricted in the number of edits allowed. The fact that bots’ conflicts are long-standing also flags that humans are failing to spot the problems early enough, suggests the paper.

Learn more: Computer bots are like humans, having fights lasting years


Feb 24, 2017

Photo: Kota Kumagai, Utsunomiya University

Researchers have developed a completely new type of display that creates 3D images by using a laser to form tiny bubbles inside a liquid “screen.” Instead of rendering a 3D scene on a flat surface, the display itself is three-dimensional, a property known as volumetric. This allows viewers to see a 3D image in the columnar display from all angles without any 3D glasses or headsets.

In The Optical Society’s journal for high impact research, Optica, researchers led by Yoshio Hayasaki of Utsunomiya University, Japan, demonstrated the ability of their volumetric bubble display to create changeable color graphics.

“Creating a full-color updatable volumetric display is challenging because many three-dimensional pixels, or voxels, with different colors have to be formed to make volumetric graphics,” said Kota Kumagai, first author of the paper. “In our display, the microbubble voxels are three-dimensionally generated in a liquid using focused femtosecond laser pulses. The bubble graphics can be colored by changing the color of the illumination light.”

Although the new work is a proof of concept, the technology might one day allow full-color updatable volumetric displays. These types of displays could be used for art or museum exhibits, where viewers can walk all the way around the display. They are also being explored for helping doctors visualize a patient’s anatomy prior to surgery or to let the military study terrain and buildings prior to a mission.

“The volumetric bubble display is most suited for public facilities such as a museum or an aquarium because, currently, the system setup is big and expensive,” said Kumagai. “However, in the future, we hope to improve the size and cost of the laser source and optical devices to create a smaller system that might be affordable for personal use.”

Using lasers to make bubbles

The bubbles for the new display are created by a phenomenon known as multiphoton absorption, which occurs when multiple photons from a femtosecond laser are absorbed at the point where the light is focused. Multiphoton absorption allowed the researchers to create microbubbles at very precise locations by moving the focus of the laser light to various parts of a liquid-filled cuvette that acted as a “screen.” Using a high-viscosity, or thick, liquid prevents the bubbles, once formed, from immediately rising to the top of the liquid.

The bubble graphics are viewable when they scatter light from an external light source such as a halogen lamp or high-power LED. The researchers produced monochromatic images in white, red, blue and green by switching the color of the illuminating LED. They say that illuminating the graphics with a projector could create different colors in different regions of the image.

Rather than creating each bubble one by one, the researchers used a computer-generated hologram to form 3D patterns of laser light that let them control the number and shapes of the microbubble voxels. This approach also increased the amount of light scattered from the microbubbles, making the images brighter.

In the paper, the researchers demonstrate their technique by creating a sequence of 2D bubble images of a mermaid, a 3D rendered bunny, and 2D dolphin graphics in four different colors. They also showed that microbubble formation depends on the irradiation energy of the laser and that the contrast could be modified by changing the number of laser pulses used to irradiate the liquid.

“Our bubble graphics have a wide viewing angle and can be refreshed and colored,” said Kumagai. “Although our first volumetric graphics are on the scale of millimeters, we achieved the first step toward an updatable full-color volumetric display.”

The researchers are now developing a system that would use a stream inside the liquid to burst the bubbles, allowing the image to be changed or cleared. They are also working on methods that could allow the formation of larger graphics, which requires overcoming spherical aberrations caused by the refractive index mismatch between the liquid screen, the glass holding the liquid, and air.

Learn more: Researchers Use Laser-Generated Bubbles to Create 3D Images in Liquid


Feb 23, 2017

The compound (RgIA) in the study was obtained from the venom of Conus regius, the royal cone.

An alternative to opioids? Scientists at the University of Utah have found a compound that blocks pain by targeting a pathway not associated with opioids. Research in rodents indicates that the benefits continue long after the compound has cleared the body.

The findings were reported online February 20 in the Proceedings of the National Academy of Sciences.

The opioid crisis has reached epidemic proportions. Opioids are highly addictive, and according to the Centers for Disease Control and Prevention, 91 Americans die every day from an opioid overdose. The medical community is in need of alternative therapies that do not rely on opioid pathways to relieve pain.

“Nature has evolved molecules that are extremely sophisticated and can have unexpected applications,” begins Baldomero Olivera, professor in biology at the University of Utah. “We were interested in using venoms to understand different pathways in the nervous system.”

Conus regius, a small marine cone snail common to the Caribbean Sea, packs a venomous punch, capable of paralyzing and killing its prey.

In this study, the researchers found that a compound isolated from the snail’s venom, RgIA, acts on a pain pathway distinct from that targeted by opioid drugs. Using rodent models, the scientists showed that the α9α10 nicotinic acetylcholine receptor (nAChR) functions as a pain pathway receptor and that RgIA4 is an effective compound for blocking this receptor. The pathway adds to a small number of nonopioid-based pathways that could be further developed to treat chronic pain.

Interestingly, the duration of the pain relief is long, greatly outlasting the presence of the compound in the animal’s system.

The compound is cleared from the body within about four hours, but the scientists found the beneficial effects lingered. “We found that the compound was still working 72 hours after the injection, still preventing pain,” said J. Michael McIntosh, professor of psychiatry at the University of Utah Health Sciences. The duration of the outcome may suggest that the snail compound has a restorative effect on some components of the nervous system.

“What is particularly exciting about these results is the aspect of prevention,” said McIntosh. “Once chronic pain has developed, it is difficult to treat. This compound offers a potential new pathway to prevent chronic pain from developing in the first place and also offers a new therapy to patients with established pain who have run out of options.”

The researchers will now move on to pre-clinical testing to investigate the safety and effectiveness of the compound as a new drug therapy.

Testing a new nonopioid compound

Previous research had shown that RgIA was effective in rodents, but the scientists wanted to ensure they had a compound that would work in people. To do this, they used synthetic chemistry to engineer 20 analogs of the compound. In essence, the scientists started with a key (RgIA) that fits into a lock (the pain pathway receptor α9α10 nAChR). Using the key as a template, they developed new keys (analogs) with slightly different configurations.

The scientists found one key that best fit the lock: the analog RgIA4 tightly bound to the human receptor.

To test whether the compound relieved pain, the scientists administered it to rodents that were exposed to a chemotherapy drug that causes extreme cold sensitivity, as well as hypersensitivity to touch. “Interactions that are not normally painful, like sheets rubbing against the body or pants against the leg, become painful,” said McIntosh.

While the untreated rodents experienced pain after exposure to the chemotherapy drug, rodents given the compound did not, nor did rodents genetically altered to lack the pain pathway receptor. This work demonstrates that α9α10 nAChR acts as a pain pathway receptor, and that RgIA4 prevents the receptor from being activated.

Most pain medications available today work through a limited number of pathways and are not sufficient to alleviate chronic pain. “RgIA4 works by an entirely new pathway, which opens the door for new opportunities to treat pain,” said McIntosh. “We feel that drugs that work by this pathway may reduce the burden of opioid use.”



Feb 232017

Sandia National Laboratories researcher Pylin Sarobol looks at samples of carbide coatings as she stands in front of a deposition chamber. Sarobol and colleagues are working on a process to lay down ceramic coatings kinetically, avoiding the high temperatures that otherwise would be required. (Photo by Randy Montoya)

Room temperature coatings make design, fabrication flexible

Researcher Pylin Sarobol explains an elegant process for ultrafine-grained ceramic coatings in a somewhat inelegant way: sub-micron particles splatting onto a surface.

That splatting action is a key part of a Sandia National Laboratories project to lay down ceramic coatings kinetically. By making high-velocity submicron ceramic particles slam onto surfaces at room temperature, Sarobol and her colleagues avoid the high temperatures otherwise required to process ceramics like alumina and barium titanate.

Coating at room temperature makes microelectronics design and fabrication more flexible and could someday lead to better, less expensive microelectronics components that underpin modern technology. The kinetic process produces nanocrystalline films that are very strong and could be used as protective coatings against wear, corrosion, oxidation and the like.

Sarobol, who works on coatings and additive manufacturing, said it’s difficult to consolidate ceramic coatings and similar hard materials and then integrate them into devices with materials that have relatively low melting temperatures. Because ceramic components are processed at temperatures of about 1,300 degrees Fahrenheit (700 degrees Celsius) or more, it can be difficult to combine them with certain materials that have particular functions within electrical and mechanical devices. For example, current miniature waveguides require micro-machining a tiny piece of electromagnetic material and gluing it onto another material.

“The ability to put down ceramics at room temperature means you can process ceramics and lower-melting temperature materials at the same time,” said Sarobol, who leads the project, now in its second year. “You can now put ceramics on copper, for example. Before you had to make the ceramics first, then put the copper down on it. This process is really about being able to integrate materials, especially ceramics, with other materials.”

It opens up new possibilities for fabrication — electrical circuits combining hybrid materials or tiny capacitors or sensors. “You can imagine spraying functional materials onto a circuit board rather than high-temperature processing, followed by tedious manual assembly,” Sarobol said.

Taking advantage of kinetic energy and materials properties

Rather than heat, aerosol deposition uses kinetic energy and special material properties found at micro- and nano-scales.

There’s still much to learn about the process. “We really need to spend the time to understand the process parameters, how they relate to the resulting microstructures and to the final material properties that we need,” Sarobol said. “When we think about designing a new device, we need to keep the relationship of structure-processing-properties in mind and allow ourselves time to perform the research, the optimization, and understand how we can make the properties of coatings better.”

Room-temperature microscale coatings won’t be a panacea, however, because the process produces nanocrystalline structures — not ideal for coatings for applications such as micro-actuators, micro-motors or capacitors that need large grain structure for better device function, she said.

“The aerosol-deposited coatings are made up of tiny, 20 nanometer crystals that we often call crystallites or grains,” Sarobol said. “When we heat our coatings, these tiny crystals grow and the properties change. By controlling the crystallite size, we can tune the properties in predictable ways to make more functional devices” for different applications.

Only a few places in the world work on such room-temperature, kinetic coating processes. Sarobol’s initial research came as principal investigator for a two-year project, “Room Temperature Solid-State Deposition of Ceramics,” that ended in March 2016. It led to better understanding of the basic building blocks of coatings and the scientific fundamentals behind the process.

Next comes optimizing the process, expanding the materials that can be fabricated and developing them for potential applications, which could take years.

In a nutshell, this is how it works: In aerosol deposition, a nozzle accelerates submicron particles suspended in a gas toward the surface. Particles impact and stick, building up a coating layer by layer. A key is to use submicron particles (50 times smaller than the diameter of a human hair) that allow researchers to tap into materials properties found only at small scales and activate plastic deformation in the aerosol particles. Plastic deformation, or plasticity, is a way to cause a substance to permanently change size or shape under applied stress. It’s the plasticity of submicron particles that causes consolidation of subsequent deposition layers and generates the continuous surface that layers are built upon.
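To get a feel for the energies involved, here is a back-of-envelope estimate of the kinetic energy a single submicron ceramic particle carries at impact. The particle diameter, impact velocity, and alumina density below are illustrative assumptions, not figures from the Sandia study:

```python
import math

# Assumed values for illustration only (not from the Sandia study).
DENSITY_ALUMINA = 3950.0   # kg/m^3, approximate bulk alumina density
DIAMETER = 0.5e-6          # m, a "submicron" particle
VELOCITY = 300.0           # m/s, assumed impact velocity

radius = DIAMETER / 2
mass = DENSITY_ALUMINA * (4.0 / 3.0) * math.pi * radius**3  # sphere mass, kg
kinetic_energy = 0.5 * mass * VELOCITY**2                   # joules

print(f"mass ≈ {mass:.2e} kg, impact energy ≈ {kinetic_energy:.2e} J")
```

The absolute energy per particle is minuscule, but because it is delivered into a contact volume far smaller than the particle itself, the local energy density is high enough to drive the plastic deformation that makes the particles stick.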

Another key: deposition in a vacuum, which helps alleviate the effects of reflected gases on the flying particles. Reflection of the high-velocity carrier gas from the deposition substrate can create so-called bow shock, a gas boundary layer that’s difficult for the smallest of particles to penetrate. But in a vacuum, reflected gases are diffused so the bow shock layer is thinner. The smaller particles traveling fast have high momentum and can get through the thin bow shock layer. Without a vacuum, the bow shock layer is large and particles don’t have enough momentum to penetrate to the substrate.

Plastic deformation critical to process

Maintaining the particle kinetic energy through the bow shock layer is critical to achieve material plastic deformation, and without plastic deformation there’s no sticking and no coating.

When a particle impacts the substrate or another layer, it plastically deforms and changes shape by a process known as dislocation nucleation and slip. Sarobol’s team discovered particles have nanofractures that make them “lay down onto a substrate like splatting cookie dough, forming a pancake-shaped grain.”

The next particle that hits and deforms tamps down the original layer, creating an even tighter bond. “So you have both the materials deformation or shape change and fracturing without fragmentation, and finally the tamping from subsequent particles to help build the coating,” Sarobol said.

Those mechanisms make many layers possible, building up coatings that are tens of microns thick. “We have made nickel coatings as thick as 40 microns, and in the literature I’ve seen reports of up to about 80 microns for ceramics,” Sarobol said.

Team members have successfully deposited multiple materials using the method, including copper, nickel, aluminum oxide, titanium dioxide, barium titanate and carbide compounds. Likely applications for this short list of materials alone include capacitors, resistors, inductors, electrical contacts and wear surfaces.

An enticing application specific to barium titanate films is electric field management in high-voltage systems. High-voltage capacitors, for example, are prone to failure where the dielectric material (barium titanate) meets the copper electrode and air, creating a three-material junction.

“If you spray on barium titanate at this junction, you open up the possibility of higher power capacitors,” Sarobol said. “There’s much more to do before we achieve good enough properties for that.”

Other researchers are interested in electrical contacts, protective coatings or consolidating brittle and intermetallic compounds for the first time.

The process also spans the microscale gap between two established technologies: thin films and thermal spray. Thin films are coating layers, ranging from nanometers to a few microns thick, that can be patterned into precision electrical circuits via photolithography rather than traditional printed-circuit-board techniques. Thermal spray can produce coatings from about 50 microns up to a few centimeters thick.

“This can bridge that missing gap, where you can start to deposit hundreds of nanometers of materials up to a hundred microns,” Sarobol said.

Learn more: Sandia using kinetics, not temperature, to make ceramic coatings


Feb 222017

While most of the light concentrated at the edge of the silicon-based luminescent solar concentrator is actually invisible, the concentration effect is easier to see with the naked eye when the slab is illuminated by a “black light,” which is composed mostly of ultraviolet wavelengths.

Discovery could lower cost and expand possibilities for building-integrated solar energy collection

Researchers at the University of Minnesota and University of Milano-Bicocca are bringing the dream of windows that can efficiently collect solar energy one step closer to reality thanks to high tech silicon nanoparticles.

The researchers developed technology to embed the silicon nanoparticles into what they call efficient luminescent solar concentrators (LSCs). These LSCs are the key element of windows that can efficiently collect solar energy. When light shines through the surface, the useful frequencies of light are trapped inside and concentrated to the edges where small solar cells can be put in place to capture the energy.

The research is published today in Nature Photonics, a peer-reviewed scientific journal published by the Nature Publishing Group.

Windows that can collect solar energy, called photovoltaic windows, are the next frontier in renewable energy technologies, as they have the potential to greatly increase the surface area of buildings suitable for energy generation without affecting their aesthetics—a crucial aspect, especially in metropolitan areas. LSC-based photovoltaic windows do not require any bulky structure to be applied onto their surface, and since the photovoltaic cells are hidden in the window frame, they blend invisibly into the built environment.

The idea of solar concentrators and solar cells integrated into building design has been around for decades, but this study included one key difference—silicon nanoparticles. Until recently, the best results had been achieved using relatively complex nanostructures based either on potentially toxic elements, such as cadmium or lead, or on rare substances like indium, which is already heavily used in other technologies. Silicon is abundant in the environment and non-toxic. It also works more efficiently because it absorbs light at different wavelengths than those it emits. However, silicon in its conventional bulk form does not emit light, or luminesce.

“In our lab, we ‘trick’ nature by shrinking the dimension of silicon crystals to a few nanometers, that is about one ten-thousandth of the diameter of a human hair,” said University of Minnesota mechanical engineering professor Uwe Kortshagen, inventor of the process for creating silicon nanoparticles and one of the senior authors of the study. “At this size, silicon’s properties change and it becomes an efficient light emitter, with the important property not to re-absorb its own luminescence. This is the key feature that makes silicon nanoparticles ideally suited for LSC applications.”

Using the silicon nanoparticles opened up many new possibilities for the research team.

“Over the last few years, the LSC technology has experienced rapid acceleration, thanks also to pioneering studies conducted in Italy, but finding suitable materials for harvesting and concentrating solar light was still an open challenge,” said Sergio Brovelli, physics professor at the University of Milano-Bicocca, co-author of the study, and co-founder of the spin-off company Glass to Power that is industrializing LSCs for photovoltaic windows. “Now, it is possible to replace these elements with silicon nanoparticles.”

Researchers say the optical features of silicon nanoparticles and their nearly perfect compatibility with the industrial process for producing the polymer LSCs create a clear path to creating efficient photovoltaic windows that can capture more than 5 percent of the sun’s energy at unprecedented low costs.
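For a rough sense of scale for the “more than 5 percent” figure: under an assumed peak solar irradiance of about 1,000 W/m² (the standard test-condition value, not a number from the study), a hypothetical one-square-meter photovoltaic window capturing 5 percent of incident solar energy would deliver on the order of 50 watts:

```python
# The 5% capture figure is from the article; the irradiance and
# window area are illustrative assumptions.
IRRADIANCE = 1000.0   # W/m^2, assumed standard peak solar irradiance
EFFICIENCY = 0.05     # fraction of incident solar energy captured
window_area = 1.0     # m^2, hypothetical window

power_watts = IRRADIANCE * EFFICIENCY * window_area
print(power_watts)    # → 50.0 W per square meter of window
```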

“This will make LSC-based photovoltaic windows a real technology for the building-integrated photovoltaic market without the potential limitations of other classes of nanoparticles based on relatively rare materials,” said Francesco Meinardi, physics professor at the University of Milano-Bicocca and one of the first authors of the paper.

The silicon nanoparticles are produced in a high-tech process using a plasma reactor and formed into a powder.

“Each particle is made up of less than two thousand silicon atoms,” said Samantha Ehrenberg, a University of Minnesota mechanical Ph.D. student and another first author of the study. “The powder is turned into an ink-like solution and then embedded into a polymer, either forming a sheet of flexible plastic material or coating a surface with a thin film.”
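The “fewer than two thousand atoms” figure is easy to sanity-check from the bulk density of silicon; the 4 nm diameter below is an assumed value standing in for “a few nanometers”:

```python
import math

AVOGADRO = 6.022e23
DENSITY = 2.329        # g/cm^3, crystalline silicon
MOLAR_MASS = 28.0855   # g/mol, silicon

# Atoms per cubic nanometer (1 cm^3 = 1e21 nm^3).
atoms_per_nm3 = DENSITY / MOLAR_MASS * AVOGADRO / 1e21

diameter_nm = 4.0      # assumed: "a few nanometers" across
volume_nm3 = (4.0 / 3.0) * math.pi * (diameter_nm / 2) ** 3
atom_count = atoms_per_nm3 * volume_nm3

print(round(atom_count))  # roughly 1,700 atoms, under two thousand
```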

The University of Minnesota invented the process for creating silicon nanoparticles about a dozen years ago and holds a number of patents on this technology. In 2015, Kortshagen met Brovelli, who is an expert in LSC fabrication and had already demonstrated various successful approaches to efficient LSCs based on other nanoparticle systems. The potential of silicon nanoparticles for this technology was immediately clear and the partnership was born. The University of Minnesota produced the particles and researchers in Italy fabricated the LSCs by embedding them in polymers through an industrial based method, and it worked.

“This was truly a partnership where we gathered the best researchers in their fields to make an old idea truly successful,” Kortshagen said. “We had the expertise in making the silicon nanoparticles and our partners in Milano had expertise in fabricating the luminescent concentrators. When it all came together, we knew we had something special.”

Learn more: Dream of energy-collecting windows is one step closer to reality


Feb 222017

Stanford’s Jaimie Henderson and Krishna Shenoy are part of a consortium working on an investigational brain-to-computer hookup.
Paul Sakuma

In a Stanford-led research report, three participants with movement impairment controlled an onscreen cursor simply by imagining their own hand movements.

A clinical research publication led by Stanford University investigators has demonstrated that a brain-to-computer hookup can enable people with paralysis to type via direct brain control at the highest speeds and accuracy levels reported to date.

The report involved three study participants with severe limb weakness — two from amyotrophic lateral sclerosis, also called Lou Gehrig’s disease, and one from a spinal cord injury. They each had one or two baby-aspirin-sized electrode arrays placed in their brains to record signals from the motor cortex, a region controlling muscle movement. These signals were transmitted to a computer via a cable and translated by algorithms into point-and-click commands guiding a cursor to characters on an onscreen keyboard.

Each participant, after minimal training, mastered the technique sufficiently to outperform the results of any previous test of brain-computer interfaces, or BCIs, for enhancing communication by people with similarly impaired movement. Notably, the study participants achieved these typing rates without the use of automatic word-completion assistance common in electronic keyboarding applications nowadays, which likely would have boosted their performance.

One participant, Dennis Degray of Menlo Park, California, was able to type 39 correct characters per minute, equivalent to about eight words per minute.
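The character-to-word conversion quoted above follows the standard convention in typing research, where one “word” is defined as five characters; that five-character convention is the only assumption in this quick sketch:

```python
# Standard convention in typing research: one "word" = 5 characters.
CHARS_PER_WORD = 5

def cpm_to_wpm(chars_per_minute: float) -> float:
    """Convert a characters-per-minute rate to words per minute."""
    return chars_per_minute / CHARS_PER_WORD

print(cpm_to_wpm(39))  # → 7.8 words per minute, "about eight"
```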

‘A major milestone’

This point-and-click approach could be applied to a variety of computing devices, including smartphones and tablets, without substantial modifications, the Stanford researchers said.

“Our study’s success marks a major milestone on the road to improving quality of life for people with paralysis,” said Jaimie Henderson, MD, professor of neurosurgery, who performed two of the three device-implantation procedures at Stanford Hospital. The third took place at Massachusetts General Hospital.

Henderson and Krishna Shenoy, PhD, professor of electrical engineering, are co-senior authors of the study, which was published online Feb. 21 in eLife. The lead authors are former postdoctoral scholar Chethan Pandarinath, PhD, and postdoctoral scholar Paul Nuyujukian, MD, PhD, both of whom spent well over two years working full time on the project at Stanford.

“This study reports the highest speed and accuracy, by a factor of three, over what’s been shown before,” said Shenoy, a Howard Hughes Medical Institute investigator who’s been pursuing BCI development for 15 years and working with Henderson since 2009. “We’re approaching the speed at which you can type text on your cellphone.”

“The performance is really exciting,” said Pandarinath, who now has a joint appointment at Emory University and the Georgia Institute of Technology as an assistant professor of biomedical engineering. “We’re achieving communication rates that many people with arm and hand paralysis would find useful. That’s a critical step for making devices that could be suitable for real-world use.”

Shenoy’s lab pioneered the algorithms used to decode the complex volleys of electrical signals fired by nerve cells in the motor cortex, the brain’s command center for movement, and convert them in real time into actions ordinarily executed by spinal cord and muscles.

“These high-performing BCI algorithms’ use in human clinical trials demonstrates the potential for this class of technology to restore communication to people with paralysis,” said Nuyujukian.

Life-changing accident

Millions of people in the United States live with paralysis. Sometimes the paralysis comes gradually, as occurs in ALS. Sometimes it arrives suddenly, as in Degray’s case.

Now 64, Degray became quadriplegic on Oct. 10, 2007, when he fell and sustained a life-changing spinal-cord injury. “I was taking out the trash in the rain,” he said. Holding the garbage in one hand and the recycling in the other, he slipped on the grass and landed on his chin. The impact spared his brain but severely injured his spine, cutting off all communication between his brain and musculature from the head down.

“I’ve got nothing going on below the collarbones,” he said.

Degray received two device implants at Henderson’s hands in August 2016. In several ensuing research sessions, he and the other two study participants, who underwent similar surgeries, were encouraged to attempt or visualize patterns of desired arm, hand and finger movements. Resulting neural signals from the motor cortex were electronically extracted by the embedded recording devices, transmitted to a computer and translated by Shenoy’s algorithms into commands directing a cursor on an onscreen keyboard to participant-specified characters.

The researchers gauged the speeds at which the patients were able to correctly copy phrases and sentences — for example, “The quick brown fox jumped over the lazy dog.” Average rates were 7.8 words per minute for Degray and 6.3 and 2.7 words per minute, respectively, for the other two participants.

A tiny silicon chip

The investigational system used in the study, an intracortical brain-computer interface called the BrainGate Neural Interface System*, represents the newest generation of BCIs. Previous generations picked up signals first via electrical leads placed on the scalp, then by being surgically positioned at the brain’s surface beneath the skull.

An intracortical BCI uses a tiny silicon chip, just over one-sixth of an inch square, from which protrude 100 electrodes that penetrate the brain to about the thickness of a quarter and tap into the electrical activity of individual nerve cells in the motor cortex.
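The quoted dimensions imply an electrode spacing of a bit under half a millimeter. The quick check below assumes the 100 electrodes sit on a square 10 × 10 grid, which the article does not state explicitly:

```python
# Assumes a square 10 x 10 electrode grid (not stated in the article);
# "just over one-sixth of an inch" per side.
chip_side_mm = 25.4 / 6          # ≈ 4.23 mm
electrodes_per_side = 10         # sqrt(100), assumed square layout
pitch_mm = chip_side_mm / electrodes_per_side

print(f"electrode pitch ≈ {pitch_mm:.2f} mm")  # ≈ 0.42 mm
```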


Henderson likened the resulting improved resolution of neural sensing, compared with that of older-generation BCIs, to that of handing out applause meters to individual members of a studio audience rather than just stationing them on the ceiling, “so you can tell just how hard and how fast each person in the audience is clapping.”

Shenoy said the day will come — closer to five than 10 years from now, he predicted — when a self-calibrating, fully implanted wireless system can be used without caregiver assistance, has no cosmetic impact and can be used around the clock.

“I don’t see any insurmountable challenges,” he said. “We know the steps we have to take to get there.”

Degray, who continues to participate actively in the research, knew how to type before his accident but was no expert at it. He described his newly revealed prowess in the language of a video game aficionado.

“This is like one of the coolest video games I’ve ever gotten to play with,” he said. “And I don’t even have to put a quarter in it.”

The study’s results are the culmination of a long-running collaboration between Henderson and Shenoy and a multi-institutional consortium called BrainGate. Leigh Hochberg, MD, PhD, a neurologist and neuroscientist at Massachusetts General Hospital, Brown University and the VA Rehabilitation Research and Development Center for Neurorestoration and Neurotechnology in Providence, Rhode Island, directs the pilot clinical trial of the BrainGate system and is a study co-author.

“This incredible collaboration continues to break new ground in developing powerful, intuitive, flexible neural interfaces that we all hope will one day restore communication, mobility and independence for people with neurologic disease or injury,” said Hochberg.

Learn more: Brain-computer interface advance allows fast, accurate typing by people with paralysis


Feb 222017

via University of Pennsylvania

When John Crocker, a professor of chemical and biomolecular engineering in the University of Pennsylvania’s School of Engineering and Applied Science, was a graduate student, his advisor gathered everyone in his lab to “throw down the gauntlet” on a new challenge in the field.

Someone had predicted that if one could grow colloidal crystals with the same structure as carbon atoms in diamond, the material would have special optical properties that could revolutionize photonics. In this material, called a photonic bandgap material, or PBM, light would act in a way mathematically analogous to how electrons move in a semi-conductor.

John Crocker

“The technological implication is that such materials would allow for the construction of ‘transistors’ for light, the ability to trap light at specific locations and build microcircuits for light and more efficient LEDs and lasers,” Crocker said.

At the time, Crocker decided to pursue his own projects, leaving the pursuit of PBMs to others.

Twenty years later, Crocker’s own graduate student Yifan Wang serendipitously produced this elusive diamond structure while working on a different problem. This put them on the path to achieving PBMs, the “holy grail of directed particle self-assembly,” Crocker said.

“It’s a classic story of serendipity in scientific discovery. You can’t anticipate these things. You just get lucky sometimes and something amazing comes out.”

The research was led by Crocker, Wang, professor Talid Sinno of SEAS and graduate student Ian Jenkins. The results have been published in Nature Communications.

Yifan Wang

To be a PBM, a material needs to have a crystal-like structure not on the scale of atoms but on the lengthscale of the light wavelength.

“In other words,” Crocker said, “you need to sculpt or arrange some transparent material into an array of spheres with a particular symmetry, and the spheres or holes need to be hundreds of nanometers in size.”

Back in the 1990s, Crocker said, scientists believed there would be many different possible ways to arrange the spheres and grow the needed structure using colloidal crystals, similar to how crystals of semi-conductors are grown: colloidal spheres spontaneously arranging themselves into different crystal lattices.

Opals are a natural example of this. They are formed when silica in groundwater forms microscopic spheres, which crystallize underground and then become fossilized in solids.

Talid Sinno

Although opals don’t have the right symmetry to be PBMs, their iridescent appearance results from their periodic crystal structure being on scales comparable to the wavelength of light.

To form a PBM, the major goal is to arrange transparent microscopic spheres into a 3-D pattern that mimics the atomic arrangement of carbon atoms in a diamond lattice. Unlike other crystals, this structure lacks certain symmetry directions along which light can behave normally, allowing the diamond structure to maintain the PBM effect.

Scientists assumed they would be able to make synthetic opals with different structures using different materials to produce PBMs. But this proved more difficult than they had thought and, 20 years later, it still hasn’t been accomplished.

To finally create these diamond lattices, the Penn researchers used DNA-covered microspheres in two slightly different sizes.

“These spontaneously form colloidal crystals when incubated at the correct temperature, due to the DNA forming bridges between the particles,” Crocker said. “Under certain conditions, the crystals have a double diamond structure, two interpenetrating diamond lattices, each made up by one size or ‘flavor’ of particle.”

They then crosslinked these crystals together into a solid.

Crocker describes the achievement as good luck. The researchers hadn’t set out to create this diamond structure. They had been doing a “mix and pray” experiment: Wang was adjusting five material variables to explore the parameter space. To date, this has produced 11 different crystals, one of which was the surprising double diamond structure.

“Oftentimes when something unexpected happens, it opens up a door to a new technological approach,” Sinno said. “There could be new physics as opposed to dusty old textbook physics.”

Now that they’ve cleared a significant hurdle on the path to creating PBMs, the researchers need to figure out how to switch out the materials for high index particles and selectively dissolve one species to leave them with one self-assembled diamond lattice of colloidal microspheres.

If the researchers are able to successfully produce a PBM, the material would be like a “semi-conductor for light,” having unusual optical properties that don’t exist in any natural material. Normal transparent materials have an index of refraction between 1.3 and 2.5. These PBMs could have a very high index of refraction, or even a negative index of refraction that refracts light backwards.

Such materials could be used to make lenses, cameras and microscopes with better performance, or possibly even “invisibility cloaks,” solid objects that would redirect all light rays around a central compartment, rendering objects there invisible.

Although the researchers have been able to reproduce this experimentally more than a dozen times, Sinno and Jenkins have been unable to reproduce the findings in simulation. It’s the only structure of the 11 crystals that Wang produced that they haven’t been able to replicate in simulation.

“This is the one structure we’ve found so far that we can’t explain, which is probably not unrelated to the fact that nobody predicted you could form it with this system,” Sinno said. “There are several other papers we’ve had in the past that really show how powerful our approaches are in explaining everything. In a way, the fact that none of this worked adds evidence that something fundamentally different is taking place here.”

The researchers currently think that a different, unknown crystal grows and then transforms into the double diamond crystals, but this idea has proven difficult to confirm.

“You’re used to writing papers when you understand something,” Crocker said. “So we had a dilemma. Normally when we find something we chew on it for a while, we do simulations and then when it all makes sense we write it up. In this case, we had to triple-check everything and then make a judgment call to say that this is an exciting discovery and other people beyond us can also work on this and think about and help us try to solve this mystery.”

Learn more: Penn Engineers Overcome a Hurdle in Growing a Revolutionary Optical Metamaterial


Feb 22, 2017


DNA, the stuff of life, may very well also pack quite the jolt for engineers trying to advance the development of tiny, low-cost electronic devices.

Much like flipping your light switch at home—only on a scale 1,000 times smaller than a human hair—an ASU-led team has now developed the first controllable DNA switch to regulate the flow of electricity within a single, atomic-sized molecule. The new study, led by ASU Biodesign Institute researcher Nongjian Tao, was published online in the journal Nature Communications (DOI: 10.1038/ncomms14471).

“It has been established that charge transport is possible in DNA, but for a useful device, one wants to be able to turn the charge transport on and off. We achieved this goal by chemically modifying DNA,” said Tao, who directs the Biodesign Center for Bioelectronics and Biosensors and is a professor in the Fulton Schools of Engineering. “Not only that, but we can also adapt the modified DNA as a probe to measure reactions at the single-molecule level. This provides a unique way for studying important reactions implicated in disease, or photosynthesis reactions for novel renewable energy applications.”

Engineers often think of electricity like water, and the research team’s new DNA switch acts to control the flow of electrons on and off, just like water coming out of a faucet.

Previously, Tao’s research group had made several discoveries about how to understand and manipulate DNA to more finely tune the flow of electricity through it. They found they could make DNA behave in different ways: they could cajole electrons to flow like waves, according to quantum mechanics, or “hop” like rabbits, the way electricity moves through a copper wire, creating an exciting new avenue for DNA-based nano-electronic applications.

Tao assembled a multidisciplinary team for the project, including ASU postdoctoral researchers Limin Xiang and Li Yueqi, who performed the bench experiments, and Julio Palma, who worked on the theoretical framework, with further help and oversight from collaborators Vladimiro Mujica (ASU) and Mark Ratner (Northwestern University).

To accomplish their engineering feat, Tao’s group modified just one of DNA’s iconic double-helix chemical letters, abbreviated as A, C, T or G, with another chemical group called anthraquinone (Aq). Anthraquinone is a three-ringed carbon structure that can be inserted between DNA base pairs and contains what chemists call a redox group (short for reduction-oxidation: the gaining or losing of electrons).

Redox groups like these are also the foundation of how our bodies convert chemical energy, driving the electrical pulses in our brains and hearts and the signals within every cell, processes implicated in many of the most prevalent diseases.

The Aq group slips comfortably between the rungs that make up the ladder of the DNA helix, bestowing the modified helix with a newfound ability to reversibly gain or lose electrons and thereby act as the switch.

In their studies, the team sandwiched the DNA between a pair of electrodes, carefully controlled the electrical field and measured the ability of the modified DNA to conduct electricity. This was performed using a staple of nano-electronics, a scanning tunneling microscope, whose tip acts as one electrode to complete the connection and is repeatedly pulled into and out of contact with the DNA molecules in solution, like a finger touching a water droplet.

“We found the electron transport mechanism in the present anthraquinone-DNA system favors electron ‘hopping’ via anthraquinone and stacked DNA bases,” said Tao. In addition, the team found it could reversibly control the conductance states to make the DNA switch on (high conductance) or switch off (low conductance). When anthraquinone has gained the most electrons (its most-reduced state), it is far more conductive, and the team finely mapped out a 3-D picture to account for how anthraquinone controlled the electrical state of the DNA.
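The reported on/off behavior can be summarized as a two-state model: the Aq group's redox state gates conduction through the helix. Here is a minimal sketch of that idea; the class name and conductance values are illustrative placeholders, not the team's measured data.

```python
# Minimal two-state sketch of the DNA switch: the anthraquinone (Aq)
# intercalator's redox state gates conduction through the helix.
# Conductance values are arbitrary illustrative units.

class AqDnaSwitch:
    G_ON = 1.0    # reduced Aq: high-conductance "on" state
    G_OFF = 0.01  # oxidized Aq: low-conductance "off" state

    def __init__(self):
        self.reduced = False  # start in the oxidized (off) state

    def apply_potential(self, reducing):
        """An applied electrochemical potential reversibly reduces
        (gains electrons) or oxidizes (loses electrons) the Aq group."""
        self.reduced = reducing

    @property
    def conductance(self):
        return self.G_ON if self.reduced else self.G_OFF

switch = AqDnaSwitch()
switch.apply_potential(reducing=True)
assert switch.conductance == AqDnaSwitch.G_ON   # switched on
switch.apply_potential(reducing=False)
assert switch.conductance == AqDnaSwitch.G_OFF  # switched off
```

The reversibility of the last two steps is the key property: the same molecule can be toggled between conductance states rather than being permanently altered.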

For their next project, they hope to extend their studies to get one step closer toward making DNA nano-devices a reality.

“We are particularly excited that the engineered DNA provides a nice tool to examine redox reaction kinetics and thermodynamics at the single-molecule level,” said Tao.

Learn more: Switched-on DNA


Feb 21, 2017

Yuanxun Wang/UCLA
SSDL circulator

Device being developed by UCLA engineers could ease the traffic of cellphone signals

Mobile phones and computers use electromagnetic waves to send and receive information — they’re what enable our devices to upload photos and download apps. But there is only a limited amount of bandwidth available on the electromagnetic spectrum.

Engineers have envisioned that enabling wireless devices to send and receive information on the same frequency would be one way to overcome that limitation. But that approach posed its own challenge, because incoming and outgoing waves on the same frequency typically interfere with each other. (That’s why, for example, radio stations that use the same frequency disrupt each other’s signals when a radio is close enough to both of them.)

A new design developed by UCLA electrical engineers could solve that problem. The researchers proved that a circulator — a tiny device that sends and receives electromagnetic waves from different ports — that shared the same antenna could enable signals to be sent and received simultaneously. Sending signals on the same frequencies that they are received could essentially double the space on the spectrum available for chips to transfer data.

A paper about the work was published in Scientific Reports, an open-access journal published by Nature.

Previous generations of circulators used magnetic material, which cannot be incorporated into current microchips and doesn’t have enough bandwidth for today’s smartphones and other devices. The UCLA prototype uses coaxial cables to route the electromagnetic waves through non-magnetic material, but the device would ultimately likely be built with silicon-based or other semiconductor materials.

The key to the design is an approach called “sequentially switched delay lines,” which is similar to the way transportation engineers route passenger trains from one track to another, to allow multiple trains to enter and exit train stations at the same time and avoid collisions, even if there are only a few available tracks.

“In a busy train station, trains are actively switched onto and off of tracks to minimize the time they might be stopped to get into and out of the station,” said Yuanxun “Ethan” Wang, an associate professor of electrical engineering at the UCLA Henry Samueli School of Engineering and Applied Science who led the research. “This is the same idea, only with electromagnetic waves of the same frequency carrying information inside a chip.”

Lead author Mathew Biedka and co-author Rui Zhu are UCLA doctoral students advised by Wang, and co-author Qiang “Mark” Xu is a postdoctoral scholar in Wang’s laboratory.

The team demonstrated its concept using commercially available parts, and is now testing it on specially fabricated chips.

The design includes six transmission lines, all of equal lengths, connected by five switches. The switches are turned on and off sequentially to distribute electromagnetic waves and allow simultaneous transmission and reception of data-carrying signals.
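The switching idea can be illustrated with a toy discrete-time model. The sketch below (plain Python; the line count, delay length, and scheduling rule are simplifications, and for clarity it partitions the lines between directions rather than reproducing the precisely timed interleaving of the real analog circuit) shows how sequential switching lets outgoing and incoming samples at the same frequency share a device without colliding:

```python
from collections import deque

# Toy model of "sequentially switched delay lines": equal-length delay
# lines are switched in sequence so transmit (TX) and receive (RX)
# samples at the same frequency never collide, like trains routed
# between station tracks. Parameters are illustrative only.

NUM_LINES = 6
DELAY = 3  # samples of delay per line (arbitrary)
lines = [deque([0.0] * DELAY) for _ in range(NUM_LINES)]

def step(tx_sample, rx_sample, t):
    """Each tick, the switch schedule assigns one free line to the
    outgoing sample and another to the incoming one, so both
    directions share the antenna port without interference."""
    tx_line = lines[t % 3]      # TX cycles through lines 0-2
    rx_line = lines[3 + t % 3]  # RX cycles through lines 3-5
    tx_out = tx_line.popleft(); tx_line.append(tx_sample)
    rx_out = rx_line.popleft(); rx_line.append(rx_sample)
    return tx_out, rx_out       # delayed copies emerge independently

tx_stream = [1, 2, 3, 4, 5, 6, 7, 8, 9]           # outgoing signal
rx_stream = [-1, -2, -3, -4, -5, -6, -7, -8, -9]  # incoming signal
out = [step(a, b, t) for t, (a, b) in enumerate(zip(tx_stream, rx_stream))]
# The two streams never mix: every recovered TX sample is non-negative,
# every recovered RX sample non-positive.
```

Because the routing is pure switching with no frequency-selective components, nothing in this scheme inherently limits the bandwidth, which is the property the UCLA team highlights.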

Previous studies have demonstrated that signals could be sent and received simultaneously using the same electromagnetic frequency, including one led by Wang in 2014, which modulated the signals. But, according to the researchers, the new design is the first one that offers unlimited bandwidth.

It could easily be incorporated into current chip manufacturing processes and within almost all industry-standard designs. Previous concepts would have required the use of components that don’t align with current industry standards, or have only worked in a narrow band of the spectrum. Wang said the new UCLA circulator works from the lowest of frequencies up to radio frequencies, and might even work in the visible light part of the spectrum.

“Just like a capacitor or a resistor, a device capable of routing electromagnetic waves is a fundamental building block in almost any circuit,” Wang said. “Making it available with unlimited bandwidth would trigger a revolution in design of mobile phones, automobile sensors or even quantum computers.”

Learn more: Design for new electromagnetic wave router offers unlimited bandwidth


Feb 21, 2017

via SFU

Long-distance couples can share a walk, watch movies together, and even give each other a massage, using new technologies being developed in Carman Neustaedter’s Simon Fraser University lab.

It’s all about feeling connected, says Neustaedter, an associate professor in SFU’s School of Interactive Arts and Technology (SIAT). Student researchers in his Surrey campus-based Connections Lab are working on myriad solutions.

Among them, researchers have designed a pair of interconnected gloves called Flex-N-Feel. When fingers ‘flex’ in one glove, the actions are transmitted to a remote partner wearing the other. The glove’s tactile sensors allow the wearer to ‘feel’ the movements.

To capture the flex actions, the sensors are attached to a microcontroller. The sensors provide a value for each bend, and those values are transmitted to the ‘feel’ glove using a WiFi module.

The sensors are also placed strategically on the palm side of the fingers in order to better feel the touch. A soft-switch on both gloves also allows either partner to initiate the touch.
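The data path described above — per-finger bend values sampled by a microcontroller and shipped over WiFi — can be sketched in a few lines. Everything here is an assumption for illustration (the address, port, JSON-over-UDP packet format, and the `read_flex_sensors()` stub are not the lab's actual protocol):

```python
import json
import socket

# Illustrative sketch of the Flex-N-Feel data path: the 'flex' glove's
# microcontroller samples one bend value per finger and sends the frame
# to the 'feel' glove over WiFi. Address, port, and packet format are
# hypothetical, not the Connections Lab's implementation.

FEEL_GLOVE_ADDR = ("127.0.0.1", 9000)  # hypothetical glove address

def read_flex_sensors():
    """Stub standing in for the microcontroller's ADC reads:
    one bend value (0 = straight, 1023 = fully flexed) per finger."""
    return {"thumb": 120, "index": 870, "middle": 860,
            "ring": 300, "pinky": 95}

def send_frame(sock, values):
    # One UDP datagram per sample keeps latency low; a lost frame is
    # simply superseded by the next reading.
    sock.sendto(json.dumps(values).encode(), FEEL_GLOVE_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_frame(sock, read_flex_sensors())
```

On the receiving side, the 'feel' glove would map each value onto its tactile actuators; the soft-switch mentioned above would simply gate whether frames are sent at all.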

“Users can make intimate gestures such as touching the face, holding hands, and giving a hug,” says Neustaedter. “The act of bending or flexing one’s finger is a gentle and subtle way to mimic touch.”

The gloves are currently a prototype and testing continues. While one set of gloves enables one-way remote touch between partners, Neustaedter says a second set could allow both to share touches at the same time.

Other projects also focus on shared experiences, including a virtual reality video-conferencing system that lets one partner “see through the eyes” of the other, and another, called Be With Me, that lets users video-stream their activities to a long-distance partner at home.

Meanwhile the researchers are also studying how next-generation telepresence robots can help unite couples and participate in activities together.

They’ve embedded a robot, designed by Suitable Technologies, into several Vancouver homes. There, it connects to countries around the world, including India and Singapore. Researchers continue to monitor how the robot is used. One long-distance couple plans a Valentine’s Day ‘date’ while one partner is in Vancouver, and the other, on Vancouver Island.

“The focus here is providing that connection, and in this case, a kind of physical body,” says Neustaedter, who has designed and built eight next-generation telepresence systems for families, and is author of Connecting Families: The Impact of New Communication Technologies on Domestic Life (2012). He has also spent more than a decade studying workplace collaborations over distance, including telepresence attendance at international conferences.

“Long-distance relationships are more common today, but distance doesn’t have to mean missing out on having a physical presence and sharing space,” says Neustaedter. “If people can’t physically be together, we’re hoping to create the next best technological solutions.”

Learn more: Technology puts ‘touch’ into long-distance relationships


Feb 20, 2017

Georgia Tech researchers have demonstrated a CHAMP reactor, which uses the four-stroke engine cycle to create hydrogen while simultaneously capturing carbon dioxide emission. (Credit: Candler Hobbs, Georgia Tech)

When is an internal combustion engine not an internal combustion engine? When it’s been transformed into a modular reforming reactor that could make hydrogen available to power fuel cells wherever there’s a natural gas supply available.

By adding a catalyst, a hydrogen separating membrane and carbon dioxide sorbent to the century-old four-stroke engine cycle, researchers have demonstrated a laboratory-scale hydrogen reforming system that produces the green fuel at relatively low temperature in a process that can be scaled up or down to meet specific needs. The process could provide hydrogen at the point of use for residential fuel cells or neighborhood power plants, electricity and power production in natural-gas powered vehicles, fueling of municipal buses or other hydrogen-based vehicles, and supplementing intermittent renewable energy sources such as photovoltaics.

Known as the CO2/H2 Active Membrane Piston (CHAMP) reactor, the device operates at temperatures much lower than conventional steam reforming processes, consumes substantially less water and could also operate on other fuels such as methanol or bio-derived feedstock. It also captures and concentrates carbon dioxide emissions, a by-product that now lacks a secondary use – though that could change in the future.

Unlike conventional engines that run at thousands of revolutions per minute, the reactor operates at only a few cycles per minute – or more slowly – depending on the reactor scale and required rate of hydrogen production. And there are no spark plugs, because no fuel is combusted.

“We already have a nationwide natural gas distribution infrastructure, so it’s much better to produce hydrogen at the point of use rather than trying to distribute it,” said Andrei Fedorov, a Georgia Institute of Technology professor who’s been working on CHAMP since 2008. “Our technology could produce this fuel of choice wherever natural gas is available, which could resolve one of the major challenges with the hydrogen economy.”

A paper published February 9 in the journal Industrial & Engineering Chemistry Research describes the operating model of the CHAMP process, including a critical step of internally adsorbing carbon dioxide, a byproduct of the methane reforming process, so it can be concentrated and expelled from the reactor for capture, storage or utilization.

Other implementations of the system have been reported as thesis work by three Georgia Tech Ph.D. graduates since the project began in 2008. The research was supported by the National Science Foundation, the Department of Defense through NDSEG fellowships, and the U.S. Civilian Research & Development Foundation (CRDF Global).

Key to the reaction process is the variable volume provided by a piston rising and falling in a cylinder. As with a conventional engine, a valve controls the flow of gases into and out of the reactor as the piston moves up and down. The four-stroke system works like this:

  • Natural gas (methane) and steam are drawn into the reaction cylinder through a valve as the piston inside is lowered. The valve closes once the piston reaches the bottom of the cylinder.
  • The piston rises into the cylinder, compressing the steam and methane as the reactor is heated. Once it reaches approximately 400 degrees Celsius, catalytic reactions take place inside the reactor, forming hydrogen and carbon dioxide. The hydrogen exits through a selective membrane, and the pressurized carbon dioxide is adsorbed by the sorbent material, which is mixed with the catalyst.
  • Once the hydrogen has exited the reactor and the carbon dioxide is tied up in the sorbent, the piston is lowered, expanding the volume (and reducing the pressure) in the cylinder. The carbon dioxide is then released from the sorbent into the cylinder.
  • The piston is again moved up into the chamber and the valve opens, expelling the concentrated carbon dioxide and clearing the reactor for the start of a new cycle.
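The four strokes above form a simple repeating sequence, which can be sketched as a small state machine. The class and method names are illustrative, and the species bookkeeping is deliberately idealized (complete conversion, perfect separation); the point is the ordering of intake, reaction, desorption and exhaust:

```python
# Idealized sketch of one CHAMP cycle as a four-step state machine.
# Assumes complete conversion of CH4 + 2 H2O -> CO2 + 4 H2, perfect
# membrane separation, and perfect sorbent capture.

class ChampReactor:
    def __init__(self):
        self.cylinder = set()       # gas species currently in the cylinder
        self.sorbent_loaded = False
        self.hydrogen_out = 0       # H2 molecules recovered per CH4 fed

    def intake(self):               # stroke 1: piston down, valve open
        self.cylinder = {"CH4", "H2O"}

    def compress_and_react(self):   # stroke 2: piston up, ~400 C catalysis
        # H2 exits through the selective membrane; CO2 is adsorbed
        # by the sorbent mixed with the catalyst.
        self.hydrogen_out += 4
        self.sorbent_loaded = True
        self.cylinder = set()

    def desorb(self):               # stroke 3: piston down, pressure drops
        self.sorbent_loaded = False
        self.cylinder = {"CO2"}

    def exhaust(self):              # stroke 4: piston up, valve open
        co2 = self.cylinder.copy()
        self.cylinder = set()
        return co2                  # concentrated CO2 for capture/storage

reactor = ChampReactor()
reactor.intake()
reactor.compress_and_react()
reactor.desorb()
captured = reactor.exhaust()
assert reactor.hydrogen_out == 4 and captured == {"CO2"}
```

Each pass through the four methods leaves the cylinder empty and ready for the next intake, mirroring how the valve-and-piston cycle clears the real reactor.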

“All of the pieces of the puzzle have come together,” said Fedorov, a professor in Georgia Tech’s George W. Woodruff School of Mechanical Engineering. “The challenges ahead are primarily economic in nature. Our next step would be to build a pilot-scale CHAMP reactor.”

The project was begun to address some of the challenges to the use of hydrogen in fuel cells. Most hydrogen used today is produced in a high-temperature reforming process in which methane is combined with steam at about 900 degrees Celsius. The industrial-scale process requires as many as three water molecules for every molecule of hydrogen, and the resulting low-density gas must be transported to where it will be used.

Fedorov’s lab first carried out thermodynamic calculations suggesting that the four-stroke process could be modified to produce hydrogen in relatively small amounts where it would be used. The goals of the research were to create a modular reforming process that could operate at between 400 and 500 degrees Celsius, use just two molecules of water for every molecule of methane to produce four hydrogen molecules, be able to scale down to meet the specific needs, and capture the resulting carbon dioxide for potential utilization or sequestration.
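The target reaction — two water molecules per methane molecule, yielding four hydrogen molecules — is the steam-reforming stoichiometry CH4 + 2 H2O → CO2 + 4 H2 carried to completion. A quick mass balance (standard molar masses; the per-kilogram yield figure is our arithmetic, not a number from the paper) checks that the claim is self-consistent:

```python
# Mass balance for CH4 + 2 H2O -> CO2 + 4 H2.
# Molar masses in g/mol (standard values).
M = {"CH4": 16.04, "H2O": 18.02, "CO2": 44.01, "H2": 2.016}

reactants = M["CH4"] + 2 * M["H2O"]   # one methane + two waters
products = M["CO2"] + 4 * M["H2"]     # one CO2 + four hydrogens
assert abs(reactants - products) < 0.05  # mass is conserved

# Hydrogen yield per kilogram of methane at full conversion:
kg_h2_per_kg_ch4 = (4 * M["H2"]) / M["CH4"]
print(round(kg_h2_per_kg_ch4, 2))  # ~0.5 kg H2 per kg CH4
```

Note the contrast with the conventional industrial process described above, which can consume up to three water molecules per hydrogen molecule; here the ideal ratio is half a water molecule per hydrogen molecule.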

“We wanted to completely rethink how we designed reactor systems,” said Fedorov. “To gain the kind of efficiency we needed, we realized we’d need to dynamically change the volume of the reactor vessel. We looked at existing mechanical systems that could do this, and realized that this capability could be found in a system that has had more than a century of improvements: the internal combustion engine.”

The CHAMP system could be scaled up or down to produce the hundreds of kilograms of hydrogen per day required for a typical automotive refueling station – or a few kilograms for an individual vehicle or residential fuel cell, Fedorov said. The volume and piston speed in the CHAMP reactor can be adjusted to meet hydrogen demands while matching the requirements for the carbon dioxide sorbent regeneration and separation efficiency of the hydrogen membrane. In practical use, multiple reactors would likely be operated together to produce a continuous stream of hydrogen at a desired production level.
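A back-of-the-envelope calculation shows how cylinder volume and cycle rate set the production scale. All numbers below are our assumptions for illustration (ideal-gas intake, complete conversion, a hypothetical 10-liter cylinder), not Georgia Tech's design parameters:

```python
# Rough sizing of a CHAMP reactor: hydrogen output versus cylinder
# volume and cycle rate. All parameter values are illustrative.
R = 8.314      # J/(mol K), ideal gas constant
T = 673.0      # K (~400 C operating temperature)
P = 101325.0   # Pa, assume roughly atmospheric intake pressure

def h2_kg_per_day(cylinder_volume_m3, cycles_per_minute):
    """kg of H2 per day, assuming each intake fills the cylinder with
    a 1:2 methane/steam mix and every CH4 yields 4 H2 (ideal case)."""
    moles_gas = P * cylinder_volume_m3 / (R * T)  # ideal gas law
    moles_ch4 = moles_gas / 3        # 1 part CH4 to 2 parts steam
    moles_h2 = 4 * moles_ch4         # CH4 + 2 H2O -> CO2 + 4 H2
    cycles_per_day = cycles_per_minute * 60 * 24
    return moles_h2 * 0.002016 * cycles_per_day  # 0.002016 kg/mol H2

# A ~10 L cylinder at 3 cycles per minute: about 2 kg H2/day,
# consistent with the single-vehicle / residential scale quoted above.
print(round(h2_kg_per_day(0.010, 3), 1))
```

Scaling to the hundreds of kilograms per day of a refueling station would then mean larger cylinders, faster cycling, or, as the article suggests, many modules running in parallel.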

“We took the conventional chemical processing plant and created an analog using the magnificent machinery of the internal combustion engine,” Fedorov said. “The reactor is scalable and modular, so you could have one module or a hundred modules depending on how much hydrogen you needed. The processes for reforming fuel, purifying hydrogen and capturing carbon dioxide emission are all combined into one compact system.”

Learn more: Four-Stroke Engine Cycle Produces Hydrogen from Methane and Captures CO2