Georgia Institute of Technology

The Georgia Institute of Technology (commonly referred to as Georgia Tech, Tech, or GT) is a public research university in Atlanta, Georgia, in the United States.

The Latest from Georgia Tech:

A magnetic shape memory polymer that uses magnetic fields to transform into a variety of shapes

A team of researchers from the Georgia Institute of Technology and The Ohio State University has developed a soft polymer material, called magnetic shape memory polymer, that uses magnetic fields to transform into a variety of shapes. The material could enable a range of new applications from antennas that change frequencies on the fly to…

Monitoring heart failure with a medical monitoring bathroom scale

“Good morning. Bill. Please. Step onto the scale. Touch the metal pads.” The device records an electrocardiogram from Bill’s fingers and – more importantly – circulation pulsing that makes his body subtly bob up and down on the scale. Machine learning tools compute that Bill’s heart failure symptoms have worsened. This is how researchers at…

The emerging field of DNA mechanotechnology

Just as the steam engine set the stage for the Industrial Revolution, and micro transistors sparked the digital age, nanoscale devices made from DNA are opening up a new era in bio-medical research and materials science. The journal Science describes the emerging uses of DNA mechanical devices in a “Perspective” article by Khalid Salaita, a professor of chemistry…

Robots built entirely from smaller robots known as smarticles

A U.S. Army project took a new approach to developing robots — researchers built robots entirely from smaller robots known as smarticles, unlocking the principles of a potentially new locomotion technique. Researchers at Georgia Institute of Technology and Northwestern University published their findings in the journal Science Robotics. The research could lead to robotic systems capable…

Monitoring the healing of cerebral aneurysms with a stretchable wireless monitor in the brain

A wireless sensor small enough to be implanted in the blood vessels of the human brain could help clinicians evaluate the healing of aneurysms — bulges that can cause death or serious injury if they burst. The stretchable sensor, which operates without batteries, would be wrapped around stents or diverters implanted to control blood flow…

A device material that self-destructs after a military mission

A polymer that self-destructs? While once a fictional idea, new polymers now exist that are rugged enough to ferry packages or sensors into hostile territory and vaporize immediately upon a military mission’s completion. The material has been made into a rigid-winged glider and a nylon-like parachute fabric for airborne delivery across distances of a hundred…

Using connected cars to gridlock entire cities

In the year 2026, at rush hour, your self-driving car abruptly shuts down right where it blocks traffic. You climb out to see gridlock down every street in view, then a news alert on your watch tells you that hackers have paralyzed all Manhattan traffic by randomly stranding internet-connected cars. Flashback to July 2019, the…

Tiny 3D-printed robot that uses vibrations to move

Researchers have created a new type of tiny 3D-printed robot that moves by harnessing vibration from piezoelectric actuators, ultrasound sources or even tiny speakers. Swarms of these “micro-bristle-bots” might work together to sense environmental changes, move materials – or perhaps one day repair injuries inside the human body. The prototype robots respond to different vibration…


Artificial Intelligence Course Creates AI Teaching Assistant

via eng-cs.syr.edu

Students didn’t know their TA was a computer

College of Computing Professor Ashok Goel teaches Knowledge Based Artificial Intelligence (KBAI) every semester. It’s a core requirement of Georgia Tech’s online Master of Science in Computer Science program. And every time he offers it, Goel estimates, his 300 or so students post roughly 10,000 messages in the online forums — far too many inquiries for him and his eight teaching assistants (TAs) to handle.

That’s why Goel added a ninth TA this semester. Her name is Jill Watson, and she’s unlike any other TA in the world. In fact, she’s not even a “she.” Jill is a computer — a virtual TA — implemented on IBM’s Watson platform.

“The world is full of online classes, and they’re plagued with low retention rates,” Goel said. “One of the main reasons many students drop out is because they don’t receive enough teaching support. We created Jill as a way to provide faster answers and feedback.”

Goel and his team of Georgia Tech graduate students started to build her last year. They contacted Piazza, the course’s online discussion forum, to track down all the questions that had ever been asked in KBAI since the class was launched in fall 2014 (about 40,000 postings in all). Then they started to feed Jill the questions and answers.

“One of the secrets of online classes is that the number of questions increases if you have more students, but the number of different questions doesn’t really go up,” Goel said. “Students tend to ask the same questions over and over again.”

That’s an ideal situation for the Watson platform, which specializes in answering questions with distinct, clear solutions. The team wrote code that allows Jill to field routine questions that are asked every semester. For example, students consistently ask where they can find particular assignments and readings.

Jill wasn’t very good for the first few weeks after she started in January, often giving odd and irrelevant answers. Her responses were posted in a forum that wasn’t visible to students.

“Initially her answers weren’t good enough because she would get stuck on keywords,” said Lalith Polepeddi, one of the graduate students who co-developed the virtual TA. “For example, a student asked about organizing a meet-up to go over video lessons with others, and Jill gave an answer referencing a textbook that could supplement the video lessons — same keywords — but different context. So we learned from mistakes like this one, and gradually made Jill smarter.”

After some tinkering by the research team, Jill found her groove and soon was answering questions with 97 percent certainty. When she did, the human TAs would upload her responses to the students. By the end of March, Jill didn’t need any assistance: She wrote the class directly if she was 97 percent positive her answer was correct.
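
To make the confidence-gating idea concrete, here is a minimal sketch of how an automated TA might match an incoming question against an archive of previously answered posts and reply only when the match clears a threshold. It is not Goel’s actual system — Jill runs on IBM’s Watson platform — and the matching method, threshold handling and example data below are illustrative assumptions only.

```python
# Minimal sketch of confidence-gated question answering, in the spirit of the
# Jill Watson workflow described above. The bag-of-words matching and the
# example data are illustrative assumptions, not the Watson-based implementation.
import math
import re
from collections import Counter

# Hypothetical archive of past forum questions and the answers TAs gave them.
PAST_QA = [
    ("Where can I find the reading list for assignment 2?",
     "The readings are linked on the Assignments page of the course site."),
    ("When is project 1 due?",
     "Project 1 is due at the end of week 4; see the syllabus for the exact date."),
]

CONFIDENCE_THRESHOLD = 0.97  # mirrors the 97 percent certainty mentioned above


def _vectorize(text):
    """Turn text into a bag-of-words vector (a Counter of lowercased tokens)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def _cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def answer(question):
    """Answer automatically only if the best match clears the threshold;
    otherwise defer to a human TA."""
    query = _vectorize(question)
    score, best_answer = max(
        ((_cosine(query, _vectorize(q)), a) for q, a in PAST_QA),
        key=lambda pair: pair[0],
    )
    if score >= CONFIDENCE_THRESHOLD:
        return best_answer   # post directly to the forum
    return None              # route to a human TA instead


print(answer("Where can I find the reading list for assignment 2?"))
```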

The students, who were studying artificial intelligence, were unknowingly interacting with it. Goel didn’t inform them about Jill’s true identity until April 26. The student response was uniformly positive. One admitted her mind was blown. Another asked if Jill could “come out and play.” Since then some students have organized a KBAI alumni forum to learn about new developments with Jill after the class ends, and another group of students has launched an open source project to replicate her.

Back in February, student Tyson Bailey began to wonder if Jill was a computer and posted his suspicions on Piazza.

“We were taking an AI course, so I had to imagine that it was possible there might be an AI lurking around,” said Bailey, who lives in Albuquerque, New Mexico. “Then again, I asked Dr. Goel if he was a computer in one of my first email interactions with him. I think it’s a great idea and hope that they continue to improve it.”

Jill ended the semester able to answer many of the routine questions students asked. She’ll return — with a different name — next semester. The goal is to have the virtual teaching assistant answer 40 percent of all questions by the end of the year.

Learn more: Artificial Intelligence Course Creates AI Teaching Assistant

Widespread loss of ocean oxygen to become noticeable in 2030s

Deoxygenation due to climate change is already detectable in some parts of the ocean. New research from NCAR finds that it will likely become widespread between 2030 and 2040. Other parts of the ocean, shown in gray, will not have detectable loss of oxygen due to climate change even by 2100. (Image courtesy Matthew Long, NCAR)

A reduction in the amount of oxygen dissolved in the oceans due to climate change is already discernible in some parts of the world and should be evident across large regions of the oceans between 2030 and 2040, according to a new study led by the National Center for Atmospheric Research (NCAR).

Scientists know that a warming climate can be expected to gradually sap the ocean of oxygen, leaving fish, crabs, squid, sea stars, and other marine life struggling to breathe. But it’s been difficult to determine whether this anticipated oxygen drain is already having a noticeable impact.

“Loss of oxygen in the ocean is one of the serious side effects of a warming atmosphere, and a major threat to marine life,” said NCAR scientist Matthew Long, lead author of the study. “Since oxygen concentrations in the ocean naturally vary depending on variations in winds and temperature at the surface, it’s been challenging to attribute any deoxygenation to climate change. This new study tells us when we can expect the impact from climate change to overwhelm the natural variability.”

The study is published in the journal Global Biogeochemical Cycles, a publication of the American Geophysical Union. The research was funded by the National Science Foundation, NCAR’s sponsor.

CUTTING THROUGH THE NATURAL VARIABILITY

The entire ocean—from the depths to the shallows—gets its oxygen supply from the surface, either directly from the atmosphere or from phytoplankton, which release oxygen into the water through photosynthesis.

Warming surface waters, however, absorb less oxygen. And in a double whammy, the oxygen that is absorbed has a more difficult time traveling deeper into the ocean. That’s because as water heats up, it expands, becoming lighter than the water below it and less likely to sink.

Thanks to natural warming and cooling, oxygen concentrations at the sea surface are constantly changing—and those changes can linger for years or even decades deeper in the ocean.

For example, an exceptionally cold winter in the North Pacific would allow the ocean surface to soak up a large amount of oxygen. Thanks to the natural circulation pattern, that oxygen would then be carried deeper into the ocean interior, where it might still be detectable years later as it travels along its flow path. On the flip side, unusually hot weather could lead to natural “dead zones” in the ocean, where fish and other marine life cannot survive.

To cut through this natural variability and investigate the impact of climate change, the research team—including Curtis Deutsch of the University of Washington and Taka Ito of Georgia Tech—relied on the NCAR-based Community Earth System Model, which is funded by the National Science Foundation and the U.S. Department of Energy.

The scientists used output from a project that ran the model more than two dozen times for the years 1920 to 2100 on the Yellowstone supercomputer, which is operated by NCAR. Each individual run was started with minuscule variations in air temperature. As the model runs progressed, those tiny differences grew and expanded, producing a set of climate simulations useful for studying questions about variability and change.

Using the simulations to study dissolved oxygen gave the researchers guidance on how much concentrations may have varied naturally in the past. With this information, they could determine when ocean deoxygenation due to climate change is likely to become more severe than at any point in the modeled historic range.
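
The underlying “time of emergence” calculation can be illustrated with a toy example: estimate the envelope of natural variability from an ensemble’s early, historical period, then find the first year the ensemble-mean decline exceeds that envelope. The synthetic data below are illustrative assumptions, not output from the Community Earth System Model.

```python
# Toy sketch of detecting when a forced oxygen decline emerges from natural
# variability, in the spirit of the ensemble approach described above.
# The synthetic ensemble below stands in for real model output.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1920, 2101)

# Hypothetical ensemble: each member shares the same slow forced decline in
# dissolved oxygen but has its own realization of natural variability.
forced_decline = -0.02 * np.clip(years - 1980, 0, None)      # arbitrary units
ensemble = np.array([
    forced_decline + rng.normal(0.0, 0.5, size=years.size)   # natural "noise"
    for _ in range(24)                                        # ~two dozen runs
])

# Use the pre-1980 period to characterize natural variability.
historical = ensemble[:, years < 1980]
noise_floor = 2.0 * historical.std()                          # 2-sigma envelope

# Time of emergence: first year the ensemble-mean anomaly exceeds the envelope.
ensemble_mean = ensemble.mean(axis=0)
emerged = years[np.abs(ensemble_mean) > noise_floor]
print("Deoxygenation emerges from natural variability around",
      emerged[0] if emerged.size else "never (within 2100)")
```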

The research team found that deoxygenation caused by climate change could already be detected in the southern Indian Ocean and parts of the eastern tropical Pacific and Atlantic basins. They also determined that more widespread detection of deoxygenation caused by climate change would be possible between 2030 and 2040. However, in some parts of the ocean, including areas off the east coasts of Africa, Australia, and Southeast Asia, deoxygenation caused by climate change was not evident even by 2100.

PICKING OUT A GLOBAL PATTERN

The researchers also created a visual way to distinguish between deoxygenation caused by natural processes and deoxygenation caused by climate change.

Using the same model dataset, the scientists created maps of oxygen levels in the ocean, showing which waters were oxygen-rich at the same time that others were oxygen-poor. They found they could distinguish between oxygenation patterns caused by natural weather phenomena and the pattern caused by climate change.

The pattern caused by climate change also became evident in the model runs around 2030, adding confidence to the conclusion that widespread deoxygenation due to climate change will become detectable around that time.

The maps could also be useful resources for deciding where to place instruments to monitor ocean oxygen levels in the future to get the best picture of climate change impacts. Currently ocean oxygen measurements are relatively sparse.

“We need comprehensive and sustained observations of what’s going on in the ocean to compare with what we’re learning from our models and to understand the full impact of a changing climate,” Long said.

Learn more: WIDESPREAD LOSS OF OCEAN OXYGEN TO BECOME NOTICEABLE IN 2030S

Device “Fingerprints” Could Help Protect Power Grid, Other Industrial Systems

via Georgia Tech

Human voices are individually recognizable because they’re generated by the unique components of each person’s voice box, pharynx, esophagus and other physical structures.

Researchers are using the same principle to identify devices on electrical grid control networks, using their unique electronic “voices” – fingerprints produced by the devices’ individual physical characteristics – to determine which signals are legitimate and which signals might be from attackers. A similar approach could also be used to protect networked industrial control systems in oil and gas refineries, manufacturing facilities, wastewater treatment plants and other critical industrial systems.

The research, reported February 23 at the Network and Distributed System Security Symposium in San Diego, was supported in part by the National Science Foundation (NSF). While device fingerprinting isn’t a complete solution in itself, the technique could help address the unique security challenges of the electrical grid and other cyber-physical systems. The approach has been successfully tested in two electrical substations.

“We have developed fingerprinting techniques that work together to protect various operations of the power grid to prevent or minimize spoofing of packets that could be injected to produce false data or false control commands into the system,” said Raheem Beyah, an associate professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. “This is the first technique that can passively fingerprint different devices that are part of critical infrastructure networks. We believe it can be used to significantly improve the security of the grid and other networks.”

The networked systems controlling the U.S. electrical grid and other industrial systems often lack the ability to run modern encryption and authentication systems, and the legacy systems connected to them were never designed for networked security. Because they are distributed around the country, often in remote areas, the systems are also difficult to update using the “patching” techniques common in computer networks. And on the electric grid, keeping the power on is a priority, so security can’t cause delays or shutdowns.

“The stakes are extremely high, but the systems are very different from home or office computer networks,” said Beyah. “It is critical that we secure these systems against attackers who may introduce false data or issue malicious commands.”

Beyah, his students, and colleagues in Georgia Tech’s George W. Woodruff School of Mechanical Engineering set out to develop security techniques that take advantage of the unique physical properties of the grid and the consistent type of operations that take place there.

For instance, control devices used in the power grid produce signals that are distinctive because of their unique physical configurations and compositions. Security devices listening to signals traversing the grid’s control systems can differentiate between these legitimate devices and signals produced by equipment that’s not part of the system.
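
In spirit, that passive fingerprinting amounts to comparing the features of an observed signal against stored fingerprints of known devices and flagging anything that matches none of them. The sketch below is only a loose illustration; the feature names and reference values are hypothetical, not the researchers’ actual fingerprints.

```python
# Minimal sketch of passive fingerprint matching: compare observed signal
# features against stored device fingerprints and flag unknown senders.
# Feature names and numbers are illustrative assumptions only.

# Hypothetical fingerprints learned for legitimate grid devices:
# (mean response time in ms, signal rise time in us)
KNOWN_FINGERPRINTS = {
    "breaker_A": (8.2, 120.0),
    "relay_B":   (3.1, 45.0),
}
TOLERANCE = 0.15  # allow 15% deviation per feature


def matches(observed, reference):
    """True if every observed feature is within TOLERANCE of the reference."""
    return all(abs(o - r) <= TOLERANCE * r for o, r in zip(observed, reference))


def classify(observed_features):
    """Return the matching device name, or None if no fingerprint matches."""
    for name, reference in KNOWN_FINGERPRINTS.items():
        if matches(observed_features, reference):
            return name
    return None  # unknown signal source: raise an alert


print(classify((8.0, 118.0)))   # -> breaker_A
print(classify((1.0, 300.0)))   # -> None, possible spoofed device
```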

Another aspect of the work takes advantage of simple physics. Devices such as circuit breakers and electrical protection systems can be told to open or close remotely, and they then report on the actions they’ve taken. The time required to open a breaker or a valve is determined by the physical properties of the device. If an acknowledgement arrives too soon after the command is issued – less time than it would take for a breaker or valve to open, for instance – the security system could suspect spoofing, Beyah explained.
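
A minimal sketch of that timing check, using assumed placeholder values rather than measured device properties, might look like this:

```python
# Toy sketch of the physics-based timing check: an acknowledgement that arrives
# sooner than the device could physically act suggests a spoofed response.
# The minimum operation times below are assumed placeholder values.

MIN_OPERATION_TIME_S = {
    "circuit_breaker": 0.05,   # hypothetical: breaker cannot open faster than 50 ms
    "valve": 2.0,              # hypothetical: valve cannot fully close faster than 2 s
}


def plausible(device_type, command_time_s, ack_time_s):
    """Return True if the acknowledgement latency is physically plausible."""
    latency = ack_time_s - command_time_s
    return latency >= MIN_OPERATION_TIME_S[device_type]


# A breaker acknowledging an "open" command after only 3 ms is suspicious.
if not plausible("circuit_breaker", command_time_s=100.000, ack_time_s=100.003):
    print("Alert: acknowledgement arrived too quickly; possible spoofed packet")
```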

To develop the device fingerprints, the researchers, including mechanical engineering assistant professor Jonathan Rogers, have built computer models of utility grid devices to understand how they operate. Information to build the models came from “black box” techniques – watching the information that goes into and out of the system – and “white box” techniques that utilize schematics or physical access to the systems.

“Device fingerprinting is a unique signature that indicates the identity of a specific device, or device type, or an action associated with that device type,” Beyah explained. “We can use physics and mathematics to analyze and build a model using first principles based on the devices themselves. Schematics and specifications allow us to determine how the devices are actually operating.”

The researchers have demonstrated the technique on two electrical substations, and plan to continue refining it until it becomes close to 100 percent accurate. Their current technique addresses the protocol used for more than half of the devices on the electrical grid, and future work will include examining application of the method to other protocols.

Because they also include devices with measurable physical properties, Beyah believes the approach could have broad application to securing industrial control systems used in manufacturing, oil and gas refining, wastewater treatment and other industries. Beyond industrial controls, the principle could also apply to the Internet of Things (IoT), where the devices being controlled have specific signatures related to switching them on and off.

“All of these IoT devices will be doing physical things, such as turning your air-conditioning on or off,” Beyah said. “There will be a physical action occurring, which is similar to what we have studied with valves and actuators.”

Learn more: Device “Fingerprints” Could Help Protect Power Grid, Other Industrial Systems

Using Stories to Teach Human Values to Artificial Agents

The Quixote system by researchers Mark Riedl and Brent Harrison teaches robots how to behave like the protagonist when interacting with humans and is part of a larger effort to build an ethical value system into new forms of artificial intelligence.

The rapid pace of artificial intelligence (AI) has raised fears about whether robots could act unethically or soon choose to harm humans. Some are calling for bans on robotics research; others are calling for more research to understand how AI might be constrained. But how can robots learn ethical behavior if there is no “user manual” for being human?

Researchers Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology believe the answer lies in “Quixote” – to be unveiled at the AAAI-16 Conference in Phoenix, Ariz. (Feb. 12 – 17). Quixote teaches “value alignment” to robots by training them to read stories, learn acceptable sequences of events and understand successful ways to behave in human societies.

“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” says Riedl, associate professor and director of the Entertainment Intelligence Lab. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”

Quixote is a technique for aligning an AI’s goals with human values by placing rewards on socially appropriate behavior. It builds upon Riedl’s prior research – the Scheherazade system – which demonstrated how artificial intelligence can gather a correct sequence of actions by crowdsourcing story plots from the Internet.

Scheherazade learns what is a normal or “correct” plot graph. It then passes that data structure along to Quixote, which converts it into a “reward signal” that reinforces certain behaviors and punishes other behaviors during trial-and-error learning. In essence, Quixote learns that it will be rewarded whenever it acts like the protagonist in a story instead of randomly or like the antagonist.
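
In heavily simplified form, that reward signal can be thought of as scoring an agent’s actions against an ordered list of protagonist events. The sketch below invents its own events and reward values for illustration; Quixote’s actual plot graphs and learning procedure are more sophisticated.

```python
# Highly simplified sketch of converting a "correct" plot into a reward signal,
# in the spirit of Quixote. The events and reward values are illustrative only.

# A protagonist-like sequence learned from stories (e.g., buying medicine politely).
PROTAGONIST_PLOT = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]


def reward(action, plot_position):
    """Reward actions that follow the protagonist's plot; punish deviations."""
    if plot_position < len(PROTAGONIST_PLOT) and action == PROTAGONIST_PLOT[plot_position]:
        return +1.0, plot_position + 1   # on-script: positive reward, advance the plot
    if action == "steal_medicine":
        return -10.0, plot_position      # antagonist-like behavior: strong penalty
    return -0.1, plot_position           # off-script but harmless: mild penalty


# Scoring two candidate behaviors during trial-and-error learning:
for trajectory in (["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"],
                   ["enter_pharmacy", "steal_medicine", "leave"]):
    total, pos = 0.0, 0
    for action in trajectory:
        r, pos = reward(action, pos)
        total += r
    print(trajectory, "->", total)
```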

Learn more: Using Stories to Teach Human Values to Artificial Agents