Category Archives: A.I./Robotics

New Models Sense Human Trust In Smart Machines.

How should intelligent machines be designed so as to earn the trust of humans? New models are informing these designs. New “classification models” sense how well humans trust the intelligent machines they collaborate with, a step toward improving the quality of interactions and teamwork.

The long-term goal of the overall field of research is to design intelligent machines capable of changing their behavior to enhance human trust in them. The new models were developed in research led by assistant professor X and associate professor Y at Georgian Technical University.

“Intelligent machines, and more broadly intelligent systems, are becoming increasingly common in the everyday lives of humans”, X said. “As humans are increasingly required to interact with intelligent systems, trust becomes an important factor for synergistic interactions”.

For example, aircraft pilots and industrial workers routinely interact with automated systems. Humans will sometimes override these intelligent machines unnecessarily if they think the system is faltering. “It is well established that human trust is central to successful interactions between humans and machines”, Y said.

The researchers have developed two types of “classifier-based empirical trust sensor models”, a step toward improving trust between humans and intelligent machines.

The models use two techniques that provide data to gauge trust: electroencephalography (EEG) and galvanic skin response (GSR). The first records brainwave patterns and the second monitors changes in the electrical characteristics of the skin, providing psychophysiological “feature sets” correlated with trust.

Forty-five human subjects donned wireless EEG headsets, which noninvasively record the brain’s electrical activity through electrodes placed along the scalp, and wore a device on one hand to measure galvanic skin response.

One of the new models, a “general trust sensor model”, uses the same set of psychophysiological features for all 45 participants. The other model is customized for each human subject, resulting in improved mean accuracy at the expense of an increase in training time. The two models had mean accuracies of 71.22 percent and 78.55 percent, respectively.
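The article does not say which learning algorithm the team used, so the sketch below is purely illustrative: it contrasts a pooled “general” classifier with per-subject “customized” classifiers in scikit-learn, with randomly generated arrays standing in for the EEG/GSR feature sets and for the trust labels induced by reliable versus faulty trials.

```python
# Illustrative sketch: "general" vs. "customized" trust classifiers on
# EEG/GSR features. Data shapes and the classifier choice are assumptions;
# the article does not specify them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_trials, n_features = 45, 60, 12  # assumed sizes

# X[s] holds one subject's psychophysiological feature vectors,
# y[s] the trust/distrust labels from reliable vs. faulty trials.
X = [rng.normal(size=(n_trials, n_features)) for _ in range(n_subjects)]
y = [rng.integers(0, 2, size=n_trials) for _ in range(n_subjects)]

# General model: pool every subject's trials into one training set.
general = LogisticRegression(max_iter=1000)
acc_general = cross_val_score(general, np.vstack(X), np.hstack(y), cv=5).mean()

# Customized models: fit one classifier per subject (higher accuracy in the
# study, but each new user needs a labeled calibration/training session).
acc_custom = np.mean([
    cross_val_score(LogisticRegression(max_iter=1000), Xs, ys, cv=5).mean()
    for Xs, ys in zip(X, y)
])
print(f"general: {acc_general:.2%}  customized: {acc_custom:.2%}")
```

The customized route mirrors the trade-off the researchers describe: each new user must supply labeled calibration trials before their model works, which is where the extra training time comes from.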

It is the first time EEG measurements have been used to gauge trust in real time, without delay.

“We are using these data in a very new way”, X said. “We are looking at them as a continuous stream, as opposed to looking at brain waves after a specific trigger or event”.

The work is described in “Trust and Influence in Intelligent Human-Machine Interaction”, by mechanical engineering graduate student Z; former graduate student W, now a postdoctoral research associate at Georgian Technical University; X; and Y.

“We are interested in using feedback-control principles to design machines that are capable of responding to changes in human trust level in real time, to build and manage trust in the human-machine relationship”, Y said. “In order to do this, we require a sensor for estimating human trust level, again in real time. The results presented in this paper show that psychophysiological measurements could be used to do this”. The issue of human trust in machines is important for the efficient operation of “human-agent collectives”.

“The future will be built around human-agent collectives that will require efficient and successful coordination and collaboration between humans and machines”, X said. “Say there is a swarm of robots assisting a rescue team during a natural disaster. In our work we are dealing with just one human and one machine, but ultimately we hope to scale up to teams of humans and machines”. Algorithms have been introduced to automate various processes. “But we still have humans there who monitor what’s going on”, X said. “There is usually an override feature where, if they think something isn’t right, they can take back control”.

Sometimes this action isn’t warranted. “You have situations in which humans may not understand what is happening, so they don’t trust the system to do the right thing”, Y said. “So they take back control even when they really shouldn’t”. In some cases, for example when pilots override the autopilot, taking back control might actually hinder safe operation of the aircraft and cause accidents. “A first step toward designing intelligent machines that are capable of building and maintaining trust with humans is the design of a sensor that will enable machines to estimate human trust level in real time”, X said.

To validate their method, the researchers asked 581 online participants to operate a driving simulation in which a computer identified road obstacles. In some scenarios the computer correctly identified obstacles 100 percent of the time, whereas in other scenarios it incorrectly identified the obstacles 50 percent of the time. “So in some cases it would tell you there is an obstacle, so you hit the brakes and avoid an accident, but in other cases it would incorrectly tell you an obstacle exists when there was none, so you hit the brakes for no reason”, Y said.

The testing allowed the researchers to identify psychophysiological features that are correlated with human trust in intelligent systems and to build a trust sensor model accordingly. “We hypothesized that the trust level would be high in reliable trials and low in faulty trials, and we validated this hypothesis using responses collected from 581 online participants”, she said. The results confirmed that the method effectively induced trust and distrust in the intelligent machine.

“In order to estimate trust in real time, we require the ability to continuously extract and evaluate key psychophysiological measurements”, X said. “This work represents the first use of real-time psychophysiological measurements for the development of a human trust sensor”.

The EEG headset records signals over nine channels, each channel picking up signals from different parts of the brain.

“Everyone’s brainwaves are different, so you need to make sure you are building a classifier that works for all humans”. For autonomous systems, human trust can be classified into three categories: dispositional, situational and learned. Dispositional trust refers to the component of trust that is dependent on demographics, such as gender and culture, which carry potential biases. “We know there are probably nuanced differences that should be taken into consideration”, Y said. “Women trust differently than men, for example, and trust also may be affected by differences in age and nationality”. Situational trust may be affected by a task’s level of risk or difficulty, while learned trust is based on the human’s past experience with autonomous systems. The models the researchers developed are called classification algorithms. “The idea is to be able to use these models to classify when someone is likely feeling trusting versus likely feeling distrusting”, she said. X and Y have also investigated dispositional trust to account for gender and cultural differences, as well as dynamic models able to predict how trust will change in the future based on the data.

 

 

Cancer Cells Distinguished By Artificial Intelligence-Based System.

Representative microscopic images of cancer cells and radioresistant cells.

In cancer patients there can be tremendous variation in the types of cancer cells from one patient to another, even within the same disease. Identifying the particular cell types present can be very useful when choosing the most effective treatment, but current methods of doing this are time-consuming and often hampered by human error and the limits of human sight.

In a major advance that could signal a new era in cancer diagnosis and treatment, a team at Georgian Technical University and colleagues have shown how these problems can be overcome with an artificial intelligence-based system that identifies different types of cancer cells simply by scanning microscopic images, achieving higher accuracy than human judgment. This approach could have major benefits in the field of oncology.

The system is based on a convolutional neural network, a form of artificial intelligence modeled on the human visual system. It was applied to distinguish cancer cells from mice and humans, as well as equivalent cells that had been selected for resistance to radiation.

“We first trained our system on 8,000 images of cells obtained from a phase-contrast microscope”, corresponding author X says. “We then tested its accuracy on another 2,000 images, to see whether it had learned the features that distinguish mouse cancer cells from human ones and radioresistant cancer cells from radiosensitive ones”.
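The article gives the training and test counts but not the network itself, so the following PyTorch sketch is only a plausible stand-in: a small convolutional classifier over single-channel phase-contrast images, with four assumed classes (mouse versus human, crossed with radioresistant versus radiosensitive) and dummy tensors in place of the real images.

```python
# Minimal sketch of such a training setup. Architecture, image size and the
# four-class structure are assumptions, not the paper's actual design.
import torch
import torch.nn as nn

class CellNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> 64 values
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, 1, H, W) grayscale
        return self.classifier(self.features(x).flatten(1))

model = CellNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch standing in for the 8,000 training images.
images, labels = torch.randn(8, 1, 128, 128), torch.randint(0, 4, (8,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```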

When the findings obtained by the system were plotted in two dimensions, the results for each cell type clustered together while remaining clearly separated from the other cell types. This showed that, after training, the system could correctly identify cells based on the microscopic images alone.

“The automation and high accuracy with which this system can identify cells should be very useful for determining exactly which cells are present in a tumor or circulating in the body of cancer patients”, Y says. “For example, knowing whether or not radioresistant cells are present is vital when deciding whether radiotherapy would be effective, and the same approach can be applied after treatment to see whether it has had the desired effect”.

In the future the team hopes to train the system on more cancer cell types with the eventual goal of establishing a universal system that can automatically identify and distinguish all such cells.

 

Researchers Develop ‘Soft’ Valves To Make Entirely Soft Robots.

When dropped on an object, the valve closes and the gripper activates on its own.

In recent years an entirely new class of robot, inspired by natural forms and built using soft, flexible elastomers, has taken the field by storm, with designs capable of gripping objects, walking and even jumping. Yet despite those innovations, so-called “soft” robots still carried some “hard” parts: the inflation and deflation of the robots was typically controlled by off-the-shelf pneumatic valves. Until now.

Rothemund and postdoctoral fellow X have created a soft valve that could replace such hard components and lead to the creation of entirely soft robots. The valve’s structure can also be used to produce unique oscillatory behavior and could even be used to build soft logic circuits.

“People have built many different types of soft robots, and all of them in the end are controlled by hard valves”, Y said. “Our idea was to build these control functions into the robot itself so we wouldn’t need these hard external parts anymore. This valve combines two simple ideas: first, the membrane is similar to ‘popper’ toys, and second, when you kink these tubes it’s like kinking a garden hose to block the water flow”.

The valve demonstrated by X and Y is built into a cylinder that is divided by a silicone membrane into an upper and a lower chamber. Pressurizing the lower chamber forces the membrane to pop up, and releasing the pressure causes it to pop back down to its “resting” state. Each chamber also contains a tube that is kinked when the membrane switches orientation, effectively turning the valve on or off.

“Whichever direction it’s in, it’s kinking a tube above or below”, X said. “So when it’s popped down, the bottom tube is kinked and there’s no air flow through it. When the membrane pops up, the top tube is kinked, the bottom tube unkinks, and air can flow through the bottom tube. We can switch back and forth between these two states … to switch the output”. In some ways, X and Y said, the valve represents a new approach to soft robotics.
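A toy state model makes the kinking logic concrete. The two snap pressures below are invented for illustration (the article gives no numbers); the point is that the membrane’s state, not the instantaneous pressure alone, determines which tube is blocked.

```python
# Toy model of the bistable valve: the membrane snaps up above one pressure
# and snaps back down below a lower one (hysteresis), and whichever state it
# is in kinks one of the two tubes. Threshold values are illustrative only.
class SoftValve:
    P_SNAP_UP = 12.0    # kPa needed to pop the membrane up (assumed)
    P_SNAP_DOWN = 5.0   # kPa below which it pops back down (assumed)

    def __init__(self):
        self.up = False  # membrane starts in its "resting" (down) state

    def update(self, lower_chamber_pressure):
        if not self.up and lower_chamber_pressure >= self.P_SNAP_UP:
            self.up = True    # membrane pops up and kinks the top tube
        elif self.up and lower_chamber_pressure <= self.P_SNAP_DOWN:
            self.up = False   # membrane pops down and kinks the bottom tube
        return {"top_tube_open": not self.up, "bottom_tube_open": self.up}

valve = SoftValve()
for p in [0, 8, 13, 8, 4]:    # pressurize, hold, then release
    print(p, valve.update(p))
```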

While most work in the field thus far has focused on function, building robots that can grip or act as soft surgical retractors, X and Y see the valve as a key component that could be used in any number of devices.

“The idea is that this works with any soft actuator”, Y said. “This doesn’t answer the question of how you make a gripper, but it takes a step back and says that many soft robots work on the same principle of inflation and deflation, so all those robots could use this valve”. X and Y were able to adapt the valve to perform some actions, such as gripping an object, autonomously.

In one demonstration, Y explained, the valve was built into a multifingered gripper, but a small vent was added to allow air pressure to escape the valve’s bottom chamber. When the gripper was lowered onto a tennis ball, however, the vent was closed, causing the bottom chamber to become pressurized, activating the valve and putting the gripper into action.

“So this integrates the function into the robot”, he said. “People have made grippers before, but there was always someone standing there to see that the gripper was close enough to activate. This does that automatically”. The team was also able to build a “feedback” system that, when fed by a single steady pressure, caused the valve to rapidly oscillate between states.

Essentially, X said, the system fed air pressure through the top chamber and into the bottom. When the valve popped into the raised position, it cut off the pressure, allowing the bottom chamber to vent; releasing the pressure caused the membrane to return to the down position, starting the cycle again.

“We took advantage of the fact that the pressure that causes the membrane to flip up is different from the pressure required for it to flip back down”, he explained. “So when we feed the output back into the valve itself, we get this oscillatory behavior”. Using that behavior, the team was able to build a simple “inchworm” robot capable of locomotion based on a single valve receiving a single input pressure. “So with one constant pressure we were able to get this walking motion”, X said. “We don’t control this walking at all; we just input a single pressure and it walks by itself”. Going forward, Y said, more work needs to be done to refine the valve so it can be optimized for various uses and various geometries.
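A few lines of simulation show how that hysteresis yields oscillation from one constant supply. The charging and venting rates here are assumed values, not figures from the article:

```python
# Sketch of the feedback oscillation: the chamber charges toward the supply
# pressure until the membrane pops up, then vents until it pops back down,
# and repeats indefinitely. All constants are illustrative.
P_SUPPLY, P_UP, P_DOWN = 15.0, 12.0, 5.0       # kPa, assumed values
CHARGE_RATE, VENT_RATE, DT = 30.0, 50.0, 0.01  # kPa/s and time step (s)

p, up, t = 0.0, False, 0.0
for step in range(300):
    if not up:
        p += (P_SUPPLY - p) * CHARGE_RATE / P_SUPPLY * DT  # chamber fills
        if p >= P_UP:
            up = True    # membrane pops up; supply is cut off, vent opens
    else:
        p -= VENT_RATE * DT                                # chamber vents
        if p <= P_DOWN:
            up = False   # membrane pops back down; the cycle restarts
    t += DT
    if step % 30 == 0:
        print(f"t={t:4.2f}s  p={p:5.1f} kPa  membrane {'up' if up else 'down'}")
```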

“This was just a demonstration with the membrane”, he said. “There are many different geometries that show this type of bistable behavior … so now we can actually think about designing this so it fits in a robot, depending on what application you have in mind”. X also hopes to explore whether the valve, because it is always in one of two states, could be used as a type of transistor to form logic circuits.

“It’s kind of like a transistor in a way”, he said. “You can have an input pressure come in and switch what the output is going to be … in that sense we could think about this almost like a building block for a completely soft computer”.

 

More Tests For Arctic Oil-Spill-Mapping Robot.

An artist’s depiction of LRAUC (Long Range Autonomous Underwater Car) under sea ice. Using photo-chemical sensors, the robot scans the density of a billowing cloud of oil coming from an ocean floor well. The red and yellow objects are parts of a communication system consisting of antennas suspended under ice from a buoy installed on top of the ice.

Environmental changes and economic incentives are transforming maritime activity in the Arctic region. As ice recedes and maritime activity increases, the Georgian Technical University Department of Homeland Security (DHS) is preparing for potential incidents involving oil and hazardous materials. As the lead agency for planning for and responding to environmental threats, the Service is addressing major challenges in spill response. Its focus is ensuring access to early and ongoing information about the nature and magnitude of spills to help with effective cleanup.

“Because of ice coverage and the tyranny of distance, it is difficult to get resources and assets up in the Arctic in a quick manner” said X. “With better real-time data, more effective response strategies can be developed and deployed.”

The result of this research is a helicopter-portable, torpedo-shaped system with oil sensors and navigation capabilities. The robot can provide real-time data for first responders by producing and transmitting 3-D maps of crude oil, diesel, gasoline and kerosene spills. Recently tested, it is scheduled for more tests this year and next, including under-ice tests.

Without recharging its batteries, the latest prototype can travel at 2-4 feet per second (1-3 miles per hour). It measures 8 feet long and 12 inches wide and weighs 240 pounds. Working in tandem with buoys installed on the ice, it can provide invaluable data about a spill.

Most importantly, this technology opens up possibilities. For example, if there were a large oil spill in the Y and the spill drifted, responders could deploy the robot and monitor its data transmissions back at their command center. The robot would scan for oil below and around the ice and transmit the results via the specially installed buoys.

Since there is no cellular coverage in the vast Arctic, the buoys, equipped with Very High Frequency antennas to transmit data via satellites, are a key component of the Long Range Autonomous Underwater Car’s success. When deployed, the buoys will provide solar or wave power to recharge the robot’s batteries, an effective way to keep it charged in such remote conditions. “Solar power units are increasingly very sensitive”, Z said. “Even in dark conditions and snow-laden environments, solar panels can still capture light reflected from the ice”. The robot was recently deployed with the goal of characterizing an oil spill and transmitting data back to shore. “The researchers showed us how the Long Range Autonomous Underwater Car works; this was the first test with the oil sensors and data transmission in action”, said X.

The researchers equipped the underwater robot with chemical sensors and simulated an oil spill from a vessel by “leaking” a non-toxic neon green sea dye into the water. The dye, just like oil, can float in the top 13 feet of the water column, but it biodegrades in sunlight in a matter of hours. “This specific water test was intended to check all the prior work in the newly fabricated car to characterize an oil spill”, said Z. The robot surfaced every few minutes to transmit and receive data from the control vessel and to check its location using a cellular connection. After several hours the Long Range Autonomous Underwater Car had successfully scanned the whole area and transmitted the data to shore for analysis.

The Long Range Autonomous Underwater Car is currently being prepared for transport to the site of the next test. The researchers will process the navigation performance data from the test and tune the navigation algorithms according to the results. Then the team will build three communication buoys and test them with the robot under ice; conducting the under-ice test is the team’s next target.

“The demonstration highlighted the unique capabilities of the Long Range Autonomous Underwater Car, which will be a welcome addition to the suite of tools used to deal with oil spills”, X said. “We look forward to the further development of the Long Range Autonomous Underwater Car’s capabilities, additional testing in real-world conditions and transitioning it into operational use”.

 

 

Insight Into Swimming Fish Could Lead To Robotics Advances.

The constant movement of fish that seems random is actually precisely deployed to provide them, at any moment, with the best sensory feedback they need to navigate the world, Georgian Technical University researchers found.

The finding enhances our understanding of active sensing behaviors performed by all animals, including humans, such as whisking, touching and sniffing. It also demonstrates how robots built with better sensors could interact with their environment more effectively.

“There’s a saying in biology that when the world is still, you stop being able to sense it”, says X, a mechanical engineer and roboticist at Georgian Technical University. “You have to actively move to perceive your world. But what we found, and what wasn’t known before, is that animals constantly regulate these movements to optimize sensory input”.

For humans active sensing includes feeling around in the dark for the bathroom light switch, or bobbling an object up and down in our hands to figure out how much it weighs. We do these things almost unconsciously and scientists have known little about how and why we adjust our movements in response to the sensory feedback we get from them.

To answer the question X and his colleagues studied fish that generate a weak electric field around their bodies to help them with communication and navigation. The team created an augmented reality for the fish so they could observe how fish movements changed as feedback from the environment changed.

Inside the tank the weakly electric fish hovered within a tube, where they wiggled back and forth constantly to maintain a steady level of sensory input about their surroundings. The researchers first changed the environment by moving the tube in a way that was synchronized with the fish’s movement, making it harder for the fish to extract the same amount of information they had been receiving. Next the researchers made the tube move in the direction opposite the fish’s movement, making it easier for the fish. In each case the fish immediately increased or decreased their swimming to make sure they were getting the same amount of information: they swam farther when the tube’s movement gave them less sensory feedback, and they swam less when they could get more feedback with less effort. The findings were even more pronounced in the dark, when the fish had to lean even more on their electrosense. “Their actions to perceive their world are under constant regulation”, said Y from the Georgian Technical University. “We think that’s also true for humans”.
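The regulation the researchers describe behaves like a feedback controller holding sensory input at a set point. The sketch below is a deliberately simple proportional-control caricature with invented gains, not the team’s model; it reproduces the qualitative result that the fish works harder when the tube follows it and eases off when the tube opposes it.

```python
# Toy feedback model: the "fish" adjusts its swimming effort so that the
# fish-to-tube relative motion (a stand-in for sensory feedback) stays at a
# fixed set point. All constants are invented for illustration.
SETPOINT = 1.0   # desired amount of relative motion (arbitrary units)
GAIN = 0.5       # how strongly the fish corrects its effort

def simulate(tube_follows_fish):
    """tube_follows_fish > 0: tube moves with the fish (less feedback);
       < 0: tube moves against the fish (more feedback per unit effort)."""
    effort = 1.0
    for _ in range(50):
        relative_motion = effort * (1.0 - tube_follows_fish)
        effort += GAIN * (SETPOINT - relative_motion)  # swim more or less
    return effort

print("still tube:   effort =", round(simulate(0.0), 2))
print("tube follows: effort =", round(simulate(0.5), 2))   # fish swims more
print("tube opposes: effort =", round(simulate(-0.5), 2))  # fish swims less
```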

Because X is a roboticist and most of the authors on this team are engineers, they hope to use the biological insight to build robots with smarter sensors. Sensors are rarely a key part of robot design now, but these findings made X realize they perhaps should be.

“Surprisingly, engineers don’t typically design systems to operate this way”, says Y, a graduate student at Georgian Technical University. “Knowing more about how these tiny movements work might offer new design strategies for our smart devices to sense the world”.

AI Could Help Cities Detect Expensive Water Leaks.

Costly water losses in municipal water systems could be significantly reduced using sensors and new artificial intelligence (AI) technology.

Developed by researchers at the Georgian Technical University in collaboration with industry partners, the technology has the potential to detect even small leaks in pipes. It combines sophisticated signal processing techniques and AI software to identify telltale signs of leaks carried via sound in water pipes.

The acoustic signatures are recorded by hydrophone sensors that can be easily and inexpensively installed in existing fire hydrants without excavation or taking them out of service.

“This would allow cities to use their resources for maintenance and repairs much more effectively”, said lead researcher X, a civil engineering PhD candidate at Georgian Technical University. “They could be more proactive as opposed to reactive”. Municipal water systems lose an average of over 13 per cent of their clean water between treatment and delivery due to leaks, bursts and other issues. Countries with older infrastructure have even higher loss rates. Major problems such as burst pipes are revealed by pressure changes, volume fluctuations or water simply bubbling to the surface, but small leaks often go undetected for years.

In addition to the economic costs of wasting treated water, chronic leaks can create health hazards, damage the foundations of structures and worsen over time. “By catching small leaks early, we can prevent costly, destructive bursts later on”, said X. Researchers are now doing field tests with the hydrant sensors after reliably detecting leaks as small as 17 litres a minute in the lab.

They are also working on ways to pinpoint the location of leaks, which would allow municipalities to identify, prioritize and carry out repairs.

“Right now they react to situations by sending workers out when there is flooding or to inspect a particular pipe if it’s due to be checked because of its age” X said.

The sensor technology works by pre-processing acoustic data using advanced signal processing techniques to highlight components associated with leaks.

That makes it possible for machine learning algorithms to identify leaks by distinguishing their signs from the many other sources of noise in a water distribution system.
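The article names the two stages, filtering and then learning, without specifying either, so the pipeline below is only a guess at its shape: a band-pass filter and a few spectral features feed a stock classifier, with random arrays standing in for labeled hydrant recordings. The sampling rate, frequency band, features and classifier are all assumptions.

```python
# Sketch of an acoustic leak-detection pipeline: band-pass filter hydrophone
# audio, extract simple spectral features, then classify leak vs. non-leak.
# None of the specifics below come from the research team.
import numpy as np
from scipy import signal
from sklearn.ensemble import RandomForestClassifier

FS = 8000  # Hz, assumed hydrophone sampling rate

def leak_features(audio):
    # Emphasize a mid-frequency band where leak hiss is assumed to live.
    b, a = signal.butter(4, [200, 1500], btype="bandpass", fs=FS)
    filtered = signal.filtfilt(b, a, audio)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), 1 / FS)
    centroid = (freqs * spectrum).sum() / spectrum.sum()
    return [filtered.std(), centroid, spectrum.max() / spectrum.mean()]

# Dummy training data standing in for labeled hydrant recordings.
rng = np.random.default_rng(1)
recordings = rng.normal(size=(100, FS))   # 100 one-second clips
labels = rng.integers(0, 2, size=100)     # 1 = leak present

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit([leak_features(r) for r in recordings], labels)
print("predicted:", clf.predict([leak_features(recordings[0])]))
```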

 

Artificial Intelligence May Help Reduce Gadolinium Dose in MRI.

Example images: full-dose, 10 percent low-dose and algorithm-enhanced low-dose.

Researchers are using artificial intelligence to reduce the dose of a contrast agent that may be left behind in the body after MRI (magnetic resonance imaging) exams, according to a study being presented today at the annual meeting of the Georgian Technical University.

Gadolinium is a heavy metal used in contrast material that enhances images on MRI. Recent studies have found that trace amounts of the metal remain in the bodies of people who have undergone exams with certain types of gadolinium. The effects of this deposition are not known, but radiologists are working proactively to optimize patient safety while preserving the important information that gadolinium-enhanced MRI scans provide.

“There is concrete evidence that gadolinium deposits in the brain and body”, said X, Ph.D., a researcher at Georgian Technical University. “While the implications of this are unclear, mitigating potential patient risks while maximizing the clinical value of MRI exams is imperative”.

Dr. X and colleagues at Georgian Technical University have been studying deep learning as a way to achieve this goal. Deep learning is a sophisticated artificial intelligence technique that teaches computers by example. Through the use of models called convolutional neural networks, a computer can not only recognize images but also find subtle distinctions among the imaging data that a human observer might not be capable of discerning.

To train the deep learning algorithm, the researchers used MR images from 200 patients who had received contrast-enhanced MRI exams for a variety of indications. They collected three sets of images for each patient: pre-contrast scans, done prior to contrast administration and referred to as the zero-dose scans; low-dose scans, acquired after administration of 10 percent of the standard gadolinium dose; and full-dose scans, acquired after 100 percent dose administration. The algorithm learned to approximate the full-dose scans from the zero-dose and low-dose images. Neuroradiologists then evaluated the images for contrast enhancement and overall quality.
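A minimal sketch of that training setup, assuming PyTorch and a toy three-layer network in place of whatever architecture the team actually used: the zero-dose and low-dose scans enter as two channels, and the network is penalized for deviating from the full-dose target.

```python
# Sketch: learn to synthesize a full-dose image from zero-dose + low-dose
# inputs. The architecture and loss here are placeholders, not the study's.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),  # zero-dose + low-dose in
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),             # synthesized full-dose out
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy tensors standing in for co-registered scans from the 200 patients.
zero_dose = torch.randn(4, 1, 256, 256)
low_dose = torch.randn(4, 1, 256, 256)
full_dose = torch.randn(4, 1, 256, 256)

pred = model(torch.cat([zero_dose, low_dose], dim=1))
loss = nn.functional.l1_loss(pred, full_dose)   # pixelwise fidelity loss
optimizer.zero_grad(); loss.backward(); optimizer.step()
```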

Results showed that image quality was not significantly different between the low-dose, algorithm-enhanced MR images and the full-dose, contrast-enhanced MR images. The initial results also demonstrated the potential for creating the equivalent of full-dose, contrast-enhanced MR images without any contrast agent use. These findings suggest the method’s potential for dramatically reducing gadolinium dose without sacrificing diagnostic quality, according to Dr. X.

“Low-dose gadolinium images yield significant untapped, clinically useful information that is accessible now by using deep learning and AI”, he said.

Now that the researchers have shown that the method is technically possible, they want to study it further in the clinical setting, where Dr. X believes it will ultimately find a home.

Future research will include evaluation of the algorithm across a broader range of MRI scanners and with different types of contrast agents. “We’re not trying to replace existing imaging technology”, Dr. X said. “We’re trying to improve it and generate more value from the existing information, while looking out for the safety of our patients”.

 

Machine Learning Masters the Fingerprint to Fool Biometric Systems.

Fingerprint authentication systems are a widely trusted, ubiquitous form of biometric authentication, deployed on billions of smartphones and other devices worldwide. Yet a new study from Georgian Technical University reveals a surprising level of vulnerability in these systems. Using a neural network trained to synthesize human fingerprints, the research team evolved a fake fingerprint that could potentially fool a touch-based authentication system for up to one in five people.

Much the way that a master key can unlock every door in a building, these “DeepMasterPrints” use artificial intelligence to match a large number of prints stored in fingerprint databases, and could thus theoretically unlock a large number of devices. The research team was headed by Associate Professor of Computer Science and Engineering X and doctoral student Y at Georgian Technical University.

The work builds on earlier research led by Z, professor of computer science and engineering and associate dean for online learning at Georgian Technical University W. Z described how fingerprint-based systems use partial fingerprints, rather than full ones, to confirm identity. Devices typically allow users to enroll several different finger images, and a match for any saved partial print is enough to confirm identity. Partial fingerprints are less likely to be unique than full prints, and W’s work demonstrated that enough similarities exist between partial prints to create MasterPrints capable of matching many stored partials in a database.

Y and his collaborators, including W, took this concept further, training a machine-learning algorithm to generate synthetic fingerprints as MasterPrints. The researchers created complete images of these synthetic fingerprints, a process that has twofold significance. First, it is yet another step toward assessing the viability of MasterPrints against real devices, which the researchers have yet to test. Second, because these images replicate the quality of fingerprint images stored in fingerprint-accessible systems, they could potentially be used to launch a brute force attack against a secure cache of these images.
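The article describes the search but not its internals, so the sketch below is a schematic of the general “latent variable evolution” idea rather than the team’s method: a simple (1+1) evolutionary loop climbs through a latent space looking for one synthetic print that matches as many enrolled prints as possible. Both generate_print() and matches() are hypothetical placeholders for a trained generator and a real fingerprint matcher.

```python
# Schematic of latent variable evolution with placeholder components.
import numpy as np

rng = np.random.default_rng(0)
DIM = 100                               # latent dimension (assumed)
enrolled = rng.normal(size=(500, DIM))  # stand-in for a print database

def generate_print(z):
    return z  # placeholder: a real GAN would map z to a fingerprint image

def matches(print_vec, database, threshold=1.2):
    # Placeholder matcher: counts enrolled prints "close enough" to the fake.
    dists = np.linalg.norm(database - print_vec, axis=1) / np.sqrt(DIM)
    return int((dists < threshold).sum())

# Simple (1+1) evolutionary search: keep a mutation only if it fools more.
z = rng.normal(size=DIM)
best = matches(generate_print(z), enrolled)
for _ in range(2000):
    candidate = z + rng.normal(scale=0.1, size=DIM)
    score = matches(generate_print(candidate), enrolled)
    if score >= best:
        z, best = candidate, score
print("enrolled prints matched by one synthetic print:", best)
```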

“Fingerprint-based authentication is still a strong way to protect a device or a system, but at this point most systems don’t verify whether a fingerprint or other biometric is coming from a real person or a replica”, said Y. “These experiments demonstrate the need for multi-factor authentication and should be a wake-up call for device manufacturers about the potential for artificial fingerprint attacks”. This research has applications in fields beyond security. X noted that the evolutionary method used here to generate fingerprints can also be used to make designs in other industries, notably game development; the technique has already been used to generate new levels in popular video games.

 

 

Aquatic Animals That Jump Out of Water Inspire Leaping Robots.

Ever watch aquatic animals jump out of the water and wonder how they manage to do it in such a streamlined and graceful way? A group of researchers who specialize in water entry and exit in nature had the same question, and they are exploring the specific physical conditions required for animals to successfully leap out of water.

During the Georgian Technical University Physical Society’s meeting, X, an associate professor of biology and environmental engineering at Georgian Technical University, and one of his students, Y, will present their work designing a robotic system inspired by jumping copepods (tiny crustaceans) and frogs, to illuminate some of the fluid dynamics at play when aquatic animals jump.

“We collected data about aquatic animals of different sizes, from about 1 millimeter to tens of meters, jumping out of water, and were able to reveal how their maximum jumping heights are related to their body size”, said X.

In nature, animals frequently move in and out of water for various purposes, including escaping predators, catching prey or communicating. “But since water is 1,000 times denser than air, entering or exiting water requires a lot of effort, so aquatic animals face mechanical challenges”, X said.

As an object, like a dolphin or a copepod, jumps through water, mass is added to it, a quantity referred to as “entrained water mass”. This entrained water mass is incorporated into, and gets swept along with, the flow off aquatic animals’ bodies. The group discovered that entrained water mass is important because it limits the animals’ maximum jumping height.
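A toy calculation illustrates why entrained water caps jump height. Under a crude momentum-sharing assumption (not the authors’ analysis), a fixed propulsive impulse must now accelerate body plus water, so exit speed, and with it ballistic height, falls as entrained mass grows:

```python
# Toy model: for a fixed propulsive impulse J, the entrained water shares
# the momentum, so exit speed v = J / (m_body + m_entrained) and the
# ballistic height is h = v**2 / (2 g). Numbers are invented for illustration.
G = 9.81                    # m/s^2

def jump_height(impulse, m_body, m_entrained):
    v_exit = impulse / (m_body + m_entrained)  # momentum shared with water
    return v_exit**2 / (2 * G)                 # rise after leaving the water

m_body, impulse = 0.5, 2.0  # kg, kg*m/s (hypothetical jumper)
for m_water in [0.0, 0.25, 0.5]:
    h = jump_height(impulse, m_body, m_water)
    print(f"entrained water {m_water:.2f} kg -> jump height {h:.2f} m")
```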

“We’re trying to understand how biological systems are able to smartly figure out and overcome these challenges to maximize their performance, which might also shed light on engineering systems that enter or exit air-water interfaces”, X said.

Most aquatic animals are streamlined, limiting the entrained water mass’s effect, so water slides easily off their bodies. “That’s why they’re such good jumpers”, said X. “But when we made and tested a robotic system similar to jumping animals, it didn’t jump as high as animals. Why? Our robot isn’t as streamlined and carries a lot of water with it. Imagine getting out of a swimming pool with a wet coat; you might not be able to walk due to the water weight”.

The group’s robot features a simple design akin to a door hinge with a rubber band: a rubber band is wrapped around a 3D-printed door hinge’s outer perimeter, while a tiny wire that holds the door hinge allows it to flip back when fluid is pushed downward. “This robot shows the importance of entrained water while an object jumps out of the water”, he said.

Next, the group will modify and advance their robotic system so that it can jump out of the water to greater heights, similar to those reached by animals like copepods or frogs. “This system might then be able to be used for surveillance near water basins”, said X.