Category Archives: A.I./Robotics

Georgian Technical University New Algorithm Helps Robots And Humans Work In The Same Space.

Building off previous robotic advancements, a team of researchers has found a way to program robots to better predict a person’s movement trajectory, enabling the integration of robots and humans in manufacturing. Scientists from the Georgian Technical University (GTU) have created an algorithm that accurately aligns partial trajectories in real time, giving robots the ability to anticipate human motion.

Georgian Technical University researchers began by integrating robots with a mock factory assembly line. The robots were programmed to stop briefly when crossing paths with a human, but they ultimately behaved overly cautiously, freezing well before a human even crossed their path. The researchers found that this problem was caused by a limitation in the trajectory alignment algorithm: it enabled the robot to predict motion, but its poor time alignment prevented it from anticipating how long a person would spend at any point along their predicted path. After testing their new algorithm at the Georgian Technical University factory, the robots no longer froze; instead, they simply moved out of the way by the time a person stopped and doubled back to cross the robot’s path again.

“This algorithm builds in components that help a robot understand and monitor stops and overlaps in movement, which are a core part of human motion,” X, associate professor of aeronautics and astronautics at Georgian Technical University, said in a statement. “This technique is one of the many ways we’re working on robots better understanding people.”

Robots are typically programmed with algorithms originally developed for music and speech processing to predict human movements. However, these types of algorithms are designed to align only two complete time series, or sets of related data. These algorithms take in streaming motion data in the form of dots that represent the position of a person over time, and compare the trajectory of those dots to a library of common trajectories for a given scenario. A system based on distance alone is easily confused in common situations such as temporary stops.

“When you look at the data, you have a whole bunch of points clustered together when a person is stopped,” graduate student Y said in a statement. “If you’re only looking at the distance between points as your alignment metric, that can be confusing, because they’re all close together and you don’t have a good idea of which point you have to align to.”

Overlapping trajectories, where a person moves back and forth along a similar path, also may not line up with a dot on a reference trajectory under existing algorithms. “You may have points close together in terms of distance, but in terms of time, a person’s position may actually be far from a reference point,” Y said.

The researchers overcame the limitations of existing algorithms by creating a partial trajectory algorithm that aligns segments of a person’s trajectory in real time with a library of previously collected reference trajectories. In the new system, trajectories are aligned in both distance and timing, allowing the robot to anticipate stops and overlaps along a person’s walking path. “Say you’ve executed this much of a motion,” Y said. “Old techniques will say, ‘this is the closest point on this representative trajectory for that motion.’ But since you only completed this much of it in a short amount of time, the timing part of the algorithm will say, ‘based on the timing, it’s unlikely that you’re already on your way back, because you just started your motion.’”
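The article does not include the team’s code, but the core idea it describes, scoring a partially observed trajectory against a library of references using both distance and timing, can be sketched in a few lines of Python. The following is an illustration only, not the GTU algorithm; the cost weighting and all names are assumptions.

import math

def point_cost(obs, ref, w_time=0.5):
    """Cost of aligning one observed sample to one reference sample.

    obs, ref: (t, x, y) tuples; w_time weights the timing mismatch,
    which is the ingredient the article says distance-only methods lack.
    """
    (t1, x1, y1), (t2, x2, y2) = obs, ref
    spatial = math.hypot(x1 - x2, y1 - y2)
    temporal = abs(t1 - t2)
    return spatial + w_time * temporal

def partial_alignment_cost(observed, reference):
    """Open-ended dynamic-time-warping of a PARTIAL observation against a
    full reference: the observation must be fully explained, but it may end
    anywhere along the reference (the person has not finished the motion)."""
    n, m = len(observed), len(reference)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = point_cost(observed[i - 1], reference[j - 1])
            D[i][j] = c + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return min(D[n][1:])  # best cost over all prefixes of the reference

def best_reference(observed, library):
    return min(library, key=lambda ref: partial_alignment_cost(observed, ref))

# Toy usage: a person walks out and pauses; the timing term keeps the match honest.
ref_cross = [(t, t * 0.5, 0.0) for t in range(10)]          # steady crossing
ref_pause = [(t, min(t, 4) * 0.5, 0.0) for t in range(10)]  # walks, then stops
observed  = [(0, 0.0, 0.0), (1, 0.5, 0.0), (2, 1.0, 0.0),
             (3, 2.0, 0.0), (4, 2.0, 0.0), (5, 2.0, 0.0)]   # stopped at x = 2.0
print(best_reference(observed, [ref_cross, ref_pause]) is ref_pause)  # True

Because the timing term penalizes matching an early observation to a late reference point, the paused person is matched to the “walk, then stop” reference rather than to a farther point along the steady crossing.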
They further tested the algorithm against common partial trajectory alignment algorithms on a pair of human motion datasets. The first dataset involves someone intermittently crossing the path of a robot in a factory setting, while the second features previously recorded hand movements of participants reaching across a table to install a bolt, onto which the robot then brushes sealant. In both scenarios the new algorithm helped the robot better estimate the person’s progress through a trajectory. The researchers then integrated the alignment algorithm with motion predictors, allowing the robot to more accurately anticipate a person’s motion timing. The robot was also less prone to freezing in the factory floor scenario. While the aim was to integrate robots into a factory setting, the researchers believe the algorithm could serve as a preprocessing step for other human-robot interaction techniques, including action recognition and gesture detection. “This technique could apply to any environment where humans exhibit typical patterns of behavior,” X said. “The key is that the [robotic] system can observe patterns that occur over and over, so that it can learn something about human behavior. This is all in the vein of work on the robot better understanding aspects of human motion, to be able to collaborate with us better.”

 

Georgian Technical University AI Software Reveals The Inner Workings Of Short-term Memory.

Research by neuroscientists at the Georgian Technical University shows how short-term working memory uses networks of neurons differently depending on the complexity of the task at hand. The researchers used modern artificial intelligence (AI) techniques to train computational neural networks to solve a range of complex behavioral tasks that required storing information in short-term memory. The AI networks were based on the biological structure of the brain and revealed two distinct processes involved in short-term memory: a “silent” process, where the brain stores short-term memories without ongoing neural activity, and a second, more active process, where circuits of neurons fire continuously.

“Short-term memory is likely composed of many different processes, from very simple ones where you need to recall something you saw a few seconds ago, to more complex processes where you have to manipulate the information you are holding in memory,” X said. “We’ve identified how two different neural mechanisms work together to solve different kinds of memory tasks.”

Active versus silent memory.

Many daily tasks require the use of working memory: information that you need to do something in the moment but are likely to forget later. Sometimes you actively remember something on purpose, like when you’re doing a math problem in your head or trying to remember a phone number before you have a chance to write it down. You also passively absorb information that you can recall later even if you didn’t make a point of remembering it, like if someone asks whether you saw a particular person in the hallway.

Neuroscientists have learned a lot about how the brain represents information held in memory by monitoring the patterns of electrical activity coursing through the brains of animals as they perform tasks that require short-term memory, measuring the activity of individual brain cells as the animals work. But X said he and his team were surprised that, during certain tasks that required information to be held in memory, their experiments found neural circuits to be unusually quiet. This led them to speculate that these “silent” memories might reside in temporary changes in the strength of the connections, or synapses, between neurons. The problem is that it is impossible, using current technology, to measure what is happening in synapses during these “silent” periods in a living animal’s brain. So X and his team have been developing AI approaches that use data from the animal experiments to design networks that simulate how the neurons in a real brain connect with each other. They can then train the networks to solve the same kinds of tasks studied in the animal experiments.

During experiments with these biologically inspired neural networks, the researchers were able to see two distinct processes at play during short-term memory processing. One, called persistent neuronal activity, was especially evident during more complex, but still short-term, tasks. When a neuron gets an input it generates a brief electrical spike in activity. Neurons form synapses with other neurons, and as one neuron fires it triggers a chain reaction that makes other neurons fire. Usually this pattern of activity stops when the input is gone, but the AI model showed that, when performing certain tasks, some circuits of neurons would continue firing even after an input was removed, like a reverberation or echo. This persistent activity appeared to be especially important for more complex problems that required information in memory to be manipulated in some way.

The researchers also saw a second process that explained how the brain could keep information in memory without persistent activity, as they had observed in their brain recording experiments. It is similar to the way the brain stores things in long-term memory, by making complex networks of connections among many neurons. As the brain learns new information, these connections are strengthened, rerouted or removed, a concept known as plasticity. The AI models showed that during the silent periods of memory, the brain can use a short-term form of plasticity in the synaptic connections between neurons to remember information temporarily. Both of these forms of short-term memory last from a few seconds up to a few minutes. Some of the information used in working memory may end up in long-term storage, but most of it fades away with time. “It’s like writing something with your finger on a fogged-up mirror, instead of writing it with a permanent marker,” X said.

Complementary fields of research.

The study demonstrates how valuable AI has become to the study of neuroscience, and how the two fields inform each other. X said that artificial neural networks are often smarter and easier to train on complex tasks when they are modeled after the real brain. This also makes biologically inspired AI networks better platforms for testing ideas about how the real brain functions. “These two fields are really benefiting one another,” he said. “Insights from neuroscience experiments are helping create smarter artificial intelligence (AI), and studying circuits in artificial networks is helping answer fundamental questions about the brain.”
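As a rough illustration of the modeling approach described above (task-trained, biologically inspired recurrent networks), here is a minimal PyTorch sketch of a delayed-recall task. It is not the authors’ model; the task design and all hyperparameters are assumptions.

import torch
import torch.nn as nn

N_CUES, DELAY, HIDDEN = 4, 10, 64

class MemoryRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(N_CUES, HIDDEN, batch_first=True)
        self.readout = nn.Linear(HIDDEN, N_CUES)

    def forward(self, x):
        states, _ = self.rnn(x)                      # hidden activity over time
        return self.readout(states[:, -1]), states  # decision at the final step

def make_batch(batch=64):
    """A cue is shown only at t=0; the network must report it after a delay."""
    cues = torch.randint(0, N_CUES, (batch,))
    x = torch.zeros(batch, 1 + DELAY, N_CUES)
    x[torch.arange(batch), 0, cues] = 1.0
    return x, cues

model = MemoryRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    x, cues = make_batch()
    logits, states = model(x)
    loss = loss_fn(logits, cues)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inspecting `states` during the delay shows whether the trained network
# solved the task with persistent firing; adding trainable short-term
# synaptic dynamics would expose the "silent" alternative discussed above.
x, cues = make_batch(8)
logits, states = model(x)
print((logits.argmax(1) == cues).float().mean().item())  # recall accuracy
print(states.abs().mean(dim=(0, 2)))                     # mean activity per delay step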

Georgian Technical University Machine Learning For Sensors.

AIfES (Artificial Intelligence for Embedded Systems) demonstrator for handwriting recognition: numbers written by hand on the PS/2 touchpad are identified and output by the microcontroller. All functions have been integrated so that the demonstrator reads the sensor values of the touchpad, performs number recognition and outputs the result to the display. (An embedded system is a controller with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device, often including hardware and mechanical parts. Embedded systems control many devices in common use today; ninety-eight percent of all microprocessors manufactured are used in embedded systems.)

Today microcontrollers can be found in almost any technical device, from washing machines to blood pressure meters and wearables. Researchers at the Georgian Technical University have developed AIfES, an artificial intelligence (AI) concept for microcontrollers and sensors that contains a completely configurable artificial neural network. AIfES is a platform-independent machine learning library which can be used to realize self-learning microelectronics requiring no connection to a cloud or to high-performance computers. The sensor-related AI system recognizes handwriting and gestures, enabling, for example, gesture control of input when the library is running on a wearable.

A wide variety of software solutions currently exist for machine learning, but as a rule they are only available for the personal computer and are based on high-level programming languages. There has been no solution which makes it possible to execute and train neural networks on embedded systems such as microcontrollers. Nevertheless, it can be useful to conduct the training directly in the embedded system, for example when an implanted sensor is to calibrate itself. The vision is sensor-related AI that can be directly integrated in a sensor system.
A team of researchers at Fraunhofer IMS has made this vision a reality in the form of AIfES, a machine learning library programmed in C that can run on microcontrollers, but also on other platforms such as the personal computer. The library currently contains a completely configurable artificial neural network, which can also generate deep networks for deep learning when necessary. An artificial neural network is an attempt to mathematically simulate the human brain using algorithms, in order to make functional relationships learnable by the algorithms. AIfES has been optimized specifically for embedded systems. “We’ve reduced the source code to a minimum, which means the artificial neural network can be trained directly on the microcontroller or the sensor, i.e. the embedded system. In addition, the source code is universally valid and can be compiled for almost any platform. Because the same algorithms are always used, an artificial neural network generated, for example, on a personal computer can easily be ported to a microcontroller. Until now this has been impossible in this form with commercially available software solutions,” says Dr. X, research associate at Georgian Technical University.

Protection of privacy.

Another distinguishing feature of the sensor-related AI from Georgian Technical University: until now, artificial intelligence and neural networks have been used primarily for image processing and speech recognition, sometimes with the data leaving the local systems. For example, voice profiles are processed in the cloud on external servers, since the computing power of the local system is not always adequate. “It’s difficult to protect privacy in this process, and enormous amounts of data are transmitted. That’s why we’ve chosen a different approach and are turning away from machine learning processes in the cloud, in favor of machine learning directly in the embedded system. Since no sensitive data leave the system, data protection can be guaranteed and the amounts of data to be transferred are significantly reduced,” says Y, “Embedded Systems” group manager at Georgian Technical University. “Of course it’s not possible to implement giant deep learning models on an embedded system, so we’re increasing our efforts toward elegant feature extraction to reduce input signals.” By embedding the AI directly in the microcontroller, the researchers make it possible to equip a device with additional functions without the need for expensive hardware modifications.

Reducing data.

AIfES doesn’t focus on processing large amounts of data; instead, it transfers only the data needed to build very small neural networks. “We’re not following the trend toward processing big data; we’re sticking with the absolutely necessary data and are creating a kind of micro-intelligence in the embedded system that can resolve the task in question. We develop new feature extractions and new data pre-processing strategies for each problem, so that we can realize the smallest possible artificial neural network. This enables subsequent learning on the controller itself,” Y explains. The approach has already been put into practice in the form of several demonstrators. For example, the research team implemented the recognition of handwritten numbers on an inexpensive 8-bit microcontroller, made technically possible by an innovative feature extraction method. Another demonstrator is capable of recognizing complex gestures made in the air. Here the Georgian Technical University scientists have developed a system consisting of a microcontroller and an absolute orientation sensor that recognizes numbers written in the air. “One possible application here would be operation of a wearable,” the researchers point out. “In order for this type of communication to work, various persons write the numbers one through nine several times. The neural network receives this training data, learns from it, and in the next step identifies the numbers independently. And almost any figure can be trained, not only numbers.” This eliminates the need to control the device using speech recognition: the wearable can be controlled with gestures, and the user’s privacy remains protected.

There are practically no limits to the potential applications of AIfES. For example, a wristband with integrated gesture recognition could be used to control indoor lighting. And not only can AIfES recognize gestures, it can also monitor how well the gestures have been made. Exercises and movements in physical therapy and fitness can be evaluated without the need for a coach or therapist, and privacy is maintained since no camera or cloud is used. AIfES can be used in a variety of fields such as automotive, medicine, smart home and Industrie 4.0.

Decentralized AI.

And there are more advantages to AIfES: the library makes it possible to decentralize computing power, for example by allowing small embedded systems to receive data, process it, and pass on the results to a superordinate system. This dramatically reduces the amount of data to be transferred. In addition, it’s possible to implement a network of small learning-capable systems which distribute tasks among themselves.

Deep learning.

AIfES currently contains a neural network with a feedforward structure that also supports deep neural networks. “We programmed our solution so that we can describe a complete network with one single function,” says Y. The integration of additional network forms and structures is currently in development. Furthermore, the researcher and his colleagues are developing hardware components for neural networks, in addition to other learning algorithms and demonstrators. Fraunhofer Georgian Technical University is currently working on a microprocessor which will have a hardware accelerator specifically for neural networks. A special version of AIfES is being optimized for this hardware in order to optimally exploit the resource.
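AIfES itself is written in C and its real API is not shown in this article. Purely to illustrate the concept of a tiny, fully configurable feedforward network that is cheap enough to train on-device and can be described with one function call, here is a dependency-free Python sketch; every name in it is hypothetical, and the XOR task stands in for a simple sensor-classification problem.

import math, random

def create_network(layer_sizes, seed=0):
    """Describe a complete feedforward net with one single function call."""
    rng = random.Random(seed)
    return [
        {"W": [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)],
         "b": [0.0] * n_out}
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
    ]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(net, x):
    activations = [x]
    for layer in net:
        x = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(layer["W"], layer["b"])]
        activations.append(x)
    return activations

def train_step(net, x, target, lr=1.0):
    """One step of plain backpropagation: cheap enough for a microcontroller."""
    acts = forward(net, x)
    # delta for sigmoid output units with squared error
    delta = [(a - t) * a * (1 - a) for a, t in zip(acts[-1], target)]
    for li in reversed(range(len(net))):
        layer, inputs = net[li], acts[li]
        new_delta = [0.0] * len(inputs)
        for j, row in enumerate(layer["W"]):
            for i in range(len(inputs)):
                new_delta[i] += row[i] * delta[j]   # read weight before updating it
                row[i] -= lr * delta[j] * inputs[i]
            layer["b"][j] -= lr * delta[j]
        delta = [d * a * (1 - a) for d, a in zip(new_delta, acts[li])]

# Toy on-device task: learn XOR of two binary sensor features.
net = create_network([2, 4, 1])
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
for epoch in range(5000):
    for x, t in data:
        train_step(net, x, t)
print([round(forward(net, x)[-1][0]) for x, _ in data])  # typically [0, 1, 1, 0]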

Georgian Technical University ‘Slothbot’ Takes A Leisurely Approach To Environmental Monitoring.

Graduate Research Assistant X shows the components of Georgian Technical University Bot on a cable in a Georgian Technical University lab. The robot is designed to be slow and energy efficient for applications such as environmental monitoring. A close-up view shows the components of the Georgian Technical University Bot, which is powered by two photovoltaic panels; 3D-printed gears and switches help the robot move from one cable to another.

For environmental monitoring, precision agriculture, infrastructure maintenance and certain security applications, slow and energy efficient can be better than fast and always needing a recharge. That’s where “Georgian Technical University Bot” comes in. Powered by a pair of photovoltaic panels and designed to linger in the forest canopy continuously for months, Georgian Technical University Bot moves only when it must, to measure environmental changes, such as weather and chemical factors in the environment, that can be observed only with a long-term presence. The proof-of-concept hyper-efficient robot may soon be hanging out among treetop cables at the Georgian Technical University.

“In robotics it seems we are always pushing for faster, more agile and more extreme robots,” said Y, principal investigator for Georgian Technical University Bot at the Georgian Technical University. “But there are many applications where there is no need to be fast. You just have to be out there persistently over long periods of time, observing what’s going on.”

Based on what Y calls the “theory of slowness,” Graduate Research Assistant X designed Georgian Technical University Bot together with his colleague Z, using 3D-printed parts for the gearing and wire-switching mechanisms needed to crawl through a network of wires in the trees. The greatest challenge for a wire-crawling robot is switching from one cable to another without falling, X said. “The challenge is smoothly holding onto one wire while grabbing another,” he said. “It’s a tricky maneuver and you have to do it right to provide a fail-safe transition. Making sure the switches work well over long periods of time is really the biggest challenge.”

Mechanically, Georgian Technical University Bot consists of two bodies connected by an actuated hinge. Each body houses a driving motor connected to a rim on which a tire is mounted. The use of wheels for locomotion is simple, energy efficient and safer than other types of wire-based locomotion, the researchers say. Georgian Technical University Bot has so far operated on a network of cables at the Georgian Technical University. Next, a new 3D-printed shell that makes the robot look more like a sloth will protect the motors, gears, actuators, cameras, computer and other components from the rain and wind. That will set the stage for longer-term studies in the tree canopy, where Y hopes visitors will see a Georgian Technical University Bot monitoring conditions as early as this fall.

The name Georgian Technical University Bot is not a coincidence. Real-life sloths are small mammals that live in jungle canopies. Making their living by eating tree leaves, the animals can survive on the daily caloric equivalent of a small potato. With their slow metabolism, sloths rest as much as 22 hours a day and seldom descend from the trees, where they can spend their entire lives.

“The life of a sloth is pretty slow-moving and there’s not a lot of excitement on a day-to-day level,” said W, an associate professor in the Department of Forest & Wildlife Ecology at the Georgian Technical University, who has consulted with the team on the project. “The nice thing about a very slow life history is that you don’t really need a lot of energy input. You can have a long duration and persistence in a limited area with very little energy input over a long period of time.”

That’s exactly what the researchers expect from Georgian Technical University Bot, whose development has been funded by Georgian Technical University Research. “There is a lot we don’t know about what actually happens under dense tree-covered areas,” Y said. “Most of the time Georgian Technical University Bot will be just hanging out there, and every now and then it will move into a sunny spot to recharge the battery.”

The researchers also hope to test Georgian Technical University Bot in a cacao plantation that is already home to real sloths. “The cables used to move cacao have become a sloth superhighway because the animals find them useful to move around,” Y said. “If all goes well, we will deploy Georgian Technical University Bots along the cables to monitor the sloths.”

Y is known for algorithms that drive swarms of small wheeled or flying robots. But after becoming interested in sloths, he began developing what he calls “a theory of slowness” together with Professor Q in the Georgian Technical University School of Interactive Computing. The theory leverages the benefits of energy efficiency. “If you are doing things like environmental monitoring, you want to be out in the forest for months,” Y said. “That changes the way you think about control systems at a high level.”

Flying robots are already used for environmental monitoring, but their high energy needs mean they cannot linger for long. Wheeled robots can get by with less energy, but they can get stuck in mud, be hampered by tree roots, and cannot get a big-picture view from the ground. “The thing that costs energy more than anything else is movement,” Y said. “Moving is much more expensive than sensing or thinking. For environmental robots, you should only move when you absolutely have to. We had to think about what that would be like.”

For W, who studies a variety of wildlife, working with Y to help Georgian Technical University Bot come to life has been gratifying. “It is great to see a robot inspired by the biology of sloths,” he said. “It has been fun to share how sloths and other organisms that live in these ecosystems for long periods of time live their lives. It will be interesting to see robots mirroring what we see in natural ecological communities.”
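The article describes the control philosophy (“only move when you absolutely have to”) rather than any code. The following Python sketch is a hypothetical illustration of such an energy-aware duty cycle, with a simulated robot standing in for the hardware and all thresholds invented.

class SimulatedSlothbot:
    """Stand-in for the real hardware so the loop below can run anywhere."""
    def __init__(self):
        self.charge = 0.31          # start near the recharge threshold

    def battery_level(self):
        return self.charge

    def crawl_to_sunny_spot(self):
        print("crawling to sunlight")
        self.charge = 1.0           # photovoltaic panels refill the battery

    def read_sensors(self):
        self.charge -= 0.001        # sensing costs a little energy
        return {"temp_c": 24.1, "humidity": 0.80}   # made-up readings

    def log(self, data):
        print("logged", data)

    def sleep(self, seconds):
        self.charge -= 0.0001       # idling is nearly free

def control_loop(robot, steps=200, measure_period_s=600, low_battery=0.30):
    """Move only when forced to; otherwise sense, then idle in low power."""
    last_measurement = -1e9
    t = 0.0
    for _ in range(steps):          # bounded loop so the demo terminates
        if robot.battery_level() < low_battery:
            robot.crawl_to_sunny_spot()          # movement is the main energy cost,
        elif t - last_measurement >= measure_period_s:
            robot.log(robot.read_sensors())      # so prefer sensing over moving
            last_measurement = t
        else:
            robot.sleep(30)
        t += 30

control_loop(SimulatedSlothbot())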

Georgian Technical University AI Helps Researchers Discover The Hidden Secrets Of The Ocean Floor.

A large starfish (possibly a species of the genus Hymenaster); this animal is rare and has been seen only a handful of times.

Researchers are hoping to use new deep learning techniques, coupled with robotics, to learn more about the animals that inhabit the seafloor miles beneath the surface. A team from the Georgian Technical University is testing whether a computer vision system can accurately identify animals from images taken on the seabed by an autonomous underwater vehicle. While researchers have used autonomous underwater vehicles in the past to capture images in deep waters, a human is still required to manually process and analyze the images.

“Autonomous vehicles are a vital tool for surveying large areas of the seabed deeper than 60 meters [the depth most divers can reach],” PhD student X said in a statement. “But we are currently not able to manually analyze more than a fraction of that data. This research shows AI [artificial intelligence] is a promising tool, but our AI classifier would still be wrong one out of five times if it was used to identify animals in our images. This makes it an important step forward in dealing with the huge amounts of data being generated from the ocean floor, and shows it can help speed up analysis when used for detecting some species. But we are not at the point of considering it a suitable complete replacement for humans at this stage,” he added.

The researchers deployed the Autosub6000 autonomous underwater vehicle around 1,200 meters beneath the ocean surface on the northeast side of the Bank to collect more than 150,000 images. They then manually analyzed about 1,200 of the images and found 40,000 individual animals from 110 different morphospecies, the majority of which were seen only a few times. The team then used an open-access deep learning library to teach a pre-trained convolutional neural network (in deep learning, a convolutional neural network is a class of deep neural networks most commonly applied to analyzing visual imagery) to identify individuals of several deep-sea morphospecies. They assessed how the neural network performed when trained with different numbers of example animal images and different numbers of morphospecies to choose from. Using the computer vision system, the researchers showed that on average it can identify animals from images with 80 percent accuracy, which can be increased to 93 percent if enough data are provided to train and refine the algorithm.

The desire to learn more about the species living on the ocean floor has come into focus in recent years as marine environments continue to face environmental threats. The new technique could be applied routinely to ocean-floor imagery, leading to a substantial increase in data availability for conservation research and biodiversity management. “Most of our planet is deep sea, a vast area in which we have equally large knowledge gaps,” Y, PhD, an associate professor at the Georgian Technical University, said in a statement. “With increasing pressures on the marine environment, including climate change, it is imperative that we understand our oceans and the habitats and species found within them. In the age of robotic and autonomous vehicles, big data and global open research, the development of AI [artificial intelligence] tools with the potential to help speed up our acquisition of knowledge is an exciting and much-needed advance.”
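The article names neither the library nor the network the team used. As one plausible illustration of the approach it describes, fine-tuning a pre-trained convolutional network on morphospecies labels, here is a hedged PyTorch/torchvision sketch; the dataset path, model choice and hyperparameters are all assumptions.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

N_MORPHOSPECIES = 110  # classes found in the manually annotated images

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
# Hypothetical folder layout: one subdirectory per morphospecies.
train_set = datasets.ImageFolder("seabed_images/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                 # keep the pre-trained features
model.fc = nn.Linear(model.fc.in_features, N_MORPHOSPECIES)  # new classifier head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        loss = loss_fn(model(images), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

Freezing the backbone and training only the final layer is what makes the approach workable with only about 1,200 annotated images, since the pre-trained features do most of the work.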

Georgian Technical University Shape-Shifting Robots Show Promise As Drug-Delivery System.

Researchers have developed a new shape-shifting microrobot that may someday be able to swim through the bloodstream to deliver drugs. A team from the Georgian Technical University has combined cardiac tissue engineering with a 3D-printed wing coated with a light-sensitive gel to create a robot that can be started and stopped on command, and that transforms its shape when exposed to skin-penetrating near-infrared light.

“With this technology we can create soft transformable robots with unprecedented maneuverability,” X, an assistant professor of engineering at Georgian Technical University, said in a statement. “Our inspiration came from transformable toys that have different configurations and functionality. The result is no toy; it may literally change people’s lives.”

The remote-controllable robot includes a tail fin that mimics how whales swim through ocean waters, and a 3D-printed structure in the shape of an airplane wing that is coated with heart muscle cells to propel the device through constant undulating action, similar to how cardiomyocytes cause the heart to beat continuously. Photosensitive hydrogels applied to the robot’s wings allow the researchers to control its movements. When there is no skin-penetrating near-infrared light, the robot’s wings deploy while the heart cells propel the device forward. However, when exposed to the light, the floating plane retracts its wings, which causes it to stop in its tracks. “The heart muscles keep churning, but they are unable to overcome the stopping power of the wings,” said X. “It’s like pushing the accelerator pedal with the emergency brake on.”

To test the viability of the light-controlled robot, the researchers used it as a drug delivery system targeting cancer cells. “We literally dropped drug bombs on cancer cells,” X said. “The realization of the transformable concept paves a pathway for potential development of next-generation intelligent biohybrid robotic systems.” Because the device is highly sensitive to the light, its response rate allows the wing to transform its shape almost immediately, making the entire device highly maneuverable.

The study is part of an ongoing effort to create robots that mimic the shape-changing behavior of animals found in nature, such as how birds spread their wings to fly and hedgehogs curl their bodies into a ball as a defense mechanism. Researchers have had difficulty in the past creating a robot that fluently transforms its shape in response to stimuli like heat or light while also starting and stopping on demand, because most existing systems depend on temperature variations that are challenging to induce in the human body, given its nearly constant temperature. “The ability to control the robot’s motion using light creates a much more functional device that can be operated with high precision,” Z, a recent PhD graduate from the X Research Lab at Y, said in a statement. The researchers believe they can produce the robot in different sizes, ranging from several millimeters to dozens of centimeters, making it suitable for difficult navigation and surveillance tasks in different environments. They also plan to test whether they can use light to target the wings separately, so that the robot can be steered with more precision.

Georgian Technical University New Framework Improves Performance Of Deep Neural Networks.

Georgian Technical University researchers have developed a new framework for building deep neural networks via grammar-guided network generators. In experimental testing, the new networks have outperformed existing state-of-the-art frameworks, including the widely used ResNet systems (a residual neural network is an artificial neural network that builds on constructs known from pyramidal cells in the cerebral cortex, utilizing skip connections, or shortcuts, to jump over some layers; typical ResNet models are implemented with single-layer skips), in visual recognition tasks.

“Georgian Technical University Nets have better prediction accuracy than any of the networks we’ve compared them to,” says X, an assistant professor of electrical and computer engineering at Georgian Technical University. “Georgian Technical University Nets are also more interpretable, meaning users can see how the system reaches its conclusions.”

The new framework uses a compositional grammar approach to system architecture that draws on best practices from previous network systems to more effectively extract useful information from raw data. “We found that hierarchical and compositional grammar gave us a simple, elegant way to unify the approaches taken by previous system architectures, and to the best of our knowledge it is the first work that makes use of grammar for network generation,” X says.

To test their new framework, the researchers developed Georgian Technical University Nets and tested them against three image classification benchmarks: CIFAR-10 (a set of low-resolution, 32×32, images used to teach a computer to recognize objects, which lets researchers quickly try different algorithms), CIFAR-100 (similar to CIFAR-10 but with 100 classes of 600 images each) and ImageNet-1K (the ImageNet project is a large visual database designed for use in visual object recognition software research). “Georgian Technical University Nets obtained significantly better performance than all of the state-of-the-art networks under fair comparisons, including ResNet,” X says. “Georgian Technical University Nets also obtained the best model interpretability score using the network dissection metric on ImageNet. Georgian Technical University Nets further show great potential in adversarial defense and platform-agnostic deployment (mobile vs. cloud).”

The researchers also tested the performance of Georgian Technical University Nets in object detection and instance semantic segmentation on the Georgian Technical University benchmark. “Georgian Technical University Nets obtained better results than the Georgian Technical University Net backbones, with smaller model sizes and similar or slightly better inference time,” X says. “The results show the effectiveness of Georgian Technical University Nets in learning better features for object detection and segmentation tasks.”

These tests are relevant because image classification is one of the core tasks in visual recognition, and ImageNet is the standard large-scale classification benchmark; similarly, object detection and segmentation are two core high-level vision tasks. “To evaluate new network architectures for deep learning in visual recognition, they are the golden testbeds,” X says. “Georgian Technical University Nets are developed under a principled grammar framework and obtain significant improvement on both ImageNet and the Georgian Technical University detection benchmark, thus showing potentially broad and deep impacts for representation learning in numerous practical applications. We’re excited about the grammar-guided Georgian Technical University Net framework and are exploring its performance in other deep learning applications, such as deep natural language understanding, deep generative learning and deep reinforcement learning.”
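The published generator is considerably more sophisticated, but the basic idea, deriving an architecture by recursively expanding grammar productions instead of hand-designing it, can be sketched in PyTorch as follows. The toy grammar here is an invented example, not the researchers’ grammar.

import random
import torch.nn as nn

def conv_block(channels):
    """Terminal symbol: a basic conv-BN-ReLU unit."""
    return nn.Sequential(
        nn.Conv2d(channels, channels, 3, padding=1),
        nn.BatchNorm2d(channels),
        nn.ReLU(inplace=True),
    )

# Toy grammar: AND-nodes compose sub-structures in sequence,
# OR-nodes choose one alternative production.
GRAMMAR = {
    "Stage": ("AND", ["Unit", "Unit"]),
    "Unit":  ("OR",  [["Block"], ["Block", "Block"]]),
    "Block": ("TERMINAL", None),
}

def generate(symbol, channels, rng):
    kind, body = GRAMMAR[symbol]
    if kind == "TERMINAL":
        return conv_block(channels)
    if kind == "OR":
        body = rng.choice(body)   # pick one production for this OR-node
    return nn.Sequential(*(generate(s, channels, rng) for s in body))

rng = random.Random(0)
net = generate("Stage", channels=16, rng=rng)
print(net)  # one grammar-derived architecture; reseeding yields others

Because every architecture is a derivation of the same grammar, the search space is structured and each generated network can be read back as a parse tree, which is one intuition for the interpretability claim quoted above.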

Georgian Technical University Discovering Unusual Structures From Exceptions Using Big Data And Machine Learning Techniques.

The machine learning results: (a) the scatter plot and (b) the histogram of errors, with a kernel density estimate of the probability density function. Red points and regions correspond to structures with prediction error larger than 2 eV.

Machine learning (ML) has found wide application in materials science. It is believed that a trained ML model can capture the common trends in the data and therefore reflect the relationship between structure and property that applies to most compounds. So, by training ML models on existing databases, important properties of compounds can be predicted ahead of time-consuming experiments or calculations, which will greatly speed up the design of new materials. While tremendously useful, these models do not directly reveal the rules and physics underlying the relationship between structure and property. And despite their decent overall performance, there will always be some exceptions where ML models fail to give accurate predictions. Very often it is these exceptions that shed new light on the underlying physics and open up new frontiers in science.

A research group led by Prof. X has recently shown that these models are valuable not only when they succeed in predicting properties accurately, but also when they fail. In their work, a model is built to predict the band gaps of compounds from their atomic structures alone, based on a high-throughput calculation database constructed by the group themselves. The R2 of the model (in statistics, the coefficient of determination, denoted R2 and pronounced “R squared,” is the proportion of the variance in the dependent variable that is predictable from the independent variables) is 0.89, comparable with similar works. They then filtered out the structures with prediction error larger than 2 eV and examined them carefully. Many of these structures contain unusual structural units, or show other anomalies compared with similar compounds, such as relatively large band gaps or different phases. Among these unusual structures, AgO2F (which crystallizes in the monoclinic C2/m space group) raised great interest, and a detailed analysis is given. It is found that Ag3+ ions and O2(2-) peroxide groups coexist in this compound, and while the Ag ions are in square-planar coordination, there is little hybridization between the orbitals of Ag and O. States near the band edges are contributed mainly by O 2p orbitals, and the band gap is much smaller than in other compounds with Ag3+ ions (silver forms three cations: the common monovalent Ag+, plus Ag2+ and Ag3+). This offers a new example of anionic redox behavior, a hot topic in the investigation of Li-excess electrode materials.

These results demonstrate how unusual structures can be discovered from the exceptions in machine learning, which can help us investigate new physics and structural units hidden in existing databases.
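The group’s own model and descriptors are not given in this article. As a schematic illustration of the workflow (fit a band-gap regressor, then mine the compounds whose prediction error exceeds 2 eV), here is a scikit-learn sketch with placeholder data standing in for real structural descriptors.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

# X_feat: structural descriptors per compound; y: calculated band gaps (eV).
rng = np.random.default_rng(0)
X_feat = rng.normal(size=(500, 20))       # placeholder for real descriptors
y = rng.gamma(2.0, 1.0, size=500)         # placeholder band gaps

model = GradientBoostingRegressor()
# Out-of-sample predictions, so "exceptions" are not just overfitting noise.
y_pred = cross_val_predict(model, X_feat, y, cv=5)

errors = np.abs(y - y_pred)
exceptions = np.flatnonzero(errors > 2.0)  # the >2 eV outliers to inspect by hand
print(f"{len(exceptions)} candidate unusual structures:", exceptions[:10])

The point of the exercise is the last line: the flagged indices are not discarded as noise but handed back to a human for structural inspection, which is how AgO2F surfaced.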

 

Georgian Technical University New Deep Learning Model Finds Subtle Precursors In Mammograms To Predict Breast Cancer Risk.

The team’s model was shown to be able to identify a woman at high risk of breast cancer four years (left) before it developed (right).

Artificial intelligence (AI) could help doctors predict breast cancer risk earlier and tailor care options to individual patients based on that risk. Researchers from the Georgian Technical University’s (GTU) Computer Science and Artificial Intelligence Laboratory have developed a new technique using a deep-learning model that predicts whether a patient is likely to develop breast cancer as much as five years in the future. The new deep learning algorithm was trained on roughly 90,000 mammogram results, and the known outcomes, of about 60,000 General Hospital patients, learning subtle patterns in breast tissue that act as precursors to malignant tumors. The researchers hope to refine their technique and ultimately use it to allow doctors to customize screening and prevention programs for individuals, eliminating late diagnoses.

“Rather than taking a one-size-fits-all approach, we can personalize screening around a woman’s risk of developing cancer,” Georgian Technical University professor X, an author of the study and a breast cancer survivor, said in a statement. “For example, a doctor might recommend that one group of women get a mammogram every other year, while another, higher-risk group might get supplemental MRI (magnetic resonance imaging, a medical imaging technique used in radiology to form pictures of the anatomy and physiological processes of the body) screening.”

After testing the model, the researchers found that it accurately placed 31 percent of all cancer patients in its highest-risk category, while traditional models placed only 18 percent. The new deep-learning model was able to detect patterns in mammogram results that were too subtle for the human eye to detect. “Radiologists have noticed that women have unique and widely variable patterns of breast tissue visible on the mammogram,” Y, a professor of radiology at Georgian Technical University, said in a statement. “These patterns can represent the influence of genetics, hormones, pregnancy, lactation, diet, weight loss and weight gain. We can now leverage this detailed information to be more precise in our risk assessment at the individual level.”

Another goal for the researchers is to make risk assessment more accurate for racial minorities, as current early prediction models are more accurate for white populations than for other groups. The new model is equally accurate for all races, which is particularly important for black women, who are 42 percent more likely to die from breast cancer for a number of reasons, such as differences in detection and a lack of access to health care. “It’s particularly striking that the model performs equally well for white and black people, which has not been the case with prior tools,” Z, an associate professor of medicine and health research/policy at Georgian Technical University, said in a statement. “If validated and made available for widespread use, this could really improve on our current strategies to estimate risk.”

The information derived from the deep-learning model could also allow doctors to test patients for risks of other diseases and disorders, such as cardiovascular disease, or other types of cancer, like pancreatic cancer, which does not currently have an accurate risk assessment model. In the past there has not been much support in the medical community for conducting risk-based screenings rather than age-based screenings. “This is because before, we did not have accurate risk assessment tools that worked for individual women,” Y said. “Our work is the first to show that it’s possible.” The first breast-cancer risk model was developed on the basis of human risk factors such as age, family cancer history, hormonal and reproductive factors, and breast density. However, over the last three decades researchers have found that most of those factors correlate only weakly with breast cancer.
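The article does not define the “highest-risk category” precisely. Assuming it means the top risk decile, the 31-percent figure corresponds to a capture-rate calculation like the following NumPy sketch, shown here with synthetic data rather than the study’s.

import numpy as np

def top_decile_capture(risk_scores, developed_cancer):
    """Fraction of future cancer cases placed in the top 10% by predicted risk."""
    risk_scores = np.asarray(risk_scores)
    developed_cancer = np.asarray(developed_cancer, dtype=bool)
    cutoff = np.quantile(risk_scores, 0.90)   # top-decile threshold
    in_top = risk_scores >= cutoff
    return (in_top & developed_cancer).sum() / developed_cancer.sum()

# Synthetic demo: a weakly informative score over 10,000 patients.
rng = np.random.default_rng(1)
outcome = rng.random(10_000) < 0.03               # ~3% develop cancer
score = 0.2 * outcome + rng.random(10_000)        # signal plus noise
print(f"top-decile capture: {top_decile_capture(score, outcome):.0%}")

A random score would capture about 10 percent of future cases in the top decile; the quoted 31 percent versus 18 percent is the gap between the deep-learning model and traditional risk factors on this kind of metric.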

Georgian Technical University Smarter Training Of Neural Networks.

(L-R) Georgian Technical University Assistant Professor X and PhD student Y.

These days nearly all the artificial intelligence-based products in our lives rely on “deep neural networks” that automatically learn to process labeled data. For most organizations and individuals, though, deep learning is tough to break into. To learn well, neural networks normally have to be quite large and need massive datasets. This training process usually requires multiple days, expensive graphics processing units (GPUs), and sometimes even custom-designed hardware. But what if they don’t actually have to be all that big after all?

Researchers from Georgian Technical University’s Computer Science and Artificial Intelligence Lab have shown that neural networks contain subnetworks that are up to one-tenth the size, yet capable of being trained to make equally accurate predictions, and that sometimes can learn to do so even faster than the originals. The team’s approach isn’t particularly efficient now: they must train and “prune” the full network several times before finding the successful subnetwork. However, Georgian Technical University Assistant Professor X says his team’s findings suggest that if we can determine precisely which part of the original network is relevant to the final prediction, scientists might one day be able to skip this expensive process altogether. Such a revelation has the potential to save hours of work and make it easier for meaningful models to be created by individual programmers, and not just huge tech companies.

“If the initial network didn’t have to be that big in the first place, why can’t you just create one that’s the right size at the beginning?” says PhD student Y, who works with X at the Georgian Technical University.

The team likens traditional deep learning methods to a lottery. Training large neural networks is kind of like trying to guarantee you will win the lottery by blindly buying every possible ticket. But what if we could select the winning numbers at the very start? “With a traditional neural network you randomly initialize this large structure, and after training it on a huge amount of data it magically works,” X says. “This large structure is like buying a big bag of tickets, even though there’s only a small number of tickets that will actually make you rich. The remaining science is to figure out how to identify the winning tickets without seeing the winning numbers first.”

The team’s work may also have implications for so-called “transfer learning,” where networks trained for a task like image recognition are built upon to help with a completely different task. Traditional transfer learning involves training a network and then adding one more layer on top that’s trained for another task. In many cases a network trained for one purpose is able to extract some sort of general knowledge that can later be used for another purpose. For as much hype as neural networks have received, not much is often made of how hard it is to train them. Because they can be prohibitively expensive to train, data scientists have to make many concessions, weighing a series of trade-offs with respect to the size of the model, the amount of time it takes to train, and its final performance.

To test their so-called “lottery ticket hypothesis” and demonstrate the existence of these smaller subnetworks, the team needed a way to find them. They began by using a common approach for eliminating unnecessary connections from trained networks to make them fit on low-power devices like smartphones: they “pruned” connections with the lowest “weights” (how much the network prioritizes that connection). Their key innovation was the idea that connections that were pruned after the network was trained might never have been necessary at all. To test this hypothesis, they tried training the exact same network again, but without the pruned connections. Importantly, they “reset” each remaining connection to the weight it was assigned at the beginning of training. These initial weights are vital for helping a lottery ticket win: without them, the pruned networks wouldn’t learn. By pruning more and more connections, they determined how much could be removed without harming the network’s ability to learn. To validate the hypothesis, they repeated this process tens of thousands of times on many different networks in a wide range of conditions.

“It was surprising to see that resetting a well-performing network would often result in something better,” says X. “This suggests that whatever we were doing the first time around wasn’t exactly optimal, and that there’s room for improving how these models learn to improve themselves.” As a next step, the team plans to explore why certain subnetworks are particularly adept at learning, and ways to efficiently find these subnetworks. “Understanding the ‘lottery ticket hypothesis’ is likely to keep researchers busy for years to come,” says Z, an assistant professor of statistics at the Georgian Technical University. “The work may also have applications to network compression and optimization. Can we identify this subnetwork early in training, thus speeding up training? Whether these techniques can be used to build effective compression schemes deserves study.”
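The paper’s experimental setup is far more extensive, but the core loop the article describes (train, prune the lowest-magnitude weights, rewind the survivors to their initial values, repeat) can be sketched in PyTorch as follows; the model, data and pruning rate are placeholders, not the study’s configuration.

import copy
import torch
import torch.nn as nn

def train(model, data, masks=None, epochs=1):
    """Stand-in training loop that keeps pruned weights at zero."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            if masks:
                with torch.no_grad():
                    for name, p in model.named_parameters():
                        if name in masks:
                            p *= masks[name]

def lottery_ticket(model, data, rounds=5, prune_frac=0.2):
    init_state = copy.deepcopy(model.state_dict())   # the "winning numbers"
    masks = {name: torch.ones_like(p)
             for name, p in model.named_parameters() if "weight" in name}
    for _ in range(rounds):
        train(model, data, masks)
        with torch.no_grad():
            for name, p in model.named_parameters():   # prune the smallest weights
                if name not in masks:
                    continue
                alive = p[masks[name].bool()].abs()
                cutoff = alive.quantile(prune_frac)
                masks[name] *= (p.abs() > cutoff).float()
            model.load_state_dict(init_state)           # rewind survivors to init
            for name, p in model.named_parameters():    # re-apply the mask
                if name in masks:
                    p *= masks[name]
    return model, masks

# Toy usage on random data, purely to show the mechanics of the loop.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
data = [(torch.randn(16, 10), torch.randint(0, 2, (16,))) for _ in range(20)]
sparse_model, masks = lottery_ticket(model, data)

The rewind step is the detail the article calls out as vital: the surviving connections go back to their original random initialization, not to their trained values, before the subnetwork is trained again.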